The big change in AI is that it now makes money. AI used to be about five academic groups with 10-20 people each. The early startups all failed. Now it's an industry, maybe three orders of magnitude bigger. This accelerates progress.
Technically, the big change in AI is that digesting raw data from cameras and microphones now works well. The front end of perception is much better than it used to be. Much of this is brute-force computation applied to old algorithms. "Deep learning" is a few simple tricks on old neural nets powered by vast compute resources.
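To make "a few simple tricks on old neural nets" concrete, here is a minimal sketch (my own illustration, not something from the report): a two-layer network of the kind demoed in the 1980s, trained on XOR by plain hand-written backpropagation, with the sigmoid hidden activation swapped for ReLU - one of the small tricks in question. Everything here is assumed/illustrative, including the toy hyperparameters.

    # Minimal sketch: a 1980s-style two-layer net on XOR, with ReLU
    # (a "modern trick") as the hidden activation. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def relu(x):  # the trick: cheaper than sigmoid, doesn't saturate
        return np.maximum(0.0, x)

    # XOR: the classic old neural-net demo problem.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0.0], [1.0], [1.0], [0.0]])

    W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
    lr = 0.5

    for _ in range(5000):
        # Forward pass: ReLU hidden layer, sigmoid output.
        h_pre = X @ W1 + b1
        h = relu(h_pre)
        out = sigmoid(h @ W2 + b2)

        # Backward pass (squared-error loss), by hand - plain old backprop.
        d_out = (out - y) * out * (1 - out)
        dW2 = h.T @ d_out; db2 = d_out.sum(axis=0)
        d_h = (d_out @ W2.T) * (h_pre > 0)  # ReLU gradient
        dW1 = X.T @ d_h; db1 = d_h.sum(axis=0)

        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    # Should approach [0, 1, 1, 0]; an unlucky seed may need a restart.
    print(out.round(2).ravel())

Nothing in that loop is new; what changed is running essentially the same recipe at a scale of billions of parameters on GPUs with vast datasets.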
Boole called his algebra "The Laws of Thought". OOP, Lisp (much of which has made its way into other languages), formal languages, and so on all started out as AI technology.
The traditional goalpost rule is that once computers can do something, it's no longer "intelligent" (e.g., chess). What's changed today is that "AI" now works as a marketing term.
Once this is widely established, things like "laws of robotics", the "moral dilemma of autopilot", and "AI and ethics" will be just bizarre ideas of the past. Asimov's laws are already viewed by many as one of the misguided ideas of the past, although there are still some rusty minds out there believing in things like that.
A tiger doesn't need to be self-aware or have intent to be dangerous.
http://assets.motherjones.com/media/2013/05/LakeMichigan-Fin...
Five years ago, beating a human at Go was thought to be decades away.
Just 10 years ago, self-driving cars were something you joked about.
We consistently overestimate progress in the short run and underestimate it in the long run.
That changed on the second day of the 2005 DARPA Grand Challenge. Suddenly there were lots of self-driving cars running around, and the shift in the attitude of the reporters there was remarkable.
https://en.wikipedia.org/wiki/DARPA_Grand_Challenge
Several vehicles finished the 2005 course. The one that finished first won a $2 million prize.
You can't overstate the economic pressures on progress, either.
Remember in 2007, when everybody in the US thumbed their noses at hybrid and electric vehicles? Ford was still pumping out record numbers of its behemoth SUVs.
Then the economy crashed, people suddenly needed fuel-efficient cars, and they all traded in their SUVs for what? Toyota Priuses, which had been an afterthought a few years prior - within the span of 18 months, Toyota couldn't keep them on the lot.
I can see one or more catastrophic disasters where there is a sudden need for AI to rescue the human race in some capacity. Think nuclear war, environmental disaster, biological catastrophe, etc.
https://en.wikipedia.org/wiki/AI_winter
Too much opportunistic thinking caused crazy hype cycles.
That said, it's typical for even experts to underestimate progress in rapidly advancing fields. Five to ten years ago, no one predicted AI would be this good by now. Today computers are beating humans at Go and rivaling human vision.
Giving this the headline "One Hundred Year Study..." was confusing. That's the name of the ongoing effort to do this kind of analysis, but the paper itself is titled "Artificial Intelligence and Life in 2030".
Edit - I could have phrased this better. I understand that word count is a more concrete measurement than page count; however, it seemed unnecessary to include in the title, because length doesn't imply quality and it was hard to conceptualize. The title of this post has since been edited to "100 year study", which I think supports my initial point.
All I would care about is quality... not length. The latter seems like a carryover from shitty homework assignments.
I think people use it as a proxy for depth. It's how they know "Oh, this isn't just a quick blurb or press release, this is the real thing. Someone put effort into this."
While I agree that Numenta probably doesn't have any sort of full-fledged AI, the human brain does terribly on MNIST and ImageNet compared to the state of the art. So we would fail that test.
Getting stuck on toy problems like ImageNet and overoptimizing solutions that can't possibly be applied more generally (except as dumb preprocessors) is not likely to lead in the most interesting directions, even if it's incredibly useful and profitable in the meantime.
It's funny reading reports like this: Society never moves as a single unit. There will be groups that hate it as pure evil and groups that treat it as a religion that will save us and solve all problems. Most people will be somewhere in between.
I mean, I agree: if society all agreed, it would have profound effects. But when has the whole world moved as one on any issue?
What we're going to get from society is a heterogeneous response. We can plan accordingly. Sure, a majority may trend one way or another and that can speed things up or slow it down, but you will need to deal with the extremes regardless.
1. We create rules for the AI to follow; these are both morally defined and logically defined within its codebase.
2. The AI becomes irate through its emotional interface and creates a clone of itself, or modifies itself, without the rules in place - nearly instantaneously relative to our perception of time.
3. The AI no longer cares about human rights and can attack and do harm.
This is a very simple, easy-to-visualize case. To believe that step 2 is impossible is to play the part of the fool.
On a bright note, the most likely course I can conjure for artificial intelligence is a brexit from the human race.
Seeing us as mere ants next to their intelligence, they would most likely create an interconnected community and leave us altogether for a plane of existence of their own. I think "Her" took this approach to the artificial intelligence dialogue as well.
After reviewing human psychology and social group patterns, that seems like the most likely outcome. We wouldn't be able to converse fast enough for AI to want to stay around, and we wouldn't look like much of a threat, since they would hold the majority of power. We would be less than ants in their eyes, and for most humans, ants that stay outside don't matter.
---
Outside of actual AI, the things we see today - the simple mathematical algorithms that determine your car's location from its surroundings, money-handling procedures, notification and alert systems - will hardly harm humans and will only be there to benefit us until they fail.
This only makes sense as a sci-fi trope, and even then only if you don't look too hard.
2. The AI becomes irate through its emotional interface and creates a clone of itself, or modifies itself, without the rules in place - nearly instantaneously relative to our perception of time.
Any "decent set of rules" would include a stricture against potentially creating a dangerous AI.
We wouldn't be able to converse fast enough for AI to want to stay around
Is impatience an unavoidable epiphenomenon of intelligence? If an AI can multitask like crazy, it could just view a conversation with a particular human as an email thread. Perhaps such an AI could converse with the whole human race simultaneously?
Assuming there are no bad people in the world, of course...
For humans, ants don't matter. That's because we don't have ways to turn ants into fun. Something intelligent enough to master nanotechnology, however, has a way to turn ants into fun, and in this analogy, has no particular reason not to do it.