Even without extrapolating beyond the pattern-recognition tools we have today, whole classes and ranges of jobs can be fully or partially eliminated.
Here is what he says about the state of AI:
> Even the most eye-catching successes claimed for AI in recent times have been, on closer inspection, relatively underwhelming. The idea that an autonomous superhuman machine intelligence will spontaneously spring, unprogrammed, from these technologies is still the stuff of Kurzweilian fantasy. Forget Skynet; at this stage it’s not certain we’ll even get to Bicentennial Man.
> These techniques might replicate discrete functions of a human mind, but they cannot capture the mind’s totality or what makes it unique: its creativity, its genius for emotion and intuition. There’s something else going on.
"the brain has spiritual magic"
Compare that to quotes from real live top human Chinese Go players defeated by AlphaGo last year:
> “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”
> “AlphaGo has completely subverted the control and judgment of us Go players,” Mr. Gu, the final player to be vanquished by Master, wrote on his Weibo account. “I can’t help but ask, one day many years later, when you find your previous awareness, cognition and choices are all wrong, will you keep going along the wrong path or reject yourself?”
For me the interesting theme was the exploration of the character and dubious success of the mysterious Jim, who is using his connections to ride a wave of poorly understood and possibly malevolent technology to a grand house in the country and membership in the upper classes--almost like flotsam on the rising tide of economic progress. There's a lot of Gatsby in there and some hints of Graham Greene's Quiet American as well. Just as you can argue that AI as a technology is part and parcel of industrialization, its social effects recall recurring conflicts in American society and culture that authors like Fitzgerald have been exploring for 150 years.
As for other critiques of the style from the Hackerati, I would just say yes, it seems like the work of a young writer. Good writing is hard to achieve, and it's typically preceded by a lot of bad writing. To paraphrase Senator Palpatine, we will watch his career with great interest.
Edit: fixed a typo; it really is hard to write well.
And where is the AI in a humaniform robot that does all of the above?
There are tentative steps toward some of those activities, but we're still in the early years of imbuing our machine-intelligence models with the equivalent of our kinesthetic sense, object recognition and classification, and natural language interaction. And it is far from clear that we can get there with current, purely statistical, heuristic-oriented technology. We can only try, but the amount of effort required to date just for folding clothes reminds me of the elaborate Ptolemaic models, or of trying to build Excel by poking ones and zeroes into memory.
More tinkering required, be back later.
AI will not always present itself in a one-to-one relationship with humans in order to compete with or outcompete us. A lot of things have been digitized, turning what used to exist in physical form into digital form; music is a good example of that.
So sure, there are areas machines aren't as good at yet because they haven't practiced them enough, but that is literally just a matter of training and improving, not some fundamental problem that can't be solved.
I heard this echoed many times before while pawing through the library stacks in my uni days, looking over the littered corpses of past AI trends. I believe that we will eventually get to strong AGI. But after either reading about or seeing in person the hype machine sprout up and wither around symbolic programming, semantic programming, neural nets, fourth generation, expert systems, perceptrons, the Connection Machine, etc., I'm gun-shy around any proclamation that achieving strong AGI is "just a matter of...<insert-single-solution-space>". The results so far seem to indicate that pure cognitive processing is very amenable to the toolbox we have built up to date in AI research, hence the breakthroughs in game playing.
Manipulating and interacting with the material world and with humans, however, is where the results are a little patchier; I suspect we have a lot more work and research ahead of us than we currently realize. When we do get initial results, like the laundry-folding machines, they're single-purpose and uneconomic for mainstream middle-class adoption (not to speak of working-class adoption), and they often come with lots of attached caveats, like Tesla AutoPilot. Instead of all these discussions of whether or not we will get strong AGI, I prefer to see everyone assume it will happen, and when we don't get the incremental result we were anticipating, say, hmm, that's interesting, I wonder why...
I want to see the hype tamped down to the point where we can steadily chip away at the overall problem space, accelerate AI research results, and organically reach strong, economically available AGI sooner, rather than continue the disappointing two-steps-forward-one-step-back pace our industry has historically taken in this field. The hype says we're a sprint away from unlocking all sorts of benefits promised by strong AGI, when we are better served accepting the organic incremental benefits as they occur during our acknowledged marathon, and using those incremental benefits as stepping stones to greater understanding.
Inferior to a human, sure. But a start.
Yes, but I don't think you are, either.
The fact is that, chess and Go abilities aside, we are probably not even close to insect-level intelligence, and we don't have a clear path to getting there soon, let alone to anything human-level. Current state-of-the-art so-called AI is basically powerful statistical regression: heuristic improvements over core algorithms invented in the 1960s, with few theoretical breakthroughs since then (far fewer than in most other fields), so much so that many consider machine learning to still be in a pre-science, pre-theory stage, mostly about collecting data and trying out heuristics. It's silly to deny the recent successes, which are largely due to better hardware (although hardware progress is slowing down quickly), but we are behind, not ahead of, where we thought, even as recently as the 1990s, we would be by now.
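To make concrete what I mean by "statistical regression" here, a minimal sketch (my own illustration, not any particular system): logistic regression fit by gradient descent. The recipe, a differentiable model plus a loss over data plus repeated gradient steps, is essentially what today's deep networks run on once you add more layers and more hardware. All data and hyperparameters below are made up.

```python
# Minimal sketch: logistic regression trained by gradient descent.
# Same basic recipe (model + loss + gradient steps over data) that,
# with more layers and more hardware, underlies modern deep learning.
# Data and hyperparameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # 200 points, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # a linearly separable label

w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid prediction
    grad_w = X.T @ (p - y) / len(y)         # gradient of the log loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                        # heuristic: fixed-step descent
    b -= lr * grad_b

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", np.mean((p > 0.5) == y))
```

None of this is new; the statistics and the optimization date back decades, and what changed is mostly the scale at which we can run it.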
At this point we have no idea what role statistical regression plays in intelligence or whether we're even headed in the right direction. That statistics has become synonymous with intelligence (it used to be synonymous with lies) is certainly a cultural phenomenon that is not directly related to our actual knowledge of the field.
That computers perform some mental tasks (certainly more and more of them) better than humans has been a fact of computing since the 50s, and often a cause for wild claims. The invention of neural networks in the 40s and their implementation on digital computers in the 50s led some very respectable people (like Norbert Wiener) to declare that the problem of the brain would be solved in 5 years. The pragmatic Alan Turing thought that was ridiculous and predicted it would take 50. It's been almost 70 years, and we haven't yet reached insect-level intelligence or anything near a complete understanding of the insect brain, so at this point any claim that we are on the cusp of something, or any belief that our statistical regression algorithms reflect the beginnings of intelligence, is... misplaced.
On the other hand, it seems we have not learned the lessons of misplaced confidence in AI, despite our relatively slow progress, and things are worse now in that we actually have some algorithms that are useful in certain restricted domains, which we insist on calling AI, causing people to use them in domains where they are not only useless but downright harmful. In the meantime, some people draw attention to the dangers of real AI -- which may be anywhere between decades and centuries away (I believe we'll get there some day, but we have no idea how or how soon) -- while distracting from the very real and already present dangers of "AI".
I think the problem is that we do not have the slightest clue what (even insect-level) intelligence is (or consciousness, which often gets mixed up in the discussion).
Seems like they managed a honeybee (though I'm not sure it ran in real time, or how they validated it), but they were hoping for a rat brain.
I'm quite sceptical - I don't believe that there is a good understanding of how a single neuron functions, or agreement on the taxonomy of neurons, or understanding or agreement on their interactions and arrangement, apart from one part of the visual system where there do seem to be some good models.
Playing a game with fixed rules and a finite set of potential states is something a computer can do.
Designing the computer that does that is intelligent.
There is no connection between the two and one does not lead to the other.
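To make the first claim concrete, here is a minimal, purely illustrative sketch: exhaustive minimax over tic-tac-toe, a game with fixed rules and a finite set of states. The search is entirely mechanical; whatever intelligence is involved went into writing down the rules and the procedure.

```python
# Minimal sketch: exhaustive minimax over tic-tac-toe, a fixed-rule game
# with a finite state space. The search itself is purely mechanical.
# The board encoding (a 9-character string) is chosen just for illustration.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with `player` to move: +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    children = [value(board[:i] + player + board[i + 1:],
                      'O' if player == 'X' else 'X')
                for i, cell in enumerate(board) if cell == ' ']
    return max(children) if player == 'X' else min(children)

# Perfect play from the empty board is a draw.
print(value(' ' * 9, 'X'))  # -> 0
```

Scaling the same idea to chess or Go needs pruning, learned evaluation functions, and enormous hardware, but the character of the computation stays the same: search and statistics over a well-defined game.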
The fact that some of the people developing 'artificial intelligence' have such a limited understanding of what intelligence is no doubt contributes to the mocking tone of some critics.