>I anticipate that after this passes, we can start to do the right thing — focusing on using machine learning to build things that are meaningful and realistic.
UGHHH. Why did you bury this in a long article making the 180-degree contradictory point that AI (which really means machine learning, remember?) is dumb?
This article is a disservice to its reader because it downplays a huge shift in the world that any ML practitioner should understand very well: machine learning/AI will ultimately replace a lot of what humans do, and it's moving at a faster pace than ever. I've led several small-ish projects (1-2 people, 3-6 months) that could replace dozens or even 1000s of experts in their respective fields. There are thousands of people like me, and there are more every day. The buzz may wear off, in the same way Time Magazine stopped talking about how the internet was going to take over the world after the dot-com crash, but the effects and the efforts will continue unabated.
Can you give some specifics on projects?
https://tech.iheart.com/mapping-the-world-of-music-using-mac...
Not to mention personalized recommendations, which basically aren't possible at scale without some level of ML:
https://tech.iheart.com/mapping-the-world-of-music-using-mac...
https://news.ycombinator.com/item?id=12269568
The thing to keep in mind is that every machine learning practitioner who is worth their salary is doing stuff like this. A lot of our everyday work isn't as sexy as teaching a computer Go, but it's game-changing for most industries.
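To make the "personalized recommendations at scale" point concrete, here's a minimal sketch of item-to-item recommendation via cosine similarity over learned embeddings. The vectors and artist names are made up for illustration; a real system would learn embeddings from listening data (e.g. matrix factorization or word2vec-style co-occurrence models, as in the iheart post linked above):

```python
import numpy as np

# Toy embeddings standing in for learned artist vectors
# (hypothetical names and values, purely illustrative).
embeddings = {
    "artist_a": np.array([0.9, 0.1, 0.0]),
    "artist_b": np.array([0.8, 0.2, 0.1]),
    "artist_c": np.array([0.0, 0.1, 0.9]),
}

def recommend(query, k=2):
    """Return the k items most similar to `query` by cosine similarity."""
    q = embeddings[query]
    scores = {}
    for name, vec in embeddings.items():
        if name == query:
            continue
        cos = np.dot(q, vec) / (np.linalg.norm(q) * np.linalg.norm(vec))
        scores[name] = float(cos)
    # Highest cosine similarity first.
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("artist_a", k=1))  # artist_b points in nearly the same direction
```

The unglamorous part in production is not this math but building the pipelines that keep millions of such vectors fresh and serving nearest-neighbor lookups fast.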
It criticizes Watson, but Watson was roundly debunked last year as lagging far behind IBM's marketing machine.
It trots out the "teenagers & sex" quote, which frankly is applied to every new technology. "Teenagers & sex" belongs to a very small family of cliches that everyone in tech has heard. The fact that it gets used indiscriminately makes it mean very little with each new application.
And finally, I'd like to point out the irony of someone trotting out tired thoughts to get attention while criticizing supposedly overhyped tech. To the right, we have people making exaggeratedly positive claims about tech to grab your eyes, and to the left, we have their mirror image.
I could have a flawed mental model, but I'm under the impression that the relationship is the other way around. AI is a broad field encompassing various strategies to build intelligent machines. ML is one particular strategy where large volumes (think Big Data) of training data are used to teach by example. (Which makes Deep Learning a subset of ML, where "deep" neural networks are at play.)
I was expecting some evidence and then I hit "I hope you liked this article."
In this benchmark, people or models are given a text and then asked a number of questions. The questions are realistic. See for example here: https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/S...
Models already perform as well as humans do. This is real. This is not hype.
I'm going to borrow that analogy of "teenagers perceptions of sex" - it's hilariously accurate for deep learning.
And, as someone who went through the data warehousing fad in the late 90s, there's a lot of naive belief in pouring in a lot of data and magic happening.
That said, there has been a lot of advance that, once it's happened, we just don't call it AI any longer. Route optimization (Google Maps), predictive analytics in some domains, image recognition. Yeah, a lot of it is just fuzzy pattern recognition but some of it is pretty good.
The more fundamental question IMO is how far DL can even take you. We've actually seen a lot of progress there, but we also haven't seen much forward motion in cognitive science, for example. So do we just run out of steam in some areas, like autonomous vehicles, where we think we're doing pretty well today?
I’m too scared to comment or say anything else though. It would be the equivalent of saying something negative about bitcoin on r/Bitcoin.
Amara's law seems like a common human fallacy, but it's also true that it's very hard to estimate a growing trend correctly. We have something that is obviously important and will have a profound effect, but even a small error in estimates of technological proliferation can lead to differences of 5-25 years in timing.
Another thing is the hype from outside the field. There are more people outside the field hyping it up than there are people inside hyping it. Investors, media, and marketing are a powerful force when they jump in, and they don't cool down easily.
I'm not sure where you got that impression but skepticism of AI hype gets a lot of airtime in HN threads.
That's a partisan overstatement. At best, it's how some other "we" gets funding. Yes, there's a correlation between getting funding and success, but claiming a direct link between hype and success seems disingenuous.