Comparisons between AI and crypto are horribly misguided IMO.
Is AI overhyped? Sure. However -
AI/ML is creating utility everywhere in our lives - speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision, etc. and seems to be getting embedded in more and more processes by the day.
Crypto never amounted to anything beyond a currency for black market transactions, a vehicle for speculation, and a platform for creating financial scams.
That’s exactly the hype talk that’s going to burst this bubble.
Here’s tech’s dirty little secret. Despite all the screams about automation and universal basic income… the place where job replacement would actually show up is the labor productivity numbers. If GDP stays flat or grows while the number of jobs is reduced… bingo… you’d see that number climb.
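A minimal sketch of that arithmetic, with made-up numbers (nothing here comes from real GDP or employment data):

```python
# Labor productivity ~ output / labor input.
# If GDP holds steady while automation cuts hours worked,
# the measured number has to climb. Toy numbers, not real data.

def labor_productivity(gdp: float, hours: float) -> float:
    return gdp / hours

before = labor_productivity(gdp=100.0, hours=50.0)  # 2.0 units of output per hour
after = labor_productivity(gdp=100.0, hours=40.0)   # automation trims 20% of hours
assert after > before  # 2.5 > 2.0: displacement shows up as a productivity jump
```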
Productivity has actually stayed flat or gone down over the last 15 years. Despite the fact that we now have trillion-dollar corporate behemoths. Despite the fact that we’re enabling a surveillance state Orwell couldn’t imagine. Despite the polarization we see. And teen anxiety going through the roof, along with teen/pre-teen suicides.
When you said AI (and in my view tech in general) are everywhere, I’m guessing this wasn’t what you meant…
My favorite explanation is that many new technologies end up redistributing wealth rather than creating it, which certainly tracks with both subjective and quantified growth in inequality over the same time period. However, a slightly more optimistic take is that tech is aligning production better with people's preferences, so that the same productivity enables people to live more distinct lifestyles that suit them.
[1] https://www.brookings.edu/articles/how-to-solve-the-puzzle-o...
If this is a good explanation, it raises the question of what AI might do to destroy productivity as well. If you’re constantly sexting with your AI girlfriend, who just happens to be extraordinarily adept at tapping into your sexual proclivities, maybe you won’t resolve as many support tickets as your boss was hoping.
More hypothetically, I would also expect that a world in which people spend a lot of time with screens strapped to their head, consuming an infinite stream of entertainment provided by generative AI, is not going to produce higher GDP.
No time off. No health care. Operate 24/7. No unions. No work safety concerns. No lawsuits over being unfairly fired. Control over exactly how something gets done or said.
If only the AIs would stop hallucinating or could consistently comply with policies …
That person gave a list of tech they were talking about in their comment immediately afterwards: "speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision, etc. and seems to be getting embedded in more and more processes by the day."
I'm not sure it's worth quibbling over whether we should use the term "everywhere" or "in many places"; the general point stands that it's found many different uses, and has done what effective tech does - fade into the background in many cases, just becoming part of our daily lives.
Sure, we're not seeing the off the wall predictions from the singularity crowd, but it seems to be tech that most people find broadly useful.
Translation is an interesting example: some labor has been displaced, but not nearly all, because there's still value in having human eyes carefully check the translation of high-value documents. But free translation lets regular people translate things freely - a new capability that displaced no one.
However, productivity statistics measure the output of human labor. The very cheap new translation modality is therefore completely missed by productivity measurements.
Meanwhile, there are /more/ jobs available right now, despite all of this. The US has hit a historic low in unemployment and wages are going up, leading to a decline in measured productivity. Productivity here is output per dollar of wages, which means we wring our hands in anxiety when workers start doing better...
I lack the economics knowledge to do more than parrot the response I've heard to this, so take this with the appropriate level of "hmmm":
As I understand it, the counter-claim is that the measure of GDP mostly excludes exactly the set of things that grows absurdly fast.
For example, the measure of inflation may include the cost of a smartphone in the standard basket of goods, but not the fact the GPU of a smartphone (or Apple TV) of today, operating in double precision mode, can do more than the Numerical Wind Tunnel supercomputer in 1993 costing 100 million dollars.
Or that everyone has a free encyclopaedia a hundred times the size of the Encyclopædia Britannica.
And maps which for most users are as good as Ordnance Survey, but free and worldwide, when the actual OS price for just the UK is… currently discounted to £2,818.17, from £4,025.97.
Or that getting your genome sequenced now costs a grand rather than 3 billion. Although that might not even be in the basket yet; I don't know where the actual baskets of goods get listed in most cases, and search results aren't helping — one result, on a government website, lists "health", but even digging into the spreadsheet didn't illuminate much detail there.
The UK basket of goods is here https://www.ons.gov.uk/economy/inflationandpriceindices/arti... and the various sublinks.
Maybe you design a wrench that is 1000x cheaper, faster to use, and more reliable. Well, if it makes your car-building operation 0.0001% faster, that's the impact. The details of the wrench, and how impressive it is, are irrelevant to any observer.
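That intuition is just Amdahl's law applied to the economy; a minimal sketch with made-up numbers:

```python
# Amdahl's law: overall speedup from accelerating one step.
# fraction = share of total time spent in the improved step,
# step_speedup = how much faster that step alone became.

def overall_speedup(fraction: float, step_speedup: float) -> float:
    return 1.0 / ((1.0 - fraction) + fraction / step_speedup)

# A wrench 1000x faster, where wrench-turning is 0.0001% of car-build time:
s = overall_speedup(fraction=0.000001, step_speedup=1000.0)
# s is ~1.000001: the miracle wrench is invisible in the aggregate numbers.
```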
If having your genome sequenced leads to far longer or better lives then we would see the impact in productivity. Same with everything else on the list.
My life is permeated by tech (and a big part of it is AI) and made 100 times easier. I can buy a plane ticket to another country while waiting for a subway (did people really used to spend an hour going to a special place, waiting in line, and talking to a human to buy a ticket? I still remember this). I go there and quickly navigate a city I know next to nothing about, finding something niche, like cool local cafés in the area, thanks to GPS and Google Maps. I go to a restaurant and can use Google Translate to understand the menu. I don't even need to type unfamiliar words; AI scans the image and translates it on the fly. The same Google Translate, with speech-recognition AI, helps me converse with a person when we don't share any common language. I can click a couple of buttons and video-call my mum, who lives on the other side of the world. If I need to buy something I need very rarely, I can order it online and not think about where to find a shop that sells it. Even if I don't know the right word, I can now ask ChatGPT "what do you call in German that fancy thing you mount on the ceiling and attach lights to?"
My life is _hugely_ more efficient thanks to tech and AI. Does it help me to contribute more to the abstract economic growth? I don't know, perhaps not. But I just don't care about GDP.
AI is already so ubiquitous and useful that you blindly take it for granted without even thinking.
Spanish, for example, has many examples of words that are innocuous in one dialect and profane in another.
Last year I had an important letter that had to be written in Japanese, a language I don't know. Using Google Translate for that was unthinkable, because Google Translate is pretty poor and I would have had no way of checking and correcting the translated text.
If I had to name the biggest good thing AI has done for humanity so far, it would be the ability to read internet sites in other languages like Chinese (Google sucks at it; you have to use other tools; I use an app called "tap translate screen"). Also the ability to do voice-to-text and translation at the same time on mobile devices (currently requires an online connection).
As for the rest of your comment... please don't hijack other conversations for soapboxing on the industry as a whole. Instead, submit your post and open a real conversation.
I think the contention is with AI suddenly being redefined as only referring to language models, and the view that intelligence has been solved by these models.
There has clearly been a massive marketing push to label these models as the "one true AI", both from companies and from AI influencers. This is where the echo chamber exists, and it's easy to get stuck in it.
Maybe I am wrong and we have solved intelligence. But I seriously doubt it.
1. ChatGPT was released for general-purpose use. It's not a data science team at a FAANG company or healthcare or finance enterprise using ML for a specific business need. It's there for anyone to ask it anything.
2. A design decision was made to have ChatGPT output words in "real time" instead of all-at-once after a delay. To the user, that makes it look and feel like it's consciously and actively responding to you in a way that animated ellipses do not. I never knew what it would feel like talking to an AI, but when I first used ChatGPT, I thought: this must be it.
You know very well that that's not what GP is referring to. Speech to text, natural language translation, recommendation, and computer vision are all very useful things, but also were very much real and in consumer hands long before the current hype cycle.
Generative AI is in its hype cycle. IMO the tech is overhyped to hell and back, but it will still probably yield better results than crypto; there are legitimate uses for it. But those uses need to be OK with an 80%-correct solution, which is not sufficient for all the things the LLM hypelords say it can be used for, and there is no clear path to closing that 20% gap.
Fabulous, I'll be stealing that ...
AI only does bad things to me so far: surveillance, spam, fakes, search results poisoned, twitter and reddit closed up because of it etc.
Where is my automatic captcha solver? Where is a robot that will get me to a live person in a support call? Where is a spam filter that doesn't send all useful emails to spam? Where is a filter to hide fake reviews on Amazon? To fight against Amazon's crazy product ranking system? Such useful things are nowhere on the horizon.
> Where is a filter to hide fake reviews on Amazon?
While it[1] doesn’t hide them, it can generate some insightful information about fake reviews for me.
Disclaimer: not affiliated; just a happy FakeSpot user since this year.
Until then, you’ll get a lot of “it’s a really hard problem to solve!” coupled with zero progress.
But what it already has begun to do, and will continue to do is change the way we interact with computers. The era of having a personal voice assistant that is capable, adaptable, and intuitive is VERY close and that is something that's exciting. Siri and Alexa are going to look downright primitive compared to what we'll have in the next 2-5 years and that is going to be VERY mainstream, and VERY useful for huge swaths of the population.
Crypto still hasn't proven itself to be useful in any way shape or form that isn't immediately over-shadowed by a different medium.
You’re treating it as a fact that LLMs are going to replace existing products at some unknown future date.
“In 5 years, all code will be written by AI”
“In 5 years, LLMs will replace Siri and Alexa”
“In 5 years, AI will replace [sector of jobs]”
The thing that frustrates me about these statements is that you don’t know what AI technology is going to look like in 5 years, so stop treating it like a fact. It’s possible LLMs will be useful in all of these places, but we don’t know that yet.
That’s a fact.
I also know that voice-interfaces to date have been incredibly stiff and there is ample room for improvement. I know, for a fact, that having AI enable better voice interfaces will make computing better and more accessible. I have a hard time understanding how those are hype-driven comments and/or opinions.
We do know these things for a fact. Not being able to articulate exactly which breakthroughs will be most important doesn’t make it hype.
There doesn't seem to be a rush because it makes the implementation a lot more expensive, and those things are, I suspect, not profitable products (revenue sources) to their respective companies. They are a kind of enhancement to a layer of products and services; people take them for granted now and so you can't take them away.
A smarter Google Assistant would do nothing for Google's bottom line, and in fact it would cost more money to operate.
If it's not done right, it could ruin the experience. For instance, it cannot have worse latency on common queries than the old assistant.
All I did was hold its hand; it wrote every line of code. You are living in fantasy land if you think we will be writing lines of code in 10 years.
Pontificating their nonsense around the LLM hype to the point where they don't even trust it themselves. The same thing they did with ConvNets, and they still don't trust those either, since both hallucinate, frequently.
I can guarantee you that people will not trust an AI to fly a plane end-to-end without any human pilots on board (autopilot does not count), simply because of the fundamental black-box nature of these so-called 'AI' models, which makes them untrustworthy in high-risk situations.
Stone ages. That’s not 5 years from now. That’s today.
Year of the voice assistant is getting close to year of Linux on desktop.
What you’re promising has been promised time and time again, received endless hype cycles then collapsed once people realised the limits of the technology. Yes, this time the tech is much more capable than what came before but I’m inclined to believe we’ll yet again find a limit that means we’re using it for some things but our lives still aren’t drastically changed.
Case in point, I asked Siri to change my work address. She stated that I needed to use the Contacts app to do that. This is not very helpful. The issue here is not Siri’s inability to understand what I want, it is that the Contacts app does not support this method of data input. Siri is also probably not very good at extracting structured address information from me via natural language, but the new LLMs can do this easily.
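As a sketch of what "the new LLMs can do this easily" looks like in practice: ask the model to return the fields as JSON and parse them. Everything below is hypothetical; `ask_llm` is a stand-in for whichever chat-completion API you use, stubbed here with a canned reply so the example is self-contained:

```python
import json

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion API call.
    # Stubbed with a canned reply so this sketch runs offline.
    return '{"street": "1 Infinite Loop", "city": "Cupertino", "postcode": "95014"}'

def extract_address(utterance: str) -> dict:
    prompt = (
        "Extract the work address from the user's message. Reply with only "
        "a JSON object with keys: street, city, postcode.\n"
        f"Message: {utterance}"
    )
    return json.loads(ask_llm(prompt))

address = extract_address("Change my work address to 1 Infinite Loop, Cupertino 95014")
# A structured record the Contacts app could ingest, from free-form speech.
```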
Intuitive to use? Or has intuition?
Google and Amazon have tried to sell theirs for a long time, and none were actually selling much. Amazon admitted to selling theirs at a loss. Facebook tried their own - and quickly cancelled it. Google's is in every Android device - and yet pretty much nobody uses it. Even Apple's Siri is more annoyance than help.
That something can be built doesn't mean it will sell or that people will actually want to use it. If you create a solution looking for an imaginary problem that your marketing thinks is what people want instead of a solution that solves a real existing problem, you do get a solution looking for a problem ...
Also, answering questions and communicating in natural language is the easy part of such an assistant. For the thing to be useful it must be able to actually do something too, which is incredibly difficult beyond the (closed) ecosystem of its vendor. Third-party integrations are usually driven by who pays the manufacturer for the SDK and partner contract (seen as a marketing opportunity), not by what the users actually want it to integrate with. As for one of these with an open API that anyone could integrate whatever they want with, I am not holding my breath.
I don't really see either of those things as a real possibility. Within my lifetime, anyway.
Seems like it has proven very useful for Stripe [0], Moneygram [1], TicketMaster [2], etc.
Unlike AI, which continues to consume tons of resources to burn the entire world down, with no viable, efficient methods of training, inference, or fine-tuning having emerged in the past decade of chatbot hype and gimmickry [3], crypto does not need to emit tons of CO2 to operate, thanks to alternative, greener consensus algorithms available in production today. [4]
Being 'useful' is not an excuse to destroy the planet for untrustworthy AI models that get confused over a single pixel or hallucinate in the middle of the road.
[0] https://stripe.com/gb/use-cases/crypto
[1] https://stellar.org/moneygram
[2] https://business.ticketmaster.com/business-solutions/nft-tok...
[3] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...
[4] https://consensys.net/blog/press-release/ethereum-blockchain...
Both have resulted in a bunch of hopefuls starting companies, in order to attract mountains of venture capital. Companies that will only have a loose connection to the tech that drives the hype.
Is there any insight from this observation?
Yes: be deeply skeptical of anyone claiming tech they are personally invested in is revolutionary.
For most people that's a promise for the future; besides some translation tools (which are far from perfect) there is not much.
For instance: semantic search is and was a big topic, but so far even ChatGPT is not a real answer. Stable Diffusion is very nice if you want to produce some cartoon-like graphics or some porn deepfakes, but it's just not ready for simple, common photo-editing tasks. OCR has gotten better, but there's still nothing that "by magic" turns a badly scanned piece of paper into an almost-native clean PDF, and so on.
Yes, there is much progress and potential, but not much real-world use yet.
Likewise, speech to text and language translation are more available, but they're still pretty bad. And computer vision is much better than 10 years ago, but I wouldn't bet anyone's life on it.
And yet, the hype train is still gaining momentum. Having been through more than my share of AI winters, I can feel another one coming and it's going to be bad.
I've predicted 9 of the last 1 AI winters.
As always, sounds cool. Actually do some of it and then let's talk more.
No wonder that its adoption for legal purposes didn't go anywhere.
Some of the applications crypto folks were going on about: decentralization (of course), until the IRS started taxing your crypto transactions; helping 3rd-world countries with weak currencies (partially happened); international trade (quickly became untrue and monitored by government agencies); ledgers for tech companies (they can all already build audit trails; I've yet to see many applications where companies are willing to give up control over their trust to a 3rd-party system with no scrubbing functionality; looking at you, AWS status page).
Like I said, same vibe, different applications. Until some are built, it's all just hype and conjecture. The applications we've seen work well are already accepted despite their faults (content generation, summarization, etc.)
If you are a machine-learning practitioner, you should be familiar with all of those techniques and how they are used so that you can solve practical problems with them. But if you just read about AI in the news and figure you're going to found the next great startup and make a billion off it, you'll probably start by feeding a whole bunch of data into Tensorflow and then getting useless garbage out of it.
This hype bubble is specifically about LLMs, extremely large-parameter transformers that are trained on all the data OpenAI or Google can get their hands on. And then supposedly if you ask them the right questions, you will get useful answers back. For people that put in the time and experimentation to actually find the right questions and the right applications, that will probably be true - but the hype is that this will change everything, and it most certainly will not, just in the same way that beam search is frequently useful but it definitely does not change everything.
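For anyone who hasn't met it, beam search is exactly that kind of modest, useful tool: keep only the k highest-scoring partial sequences at each decoding step. A minimal sketch over a toy scoring function (not any particular library's decoder):

```python
import math

def beam_search(vocab, score, length, beam_width=2):
    """Keep only the beam_width best partial sequences at each step."""
    beams = [((), 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(length):
        candidates = [
            (seq + (tok,), logp + score(seq, tok))
            for seq, logp in beams
            for tok in vocab
        ]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]  # prune everything outside the beam
    return beams

# Toy scorer that prefers alternating tokens.
def score(seq, tok):
    return math.log(0.9) if (not seq or seq[-1] != tok) else math.log(0.1)

best_seq, best_logp = beam_search(["a", "b"], score, length=3)[0]
# best_seq alternates, e.g. ("a", "b", "a")
```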
But slick promoters will nevertheless manage to use people's lack of knowledge to redirect billions of dollars in capital into their and their employees' pockets, the same way that slick promoters used crypto to redirect billions of dollars in capital into their and their employees' pockets.
OpenAI, which is funded by Microsoft and promoted by Microsoft account executives, creates hype as if it were open, although nothing of its, including its so-called open-source Whisper, is open. People feeding Microsoft pretend that "they" are revolutionizing the world. NVIDIA and Microsoft are making money from these large models and positioning bigger as better.
In the case of AI: both potentially quite useful (unlike crypto) and incredibly, toxically overhyped (just like crypto).
Ironically, the fact that a lot of people think "it's actually kind of useful sometimes, therefore you can't compare it to crypto/web3" is part of the engine that drives the hype.
The tech-religious overhype train is here to stay. There has never been a greater need for calm honesty.
Do you understand what a massive impact that has been? It has disrupted one of the largest industries on the planet, which is drug trafficking.
People are buying drugs on the darknet who would never buy them on the streets or want to be associated with regular drug users.
[1] https://www.unodc.org/res/wdr2021/field/WDR21_Booklet_2.pdf
"Cocaine News" with Donald Trump Jr. | The Daily Show:
https://www.youtube.com/watch?v=47yFRXZqB0g
Don Jr. Swears He’s Not on Coke—He’s Just ‘Impassioned’:
https://www.thedailybeast.com/donald-trump-jr-is-tired-of-co...
So speaking of echo chambers…
Overall economic productivity didn't shoot up in the last decade despite the dramatic progress we had in software and hardware (e.g. [1]), and it's not clear that AI/ML will dramatically change that. Yes searching pictures on my iphone by text is convenient and Netflix recommendations might be more addictive, but the path from that to ubiquitous economic prosperity, safety, and comfort (the technoutopia many here are striving for) is not clear at all. It's also not very clear if those marginal improvements are worth the substantial share of total human brainpower thrown at them.
[1] https://www.aei.org/economics/good-news-bad-news-on-us-produ...
AI is just like crypto, but better!
Not sure that's reaaaaaaaaaally going to bring people around.

Nope. The hype around AI from the AI bros is just like that of the crypto bros back then.
> AI/ML is creating utility everywhere in our lives - speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision...
Yet I guarantee that you don't trust any of their outputs for any serious application, and you need to constantly check their reliability since the output is often wrong, inaccurate, or outright nonsense. You don't trust it yourself, and that is the problem with this entire hype cycle.
On top of that, it all comes at the expense of the planet getting incinerated, with no efficient alternatives to counter the extreme waste of resources these systems consume. [0] [2]
> Crypto never amounted to anything beyond a currency for black market transaction, a vehicle for speculation, and a platform for creating financial scams.
'never'
So Moneygram, Stripe, Checkout.com, etc. using it counts as 'never amounted to anything'? If it were only for financial scams, all of them would have stopped using it a long time ago.
They simply didn't, because financial scams on a transparent public ledger are a scammer's nightmare; it sounds like a very poor platform for creating financial scams.
But maybe you need to look outside of the AI bubble and see the trillions of dollars in actual black market transactions by criminals that the banks have allowed, per the FinCEN files [1], compared to which crypto is nothing.
[0] https://www.standard.co.uk/tech/ai-chatgpt-water-usage-envir...
[1] https://www.nytimes.com/2020/09/20/business/fincen-banks-sus...
[2] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...
Neither seems to work.
no
calling something "hype" should not be a stand-in for data
Consider how power structures (e.g. nation states) may change in such a future.