Three years ago, if you had told me I could FaceTime with a robot, and it could describe the environment and have a "normal" conversation with me, I would have been in disbelief and assumed that tech was a decade or two in the future. Even the stuff that was happening 2 years ago felt unrealistic.
Astrology gives vague predictions like "you will be happy today". GPT-4o is describing actual events to you in real time.
"Rather than ship a product, companies can ship blueprints and everyone can just print stuff at their own home! Everything will be 3d printed! It's so magical!"
Just because a tech is magical today doesn't mean it will be meaningful tomorrow. Sure, 3d printing has its place (mostly in making plastic parts for things), but it's hardly the revolutionary change in consumer products that it was touted to be. Instead, it's just a hobbyist toy.
GPT-4o being able to describe actual events in real time is interesting; it remains to be seen whether that's useful.
That's mostly the thinking here. A lot of the "killer" AI tech has really boiled down to "Look, this can replace your customer support chat bot!". Everyone is rushing to figure out what we can use LLMs for (just like they did when ML was supposed to take over the world), and so far it's been niche applications to make shareholders happy.
how sure are you about that?
https://amfg.ai/industrial-applications-of-3d-printing-the-u...
how positive are you that some benefits in your life are not attributable to 3d-printing used behind the scenes for industrial processes?
> Just like they did when ML was supposed to take over the world
how sure are you that ML is not used behind the scenes to benefit your life? do you consider features like fraud-detection programs, protein-folding prediction programs, and spam filters valuable in and of themselves?
I'm sure 10 years from now, assuming LLMs don't prove me wrong, I'll make a comment about LLMs and some new hype similar to the one I just made about 3d printing, and I'll get EXACTLY this reply: "Oh yeah, well here's a niche application of LLMs that you didn't account for!"
> how positive are you that some benefits in your life are not attributable to 3d-printing used behind the scenes for industrial processes?
See where I said "in consumer products". I'm certainly not claiming that 3d printing is never used and is not useful. However, what I am saying is that it was hyped WAY beyond industrial applications.
In fact, here I am, 11 years ago, saying basically the same thing about 3d printing that I'm saying about LLMs now [1], along with people responding to me the exact same way you just did.
> how sure are you that ML is not used behind the scenes to benefit your life? do you consider features like fraud-detection programs, protein-folding prediction programs, and spam filters valuable in and of themselves?
Did I say it wasn't behind the scenes? ML absolutely has its applications; it's just not nearly as vast as the hype train would have you believe. I know, because I spent a LONG time trying to integrate ML into our company and found it simply wasn't as good as hard-and-fast programmed rules in almost all situations.
[1] https://www.reddit.com/r/technology/comments/15iju9/3d_print...
Sure, but my experience is that if you are able to optimize past some previous limitation, it legitimately does open up a whole different world of usefulness.
For example, real-time processing makes me feel like universal translators are now all the more viable.
That said, yeah, it's mostly niche applications like customer support chatbots, because the killer app is "app-to-user interface that's indistinguishable from normal human interaction". But you're underestimating just how much of the labor force is effectively just an interface between a customer and some app (like a POS). "Magical" is exactly the requirement to replace people like that.
That's the sleight of hand LLM advocates are playing right now.
"Imagine how many people are just putting data into computers! We could replace them all!"
Yet LLMs aren't "just putting data into a computer." They aren't even really user/app interfaces. They are a magic box you can give directives to and get (generally correct, but not always) answers from.
Go ahead, ask your LLM "Create an excel document with the last 30 days of the high temperatures for blank". What happens? Did it create that excel document? Why not?
LLMs don't bridge the user/app gap. They bridge the user/knowledge gap, sometimes sort of.
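That gap is easy to see in code. A toy sketch of the point (the `ask_llm` stub below is hypothetical and just returns canned text, standing in for a real model call): all a chat model can hand back is a string, so separate glue code still has to turn that string into an actual file. The model never touches the filesystem.

```python
import csv

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call: a chat model can
    # only ever return text, never a file on disk.
    return "2024-06-01,78\n2024-06-02,81\n2024-06-03,75"

# The model's answer is just a string; bridging the "user/app gap"
# means someone still writes this glue code to produce a real file.
text = ask_llm("List the last 3 days of high temperatures as CSV")
with open("temps.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "high_f"])
    for line in text.splitlines():
        date, temp = line.split(",")
        writer.writerow([date, temp])
```

The interesting part is everything outside `ask_llm`: parsing, validation, and file I/O are all conventional software, which is exactly the user/app plumbing the LLM doesn't provide on its own.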
Is that not a very meaningful thing to be able to do?
The most interesting uses of AI tools in a classroom I've seen is teachers showing students AI-generated work and asking students to critique it and fact check it, at which point the students see it for what it is.
No? Solving homework was never meaningful. Being meaningful was never the point of homework. The point was for you to solve it yourself. To learn with your human brain, such that your human brain could use those teachings to create new, meaningful knowledge.
John having 5 apples after Judy stole 3 is not interesting.
So far the biggest use case for LLMs is mass propaganda and scams. The fact that we might also get AI girlfriends out of the tech understandably doesn't seem that appealing to a lot of folks.
Understanding atomic energy gave us both emission-free energy and the atomic bomb, and you are correct that we can't necessarily predict where the path of AI will take us.
The first users of Eliza felt the same about the conversation with it.
The important point is to know that GPTs don't know or understand.
It may feel like a normal conversation, but it's a Chinese Room on steroids.
People started to ask GPTs questions and take the answers as facts because they believe it's intelligent.
>GPT-4o is also describing things that never happened.
https://www.cbsnews.com/news/half-of-people-remember-events-...
>People started to ask [entity] questions and take the answers as facts because they believe it's intelligent.
Replace that with any political influencer (Ben Shapiro, AOC, etc) and you will see the exact same argument.
People remember things that didn't happen and confidently present things they just made up as facts on a daily basis. This is because they've learned that confidently stating incorrect information is more effective than staying silent when you don't know the answer. LLMs have just learned how to act like a human.
At this point the real stochastic parrots are the people who bring up the Chinese room because it appears the most in their training data of how to respond to this situation.
Can you prove that humans are not Chinese Rooms on steroids themselves?
What hype cycle does this smell like? Because it feels different to me, but maybe I'm not thinking broadly enough. If your answer is "the blockchain" or Metaverse then I know we're experiencing these things quite differently.
A cycle where platforms and applications are rewritten to take advantage of it, and it improves the baseline of capabilities that they offer, but the end-user benefits are far more limited than predicted.
And where the power and control is concentrated in the hands of a few mega corporations.
Page after page of Wired breathlessly predicting the future. We'd shop online, date online, the world's information at our fingertips. It was going to change everything!
Silly now, of course, but people truly believed it.