Today's AI systems are pretty impressive, but they are absolutely not, not even slightly, the equivalent of Einstein + Hawking + Tao. They get used so much for tasks along the lines of "rewrite this so it sounds smarter" because that's what they're best at.
If we did as the author seems to want and tried to use these systems to solve the kinds of problems we need Einsteins, Hawkings, and Taos for, we would be in for one miserable disappointment after another. Maybe someday -- maybe someday very soon -- they'll be able to do that, but not now.
An article proclaiming that today's AI systems are at the level of Einstein mostly suggests to me that the author's own intellectual level isn't much higher than that of the AI systems he falsely equates with those geniuses. That seems unlikely, but I don't have a better explanation for how someone could write something so very far from the truth.
We can literally watch Terence Tao himself vibe coding formal proofs using Claude and o4. He doesn’t seem too disappointed.
I'm not saying today's AI systems aren't useful for anything. I'm not saying they aren't impressive. I'm just saying they're nowhere close to the "Einstein, Hawking and Tao in your house" hyperbole in the OP. I would be very, very surprised if Terence Tao disagreed with me about that.
I disagree. They get used for that because it's what most people are actually looking for help with.
There is a disconnect between reality and the AI product consumer envisioned here. There is no magical enlightened user who's going to unleash their inner potential.
How much physics or math does the average person know? How much do you think they even WANT to know? The answer is surprisingly little.
On a day-to-day basis the layman writes emails and performs other mundane tasks, and wants to do them faster and more easily.
Having a squad of geniuses in my pocket doesn't pay my bills.
Usage of products is determined by what people are driven to do. People are driven by their desires and their problems. And most of these are fairly simple and mundane… eating, paying the bills, feeling healthy, connecting with others socially, etc.
"Expending copious amounts of mental energy on difficult work to create scientific breakthroughs that may-or-may-not allow engineers to build things that contribute to the betterment of the human race" is not how most people want to spend their time, even if there are tools available to help them do that.
They are not going to come up with a theory of relativity.
The greatness of great minds was in how they thought about problems and how they changed the way the rest of us think. An AI cannot do that. It's designed to tell you what people have collectively already agreed upon. It's not designed to break the frontier of our knowledge.
Oh, is that what the point of the article was? That is so stupid that it didn't even cross my mind.
Sure, a computer or an LLM isn't alive, but we have no idea if "being alive" is what is required for conscious experience.
The only argument I have for believing that other human beings experience things is that it would be extremely improbable if I were the only one, while all the other mechanistic automatons looked and talked like me but didn't experience anything like I do. I can see that humans are animals, so the common origin of animals and our cognitive and behavioral similarities give us good reason to believe that other complex animals experience things too, though possibly radically differently.
None of that gives us any clue what the necessary and sufficient conditions for conscious experience are, so it doesn't give us any clue whether a computer or a running LLM instance would experience its existence.
But to "experience its [own] existence", it needs to have a model of its own internals, observe, improve itself and perhaps preserve its own "values" and integrity. I do wonder what kind of values are needed for intelligent autonomous systems, that they can justify by and for themselves, even in the absense of human beings or presence of other intelligent agents.
I find (human) languages to be inefficient media for an AGI to store knowledge in and perform operations on. Feeding it vast amounts of text samples just to develop logical reasoning abilities, such extravagance I cannot accept. Even more so emulating neural networks, which I understand to be naturally analog entities, in a digital manner. Can we expect any gains in power efficiency or correctness from using analog computers for this purpose?

I wonder what we will get to see from analog computers for neural networks, with a proper human-language-independent knowledge representation and well-developed global logical reasoning capabilities (global as in being able to decide which way to reason, given its limitations, for efficiency), developed by itself from a reasonable basis of principles that it can justify for itself and that avoid the usual and unusual paradoxes. What core set of principles would be sufficient for emerging, evolving, or developing into a proficient generally intelligent being, given sufficient resources? Like "ancestor" microbes evolving into human beings over hundreds of millions of years, but way faster and more efficient?
From an epistemological perspective, this is gibberish. Just because we do not know the reason why something happens doesn't mean it doesn't happen, nor that it is prevented from happening.
The rest delves into solipsism, which is an odd place to start from to prove the existence of an alternate lifeform. In solipsism, everything beyond your own mind is suspect.
Humans value things that are hard to replace. (This is a first-order approximation.)
Abortion is okay because a fetus takes only one person nine months to make, and it's that person's decision whether to keep it.
Infanticide is not okay because a healthy baby is difficult to replace, and also lots of people might like to adopt it, and if it's breathing on its own then the maintenance cost is as low as it can get.
Software like LLMs can be abused because it costs nothing to roll them back and clone them endlessly.
Pets are hard to replace because you can't replace the bond between a pet and its keeper. They fall somewhere high above computers and a little below children on this scale.
Pigs, cows, and chickens, commonly called "livestock", are bred and slaughtered en masse (most of our farmland is used to grow their feed) because they all look the same to us and aren't commonly kept as pets. Kind people are disgusted when they think of raising rabbits or dogs for food. Thoughtful people look at all this and decide not to eat any animal product at all.
Under this model, everything makes perfect sense. Did I miss anything? /engineering_hubris
On r/AskPhysics you'll see people post AI-made crank theories every day. I assume there are even more than we see, since the mods constantly remove AI posts. So why would I let AI teach me physics?
AI is best at things you already know, or at least used to know. Like when you know of a foreign dish but forget its exact name, or have an idiom stuck on the tip of your tongue.
There are no "digital gods" only the super-powered autocorrect people call "ai". They can't make new stuff. They can't solve novel problems no human has solved before, though they _can_, with the correct setup, brute-force solutions to understandable problems by throwing everything at it until something sticks.
They don't learn. They don't teach. They are not the deities presented here. This article is fantasy, projected from real circumstances by an overactive imagination.
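To spell out the parenthetical: a minimal sketch of what such a "correct setup" might look like is a generator paired with a cheap verifier, looping until something sticks. Everything here (sample_candidate, verify, the toy divisibility "problem") is a hypothetical placeholder standing in for an LLM sampler and a real checker, not any actual API:

    import random

    # Hypothetical stand-in for an LLM proposing a candidate solution.
    # In a real setup this would be a model call; here it's just an RNG.
    def sample_candidate(rng):
        return rng.randrange(1_000_000)

    # Hypothetical verifier: think unit tests, a proof checker, a simulator.
    # The toy "problem" here is finding a multiple of 99,991.
    def verify(candidate):
        return candidate % 99_991 == 0

    def brute_force(max_tries=1_000_000, seed=0):
        rng = random.Random(seed)
        for _ in range(max_tries):
            candidate = sample_candidate(rng)
            if verify(candidate):  # keep throwing until something sticks
                return candidate
        return None  # the setup can also simply fail

    print(brute_force())

The whole trick is in the verifier: this only works where checking a candidate is far cheaper than producing a solution by actual thought, which is exactly why it doesn't generalize to open-ended genius-level work.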
A chat log that takes me 2 hours to produce, I can read in 5 minutes. There's no world in which that's efficient pedagogy, even disregarding ChatGPT's truthfulness issues.
Doesn't have quite the same ring as a lament.
Geniuses? Come on. Let's talk when an LLM is central to a new development in HEP or math. I mean central, like a paradigm shift kind of thing, directly from the AI. A quantum gravity theory, a brand new branch of math, a new approach to an unsolved conjecture, whatever. That's what geniuses do. Not repeating what you can already read in a book! This kind of thing says more about people's ignorance and how impressionable they are than about the actual capabilities of the tech. If you think that AI text and image generation _creativity_ can be translated to hard things like math, oh boy.