Don't read the statement as a human dunk on LLMs, or even as philosophy.
The gap is important because of its special and devastating economic consequences. When the gap becomes truly zero, all human knowledge work is replaceable. From there, with robots, it's a short step to all work being replaceable.
What's worse, the condition is sufficient but not necessary. Just as planes can fly without flapping, the economy can be destroyed without full AGI.
I don’t know why statements like this are taken as gospel. There are plenty of economic activities that would not disappear even if an AI could do them.
Here’s one: I support certain artists because I care about their particular life story and have seen them perform live. I don’t care if an AI can replicate their music because the AI didn’t experience life.
Here’s another: roles that rest on deep experience in particular industries and valuable networks, or that derive power from the position itself. You could build a model that incorporates every single thing the US president, any president, ever said, and it still wouldn’t get you into the position of being president. Many roles are contextual, not knowledge-based.
The idea that AGI replaces all work only makes sense if you’re talking about a world with completely open, free information access. I don’t just mean in the obvious sense; I mean also “inside your head.” AI can only use data it has access to, and it’s never going to have access to everyone’s individual brain everywhere at all times.
So here’s a better prediction: markets will gradually shift to adjust to this, information will become more secretive, and attention-based entertainment economics will become a larger and larger share of the overall economy.
Yeah, but obviously no human can clear that bar either.
> Here’s another: positions that have deep experience in certain industries and have valuable networks
What stops an AGI from gaining "deep experience in an industry"? Or forming networks? There are plenty of popular bot accounts across social media already.
You can't get deep experience in any industry if there's a machine that can do the entry-level work for a fraction of the cost you can. And keep in mind that, by definition, this machine can learn to do everything you can, so it's in a much better position than you to get that deep experience you speak of.
If we get what are essentially mass-producible brains, then even if information gets more secretive as you say, with, say, 1000 machines for every person in the economy, they're in a better position than you to produce that valuable secret information.
As I said, not all types of jobs are set up this way. Pure knowledge ones, sure. But ones dependent on context are not going to have this elimination of entry-level work in the first place.
> and we get 1000 robots for every person in the economy, they're in a better position than you to produce said valuable secret information
Again, no, they aren't, because certain types of information are not merely a question of computational power.
There is a constant assumption that all knowledge is just a math problem to solve, ergo AI will eventually solve it. That isn't how information actually functions in the real world.
There’s no “gap that becomes truly zero” at which point special consequences happen. By the time we achieve AGI, the lesser forms of AI will likely have replaced a lot of human knowledge labor through the exact “brute-force” methods Chollet is trying to factor out (which is why many people are saying that doing so is unproductive).
AGI is like an event horizon: it means something, it is a real point in space, but you don't notice yourself crossing it; the curvature increases smoothly through it.