Edit: because if "AGI" doesn't mean that... then what means that and only that!?
"Agentic AI" means that.
Well, to some people, anyway. And even then, people are already arguing about what counts as agency.
That's the trouble with new tech: we have to invent words for things that were previously fiction.
I wonder, did people argue about whether "horseless carriages" were really carriages? And with "aeroplane", how many argued that "plane" suited neither the Latin nor the Greek etymology, for various reasons?
We never did rename "atoms" after we split them…
And then there's plain drift: the traditional UK Christmas food is the "mince pie", named for the filling, mincemeat. These days they're usually vegetarian and sometimes even vegan.
It's kind of a simple enough concept... it's really just something that functions on par with how we do. If you've built that, you've built AGI. If you haven't built that, you've built a very capable system, but not AGI.
"Can", but not "must". The difference between an LLM being harnessed to be a customer service agent, or a code review agent, or a garden planning agent, can be as little as the prompt.
And in any case, the point was that the concept of "completely autonomous agentic intelligence capable of operating on long-term planning horizons" is better described by "agentic AI" than by "AGI".
> It's kind of a simple enough concept... it's really just something that functions on par with how we do.
"On par with us" is binary thinking — humans aren't at the same level as each other.
The problem we have with LLMs is the "I"*, not the "G". The problem we have with AlphaGo and AlphaFold is the "G", not the ultimate performance (which is super-human, an interesting situation given AlphaFold is a mix of Transformer and Diffusion models).
For many domains, getting a degree (or passing some equivalent professional exam) is just the first step, and we have a long way to go from there to being trusted to act competently, let alone independently. Someone who started a 3-year degree just before ChatGPT was released will now be sitting their final exams, and quite a lot of LLMs operate like they have just about scraped through degrees in almost everything, which makes them wildly superhuman with the G.
The G-ness of an LLM only looks bad when compared to all of humanity collectively; they are wildly more general in their capabilities than any single one of us. There are very few humans who can even name as many languages as ChatGPT speaks, let alone speak them.
* they need too many examples; only some of that can be made up for by the speed difference that lets machines read approximately everything