It is AGI.
There's no point ranting that "oh, but there's no long-term memory, no goal-setting, no planning and such": it's easy to augment it with all of that; see AutoGPT and similar projects. Just loop it through itself and give it an interface to the external world (see the sketch below), and it could do anything. Incrementally improve the model, and you'll get to superhuman performance on all tasks.
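For concreteness, here's a minimal sketch of that loop, in the spirit of AutoGPT. call_llm and run_tool are hypothetical stand-ins, not any real API; the point is only the shape: the model's own output is fed back to it, together with tool results, until it declares the goal done.

    # Sketch of an AutoGPT-style agent loop. call_llm and run_tool are
    # hypothetical stand-ins for a real model API and a real tool interface.

    def call_llm(prompt: str) -> str:
        """Hypothetical: send `prompt` to some LLM, return its reply."""
        raise NotImplementedError

    def run_tool(action: str) -> str:
        """Hypothetical: run a tool command (search, shell, ...), return its output."""
        raise NotImplementedError

    def agent_loop(goal: str, max_steps: int = 20) -> str:
        history = f"Goal: {goal}\n"
        for _ in range(max_steps):
            reply = call_llm(history + "Next action? Say DONE: <answer> when finished.")
            if reply.startswith("DONE:"):
                return reply[len("DONE:"):].strip()
            observation = run_tool(reply)  # the interface to the external world
            # "loop it through itself": the model sees its own prior output
            history += f"Action: {reply}\nObservation: {observation}\n"
        return "step limit reached"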
People like Ben Goertzel believe that GOFAI / Symbolic AI will be revived from its AI Winter era the way deep learning was. Here's a recent quote from his Twitter: "LLMs are very powerful and will almost surely be part of the first system to achieve HLAGI. I suspect more like 20% than 80% though...."
The fact that LLMs appear so intelligent to humans is really a reflection of our inability to imagine the effects of scale. We can understand simple linear predictions as trivial calculations, but when language-based pattern discovery runs many layers deep and those patterns are combined in nontrivial (but non-intelligent) ways, we project intelligence onto the result.
An LLM is a neural network, not a true expert system [1], so it will never get rid of hallucinations. But one ability impresses me greatly: it can read a long text and produce a digest. I craved a tool like that when I was a student (a rough sketch of the idea follows the links).
[1] https://en.wikipedia.org/wiki/Expert_system
[2] https://en.wikipedia.org/wiki/Lisp_(programming_language)
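The digest trick is easy to sketch as a map-reduce over the text: split it into chunks, summarize each chunk, then summarize the summaries. call_llm is again a hypothetical stand-in for whatever LLM API is in use, and the chunk size is arbitrary.

    # Sketch of long-text digestion: summarize chunks, then combine.
    # call_llm is a hypothetical stand-in for a real LLM API.

    def call_llm(prompt: str) -> str:
        """Hypothetical: send `prompt` to some LLM, return its reply."""
        raise NotImplementedError

    def digest(text: str, chunk_chars: int = 4000) -> str:
        chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
        partials = [call_llm(f"Summarize this passage in a few sentences:\n\n{c}")
                    for c in chunks]
        return call_llm("Combine these partial summaries into one short digest:\n\n"
                        + "\n\n".join(partials))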
One could point out that humans suffer from the same problem. Hallucinations shouldn't be a barrier to declaring something AGI. Besides, saying "science advances" is roughly equivalent to saying "we've just realized we've all been hallucinating about X".
And, in the extreme, there are Turing's theorems. They mean that even a perfect ChatGPT would still not know everything (and, practically speaking, it would need time, a lot of time, before it really knew more than humans do). The classic diagonalization argument is sketched below.
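To spell out the Turing point: if a perfect halting oracle existed, you could diagonalize against it. In the sketch below, halts is the assumed (and impossible) oracle; troublemaker derives the contradiction. This isn't code you can run to completion; it's the shape of the proof.

    # The classic diagonalization behind the halting problem, as a sketch.
    # halts is the assumed perfect oracle; it cannot actually exist.

    def halts(f, x) -> bool:
        """Hypothetical oracle: True iff f(x) would halt."""
        raise NotImplementedError

    def troublemaker(f):
        if halts(f, f):      # if the oracle says f(f) halts...
            while True:      # ...loop forever instead
                pass
        # ...otherwise halt immediately

    # troublemaker(troublemaker) halts iff the oracle says it doesn't,
    # so no total, always-correct halts() can exist. Hence even a perfect
    # ChatGPT could not decide every question of this kind.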
But they can, in some scenarios, give the appearance of being close, which might be enough to be useful for some AGI purposes, whatever those might be.