OK, so where we differ is in defining AGI. To me, and I think to most people, it refers to human-level (or beyond) general intelligence. Shane Legg from DeepMind has also explicitly defined it this way, but I'm not sure where others in the industry stand.
LLMs do have a broad range of abilities, so they're not narrow AI, but clearly they're not general intelligence (or at least not human-level), else they would not be failing or struggling on things that are easy for us. General means universal - not confined to specific types of problem - not just multi-capability.
The lack of reasoning ability, especially since it is architecturally based, seems to be more than a matter of patching up corner cases that aren't handled well. Shoring up areas of weakness by increasing model size, adding targeted synthetic data, and post-training mostly just addresses static inference, much like adding more and more rules to CYC.
To make an LLM capable of reasoning, it needs to go beyond a fixed N layers of compute and support open-ended exploration, and probably replace gradient descent with a learning mechanism that can also be used at inference time. In a recent interview, John Schulman (one of the OpenAI co-founders) indicated that they hoped RL training on reasoning would improve it, but that is still going to be architecturally limited. You can learn a repertoire of reasoning templates that can be applied in gestalt fashion, but that's not the same as being able to synthesize a solution to a novel problem on the fly.
LLMs are certainly amazing, and as you say, ten years ago we would have regarded them as AI - but of course the same was true of expert systems and other techniques. We call things we don't know how to do "AI", then relabel them once we move past them to new challenges. Just as we no longer regard expert systems as AI, I doubt that in 20 years we'll regard LLMs (which in some regards are also very close to expert systems) as AI, certainly not AGI. AGI will be the technology that can replace humans in many jobs, and when we get there, LLMs will in hindsight look very limited.