Yeah, but its "exploration" answers all the reasonable objections by just extrapolating vague "smartness" (EDITED [1]). "LLMs seem smart, more data will make 'em smarter..."
If apparent intelligence were the only measure of where things are going, we could be certain GPT-5 or whatever would reach AGI. But I don't think many people believe that's the case.
The various critics of LLMs, like Gary Marcus, make the point that while LLMs increase in ability with each iteration, they remain weak in particular areas.
My favorite distinction is "query intelligence" versus "task accomplishment intelligence". Current "AI" (deep learning/transformers/etc) systems are great at query intelligence but don't seem to scale in their "task accomplishment intelligence" at the same rate. (Notice that "baby AGI" and ChatGPT+self-talk fail to produce actual task intelligence.)
[1] Edited; the original read "seemed remarkably unenlightening. Lots of generalities, on-the-one-hand-on-the-other descriptions". Actually, reading more closely, the article does raise good objections - but still doesn't answer them well, imo.