And if only scaling that context length weren't quadratic...
Indeed, if it weren't, we really might expect an AI to achieve AGI. And it might decide to do all kinds of alien things. The sky would not be the limit!
We have more than 100 trillion synapses in our brains. That's not our "parameter" count. It's the size of the thing that's getting squared at every "step". LLMs are amazing, but the next valley of disillusionment will begin when that quadratic scaling cost rears its head and we are left in breathless anticipation of something better.
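To make the "quadratic" part concrete, here's a toy sketch in plain NumPy (not any real model's code, and the n and d values are just illustrative): vanilla attention scores every one of the n tokens against every other, so the score matrix has n-squared entries, and doubling the context quadruples that work.

    import numpy as np

    def naive_attention(Q, K, V):
        # scores has shape (n, n): this is the part that scales quadratically
        # with context length n, regardless of the model's parameter count.
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    n, d = 4096, 64  # hypothetical context length and head dimension
    Q = K = V = np.random.randn(n, d).astype(np.float32)
    out = naive_attention(Q, K, V)
    print(out.shape)  # (4096, 64)
    print(f"score matrix holds {n*n:,} entries; double n and that quadruples")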
I am not as worried, I guess, as your average AI ethicist. I can hope for the best (I welcome the singularity as much as the next nerd), but quadratic isn't going to get easier without some very new kinds of computers. For those to scale to AGI on this planet, it's questionable whether they'll share the architecture we're working with now. Otherwise, I'd expect a being whose brain is a rock with lightning in it to have taken over the world long, long ago. Earth has plenty of both for something smart and energy efficient to have evolved in all these billions of years. But it didn't, and maybe that's a lesson.
That all said, these LLMs are really amazing at language. Just don't ask them to link a narrative arc to some subtle detail that appeared twice in the last three hundred pages of text. For a human it ain't a problem. But these systems need to grow a ton of new helper functionality and subsystems to hope to achieve that kind of performance. And I'll venture that kind of thing is a lower bound on the abilities of any being who would be able to savage the world with its intellect. It will have to be able to link up so, so many disparate threads to do it. It boggles our minds, which are only squaring a measly 100T dimension every tick. Ahem.