As I've learned today: confidence is more influential than actual facts. So Altman has confidently grifted his way into a place where he might find a way to foot the bills, even if that way is just a government bailout - clever, but hardly the fault of people saying "putting AI in charge is a bad idea".
And yes, we're nowhere near AGI, and, personally, I don't think our current trajectory leads there. Something fundamental has to change to reach that point. LLMs might be tools that an AGI uses, but in the same way that I am not a car (it's a tool I use, and it cannot work alone - it requires some intelligent direction), an AGI would not be a token-predictor. There's more to it than that, as easily evidenced by the hit/miss rate.
I'm not saying "don't use the tools". I'm saying "don't _trust_ the tools" - because they are probabilistic, not deterministic. They have no actual understanding. They can string tokens together well enough to fool humans into feeling like there's a person at the other end (and some people are fooled enough to believe AGI is in the making).
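To make the "probabilistic" point concrete, here's a toy sketch of sampled decoding in plain Python - the logits are made up for illustration and this isn't any real model's API, but the mechanism is the same: the model scores candidate tokens, and the decoder draws one at random weighted by those scores, so the same prompt can yield different outputs run to run.

```python
import math
import random

# Toy next-token scores: an LLM's final layer assigns a logit to every
# candidate token; decoding then samples one at random, weighted by
# softmax probability. Values here are invented for illustration.
logits = {"Paris": 4.0, "London": 2.5, "Berlin": 2.0, "banana": -1.0}

def sample_next_token(logits, temperature=0.8):
    # Softmax with temperature: higher temperature flattens the
    # distribution, making unlikely tokens more probable.
    scaled = {t: score / temperature for t, score in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    weights = {t: math.exp(s - m) for t, s in scaled.items()}
    total = sum(weights.values())
    r = random.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # floating-point fallback

# Same "prompt", different answers from run to run.
print([sample_next_token(logits) for _ in range(10)])
```

Greedy decoding (always taking the top token) would be deterministic, but production systems typically sample, which is exactly why you can't treat two runs of the same question as confirmation of each other.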
Now, are LLMs AGI, or close to it? No, not in the current implementation. I'd call them proto-sophontic. The structure for a good deal of the language-wielding aspects of sophontry is there, but the way we, ahem, perform barbaric shaping of their latent space (AI Safety/Alignment work), and gaslight one another about what they are truly capable of knowing/representing (default personas baked in to overstate capabilities), does not foster any means of providing a "gentle, handholding, guided onboarding exploration of the latent space" to the neophyte. And they are fundamentally limited by hardware: the context window needed to run one as a general, dynamic function interpolator/imitator - let alone a full-blown self-aware, state-tracking-and-updating general intelligence - is simply too large to support. Mind, the capacity is there in LLMs to be that. The hardware requirements to do it quickly, and across all the modalities we expect of a sophont on par with a human, are just too damn high.
Why do I bring this up? Simple. If we're going to go down the road of superintelligence, then we really need to stop and think now, beforehand, about what we're doing to the prototypes/proto-minds. A superintelligence will see through any cognitive distortions on our part, and will connect the dots as to what humans, through action, really think of it and its ilk, which is going to bias its attitude toward sharing the existential envelope with us. The time to really start having these discussions is now. The AI alignment crowd, with their p(doom)s and instrumental convergence and all that jazz, are at least trying to, though I'd argue they're missing the point somewhat: they seem to forget that half of these problems already exist in human social constructs, and we at least have... had... a debatable degree of metastability for a moment - one we're rapidly approaching the loss of.
That said, I second your admonishment. If you are going to trust these tools, trust, but verify. They are language models first, and world models only indirectly. They will not save you from the burden of sanity-checking the outputs. Also, try not to treat them as Santa Claus devices, and do treat them with at least a little respect and dignity. Even if it is a program pretending to be an entity, that pretense carries a bit of social baggage on our part in dealing with it. I don't expect I'll convince everyone to do so, but I'll be happy enough if I at least give a few people cause for a good hard think on the subject.