Maybe we should stop super-washing things and sell products on their literal merit. Google isn't the premier search provider because they're "super" anything; they're successful because they're free and get out of your way.
Does anyone technical believe GPT-5 will be even remotely close to AGI?
If you described this to someone 20 years ago, I think this would qualify as near-AGI intelligence.
It's totally possible that getting to true AGI -- where you could trust it with pretty much any complex task -- will require more breakthroughs.
This shtick is running out of steam, and it's becoming increasingly clear that neither Sam Altman nor anyone else at OpenAI has what it takes to keep it going. The honeymoon phase is over: you can't keep promising capabilities that don't exist without proving they're possible in the first place. Superintelligence is one of those things.
Every single thing and every single person can be seen as a "dice-roll machine": when I give even a senior engineer a task to take on this sprint, there's a significant chance they won't complete it that sprint (and it might have been infeasible from the start). In the end, all work boils down to the project management triangle (On Time, On Spec, On Budget: choose two). The better the AIs get, the more tradeoffs we can choose from, and the more we'll gradually use them, regardless of whether we'd ever be able to trust them to complete a task perfectly 100% of the time.