It's one big game of musical chairs, and everyone can hear the phonograph slowing down.
OpenAI is making these desperation plays because they've run out of hype. GPT-5 "bombed," and the wider public no longer believes AI is going to keep getting exponentially better. They're out of options for generating new hype beyond spewing ever-larger numbers into the news cycle.
AMD is making this desperation play because once the AI bubble pops, there'll be a flood of cheap, unused GPUs and GPU compute. Nobody's going to buy their new cards when you can get Nvidia's prior gen for pennies on the dollar.
- GPT-3.5: Good for finding reference terms. I could not trust anything it said, but it could help me find some general terms in fields I was unfamiliar with.
- GPT-4: Good for cached, obscure knowledge. I could generally trust the facts it stated, but none of its logic or conclusions.
- GPT-4.5: Good for reference proofs/code. I cannot trust its proofs or code, but I can get a decent outline for writing my own.
- GPT-5: Good for directed thinking. I cannot trust it to come up with the best solution on its own, but if I tell it what I'm working on, it's pretty decent at using all the tricks in its repertoire (across many fields) to get me a correct solution. I can trust its proofs or code to be about as correct as my own. My main issue is that I cannot trust it to point out confusion or to ask, "is this actually the problem we should be solving here?" My guess is that this is mostly a byproduct of shallow human feedback rather than an actual issue with intelligence (after spending a bunch of computation, it will often ask if I want to try something mildly different).
For me, GPT-5 is way more useful than the previous models, because I don't have a lot of paper-pushing problems to solve. My guess is the wider public may disagree because it's hard to tell the difference between something that's better at a task than you and something that's much better.
I used scare quotes for a reason. It didn't "bomb" in the sense of failing [insert metric]; it bombed in the sense that OpenAI needed it to generate exponentially more hype, and it just didn't. (And on a lesser level, GPT-5 was supposed to cut OpenAI's costs but has failed to do so.)
> I can trust its proofs or code to be about as correct as my own.
I have little to say about this, as I find such claims to be broadly unreproducible. GPT-5 scores better on the metrics but still has the same "classes" of faults.
AMD did this deal because it's literally offering OpenAI financing. OpenAI doesn't have the access to capital markets that AMD does, so AMD is effectively selling shares of its own stock to finance OpenAI's purchase of billions of dollars' worth of GPUs. And the trick appears to be working: the stock is up 30% today, meaning the deal has paid for itself and then some.
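A rough back-of-envelope shows why a one-day 30% pop can cover the cost of the share grant. This is a minimal sketch with placeholder figures, not AMD's actual reported numbers:

```python
# Back-of-envelope on "paid for itself": every figure below is an
# illustrative placeholder, not AMD's actual reported number.
pre_cap = 270e9        # assumed pre-announcement market cap, in dollars
shares_out = 1.6e9     # assumed shares outstanding
pop = 0.30             # the ~30% single-day move cited above

market_cap_gain = pre_cap * pop                  # ~$81B of new value
granted_shares = 160e6                           # assumed size of the share grant
post_price = (pre_cap / shares_out) * (1 + pop)  # ~$219 per share
dilution_cost = granted_shares * post_price      # ~$35B handed to OpenAI

print(f"gain ≈ ${market_cap_gain / 1e9:.0f}B, dilution ≈ ${dilution_cost / 1e9:.0f}B")
```

If the market-cap gain exceeds the dilution, the deal has, in that narrow sense, paid for itself.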
Now it seems clear that what's missing is another architectural leap like transformers, and likely several different ones. Couldn't that come from almost anywhere? What makes big tech the only potential source of that innovation?
Neurosymbolic architectures are the future, but I think LLMs have a place as orchestrators and translators from natural language -> symbolic representation. I'm working on an article that lays out a pretty strong case for a lot of this based on ~30 studies; hopefully I can tighten it up and publish soon.
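For concreteness, here's a minimal sketch of that translator pattern, assuming a stubbed LLM step (`translate_to_symbolic` is a hypothetical stand-in) and SymPy as the symbolic engine:

```python
# Sketch: LLM as translator (natural language -> symbolic form),
# symbolic engine as the actual reasoner. The LLM call is stubbed.
import sympy as sp

def translate_to_symbolic(question: str) -> str:
    # Hypothetical stand-in for an LLM call that emits a formal
    # expression; a real system would validate the model's output here.
    canned = {"What x satisfies 2x + 6 = 0?": "Eq(2*x + 6, 0)"}
    return canned[question]

def answer(question: str):
    x = sp.symbols("x")
    # Parse the LLM's symbolic output into a SymPy equation...
    eq = sp.sympify(translate_to_symbolic(question),
                    locals={"x": x, "Eq": sp.Eq})
    # ...and let the solver, not the LLM, do the reasoning step.
    return sp.solve(eq, x)

print(answer("What x satisfies 2x + 6 = 0?"))  # -> [-3]
```

The point is that the model only has to produce a well-formed expression; the correctness of the final answer comes from the solver.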
At best, they can sell their IP to BigTech, who will then commercialize it.
It's a bubble. The tricks keep working until they suddenly don't, and then all the prior tricks unwind themselves.