That doesn't appear to be what happened. But the marketing sure has a lot of people working quickly to presume so.
I would guess it's only a matter of days before that proof, or one very similar, is found in the training data, if that hasn't happened already, just as has been the case every time.
No fundamental change in how the LLM functions has been made that would lead us to expect otherwise.
Similar "discoveries" happened all the time at the dawn of the internet, which connected the dots on a lot of existing knowledge. Many people found that someone had already solved the problems they were working on. We used to be able to search the web, if you can believe that.
The LLMs are bringing that back in a different way. It's functional internet search with an uncanny language model that, sadly, obfuscates the underlying data while guessing at a summary of it (which makes it harder to tell which of its findings are valuable and which are not).
It's useful for some things, but that's not remotely what intelligence is. It doesn't literally understand.
> *if you bring a GPT5-class LLM, you can walk away with a gold medal without having any idea what you're doing.*
My money won't be on your GPT5-class business advice unless you have a really good idea what you're doing.
It takes some (a lot of) intelligence and experience to usefully operate an LLM in virtually every real-world scenario. Think about what that implies. (It implies that the LLM is not, by itself, intelligent.)