Yeah, ok. The research is interesting and warranted, but writing an article about it that leads with conclusions gathered from toy models and implies they generalise to production LLMs is useless.
We've been here before with small models. "Training on LLM outputs leads to catastrophic collapse." Every outlet led with this. But no-one read the fine print: they were testing on small toy models and feeding everything that came out back into re-training. Of course it's gonna fail. Llama 3 / phi / gpt-oss showed that you can absolutely train on synthetic datasets and get great results.
Research in this area is good and needed, mainly to understand limitations, discover whether there are scale levels where "emergent" stuff appears, and so on. But writing articles based on incipient research on tiny models is not worth the effort.
There's a mountain of reasons why this makes sense from a cost perspective, and seemingly for quality too, as the newer models train substantially more cheaply and still outperform the older ones.
Naively, this seems like it would be relevant.
You are just trotting out the tired argument that model size magically fixes the issues, rather than just improving the mirage, and so nothing can be known about models with M parameters by studying models with N < M parameters.
Given enough parameters, a miraculous threshold is reached whereby LLMs switch from interpolating to extrapolating.
Sure!
I do think that larger models will perform better, but not because they fundamentally work differently than the smaller models, and thus the idea behind TFA still stands (in my opinion).
You're conflating two very different things. Training on synthetic data once is very different from cyclically training models on their own data. It has nothing to do with model size.
> [...] cyclically training models on their own data. It has nothing to do with model size.
Of course it does. GRPO is basically "training models on their own data". You sample, you check against a known truth, you adapt the weights. Repeat. And before GRPO there was RLAIF, which showed improving scores across 3 "stages" of generate - select - re-train, with diminishing returns after 3 stages but no catastrophic collapse.
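To make that loop concrete, here's a toy sketch of the sample -> check -> adapt cycle. Everything here is invented for illustration; real GRPO updates transformer weights with a group-normalised policy-gradient objective, not a softmax over three strings:

```
# Toy sketch of a GRPO-style loop: sample a group of outputs,
# score them against a known truth, reinforce above-average samples.
import math, random

candidates = ["4", "5", "22"]          # made-up answer set for "2 + 2"
truth = "4"
logits = {c: 0.0 for c in candidates}  # stand-in for model weights

def sample_group(k):
    """Draw k answers from the current softmax policy."""
    z = sum(math.exp(v) for v in logits.values())
    weights = [math.exp(logits[c]) / z for c in candidates]
    return random.choices(candidates, weights=weights, k=k)

for step in range(50):
    group = sample_group(8)                                # 1. sample
    rewards = [1.0 if g == truth else 0.0 for g in group]  # 2. check truth
    baseline = sum(rewards) / len(rewards)                 # group baseline
    for g, r in zip(group, rewards):
        logits[g] += 0.1 * (r - baseline)                  # 3. adapt weights

print(max(logits, key=logits.get))  # "4": improved by training on its own samples
```

The point: the training data is the model's own samples, filtered by a truth check, and the policy improves rather than collapsing.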
My main point was about articles and cherry-picking catchy phrases, not criticising research. We need the research. But we also need good articles that aren't written just because negativity sells.
cheeky edit: see this thread [1]. I know slashdot has fallen a lot over the last few years, but I skimmed the root comments. Not one addresses the "toy" model problem. Everyone reads the title and reinforces their own biases. That's the main problem I was trying to address.
1 - https://slashdot.org/story/25/08/11/2253229/llms-simulated-r...
I can see how performing well on benchmarks at the expense of everything else counts as great results if that's the point of the model.
I've recently been taking a look at another paper, from 2023, and subsequent research. It has a similar finding in spirit, though not focused on "reasoning traces", but it's based on GPT-4:
https://proceedings.neurips.cc/paper_files/paper/2023/hash/d...
> In a recent pre-print paper, researchers from the University of Arizona summarize this existing work as "suggest[ing] that LLMs are not principled reasoners but rather sophisticated simulators of reasoning-like text."
What does this even mean? Let's veto the word "reasoning" here and reflect.
The LLM produces a series of outputs. Each output changes the likelihood of the next output. So it's transitioning in a very large state space.
Assume there exist some states that the activations could be in that would cause the correct output to be generated. Assume also that there is some possible path of text connecting the original input to such a success state.
The reinforcement learning objective reinforces pathways that were successful during training. If there's some intermediate calculation to do or 'inference' that could be drawn, writing out a new text that makes that explicit might be a useful step. The reinforcement learning objective is supposed to encourage the model to learn such patterns.
So what does "sophisticated simulators of reasoning-like text" even mean here? The mechanism that the model uses to transition towards the answer is to generate intermediate text. What's the complaint here?
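To make that picture concrete, here's a toy Markov-chain sketch (states and probabilities are entirely made up): the direct jump from question to correct answer is unlikely, but a path through an intermediate "worked step" state is far more likely, so an objective that rewards correct answers ends up reinforcing the habit of emitting the intermediate text.

```
# Toy state-space picture of chain-of-thought: generating intermediate
# text is modelled as visiting an extra state that makes the correct
# final state reachable. All numbers are invented.
import random

chain = {
    "question":    {"answer": 0.05, "worked_step": 0.60, "dead_end": 0.35},
    "worked_step": {"answer": 0.70, "dead_end": 0.30},
}

def rollout():
    state, path = "question", ["question"]
    while state in chain:  # "answer" and "dead_end" are terminal
        options = chain[state]
        state = random.choices(list(options), weights=list(options.values()))[0]
        path.append(state)
    return path

runs = [rollout() for _ in range(10_000)]
ok = [p for p in runs if p[-1] == "answer"]
print(f"success rate: {len(ok) / len(runs):.2f}")  # ~0.47, vs 0.05 direct
print(f"successes via worked_step: {sum('worked_step' in p for p in ok) / len(ok):.2f}")  # ~0.89
```

Nearly all successful paths pass through the intermediate state, so reinforcing success reinforces generating the intermediate text.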
It makes the same sort of sense to talk about the model "reasoning" as it does to talk about AlphaZero "valuing material" or "fighting for the center". These are shorthands for describing patterns of behaviour, but of course the model doesn't "value" anything in a strictly human way. The chess engine usually doesn't see a full line to victory, but in the games it's played, paths which transition through states with material advantage are often good -- although it depends on other factors.
So of course the chain-of-thought transition process is brittle, and it's brittle in ways that don't match human mistakes. What does it prove that there are counter-examples with irrelevant text interposed that cause the model to produce the wrong output? It shows nothing --- it's a probabilistic process. Of course some different inputs lead to different paths being taken, which may be less successful.
Yes, which makes sense, because if there's a landscape of states that the model is traversing, and there are probabilistically likely pathways between an initial state and the desired output, but there isn't a direct pathway, then training the model to generate intermediate text in order to move across that landscape so it can reach the desired output state is a good idea.
Presumably LLM companies are aware that there is (in general) no relationship between the generated intermediate text and the output. The point of the article is that by calling it a "chain of thought" rather than "essentially-meaningless intermediate text which increases the number of potential states the model can reach", users are misled into thinking that the model is reasoning. They may then make unwarranted assumptions, such as that the model could apply the same reasoning to similar problems, which is in general not true.
And Gemini has a note at the bottom about mistakes, and many people discuss this. Caveat emptor, as usual.
As for your question: ‘So what does "sophisticated simulators of reasoning-like text" even mean here?’
It means the interstitial CoT “reasoning” steps produce text that looks like reasoning but is just a rough approximation of it, given that the “reasoning” often doesn’t line up with the conclusion, the priors, or reality.
My dude, have you ever interacted with human reasoning?
Even for someone who kinda understands how the models are trained, it's surprising to me that they struggle when the symbols change. One thing computers are traditionally very good at is symbolic logic. Graph bijection. Stuff like that. So it's worrisome when they fail at it, even in this research model, which is much, much smaller than current or even older models.
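For contrast, here's why symbol changes "shouldn't" matter for a classical symbolic system: the derivation is invariant under any bijective renaming of the symbols. (A toy sketch; the rule format and names are made up.)

```
# Forward-chain simple "A implies B" rules to a fixed point, then show
# the result is unchanged under a bijective renaming of all symbols.
def follows(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in rules:
            if a in derived and b not in derived:
                derived.add(b)
                changed = True
    return derived

rules = [("raven", "black"), ("swan", "white")]
facts = {"raven", "swan"}
print(follows(facts, rules))    # derives 'black' and 'white'

rename = {"raven": "X1", "black": "X2", "swan": "X3", "white": "X4"}
rules2 = [(rename[a], rename[b]) for a, b in rules]
facts2 = {rename[f] for f in facts}
print(follows(facts2, rules2))  # same derivation, renamed: X2 and X4
```

A solver like this literally cannot tell the two encodings apart; the research model apparently can, which is the worrying part.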
What do you think the explanation might be for there being "such a market"?
It is being marketed as directly related to human reasoning.
You cannot even see the comments of people who pointed out the flaws in the study, since they are so heavily downvoted.
I have encountered this problem numerous times now. It really makes me believe that the models do not really understand the topic, not even the basics; they just try to predict the text.
One recent example was me asking the model to fix my docker-compose file. In it, there's `network: host` under the `build` section. The model kept assuming that the container would be running with the host network and kept asking me to remove it as a way to fix my issue, even though removing it wouldn't do anything for the running container, because the container runs on the `custom_net` network only. The model was obsessed with it and kept telling me to remove it until I explicitly told it that this is not, and cannot be, the issue.
```
services:
  app:
    build:
      network: host   # applies only while building the image, not at runtime
    networks:
      custom_net:
        ...
```

This is correct. There is no understanding; there aren't even concepts. It's just math, it's what we've been doing with words in computers for decades, just faster and faster. They're super useful in some areas, but they're not smart, and they don't think.
LLMs have a large knowledge base that can be spit out at a moment's notice. But they have zero insight into its contents, even when the information was asked about just a few lines before.
Most of the "intelligence" that LLMs show is just the user's ability to ask the right questions in the right way, mirrored back at them. That is why there is so much advice on how to do "proper prompting".
That, and the fact that most questions have already been asked before, as anyone who spent some time on StackOverflow back in the day realized. And memory, not reasoning, is what is needed to answer them.
LLM reasoning is brittle and not like human cognition, but it is far from zero. It has demonstrably improved to the point where it can solve complex, multi-step problems across domains. See the numerous successful benchmarks and out-of-sample evals (livebench.ai, IMO 2025, trackingai.ai IQ, matharena.ai, etc.).
I personally gained multiple months of productivity from vibe coding in 2025. If being able to correctly code a complex piece of software from a vague, single-paragraph description isn't reasoning, what else is? Btw, I don't code UIs. I code complex mathematical algorithms, some of which are found in no textbook.
> LLMs have a large knowledge base that can be spit out at a moment's notice. But they have zero insight into its contents, even when the information was asked about just a few lines before.
LLMs have excellent recall of recent information within their context window. While they lack human-like consciousness or "insight," their ability to synthesize and re-contextualize information from their vast knowledge base is a powerful capability that goes beyond simple data retrieval.
If anything, LLMs show polymath-level ability to synthesize information across domains. How do I know? I use them every day and get great mileage. It's very obvious.
> Most of the "intelligence" that LLMs show is just the user's ability to ask the right questions in the right way, mirrored back at them. That is why there is so much advice on how to do "proper prompting".
Prompting is the user interface for steering the model's intelligence. However, the model's ability to generate complex, novel, and functional outputs that far exceed the complexity of the input prompt shows that its "intelligence" is more than just a reflection of the user's query.
To summarize, I'm appalled by your statements, as a heavy daily user of SoTA LLMs for practically anything. I suspect you don't use them enough, and lack a visceral feel for the scope of their capabilities.
This was one of those infuriating things that drove so many away from SO; people jumped ship the second there was an alternative.
That, and search engines seemed to promote more recent content, so old answers sank under an ocean of blog spam.
Agreed completely, and the sentiment seems to be spreading at an ever-increasing rate. I wonder how long it will be before the bubble collapses. I was thinking maybe as long as a few years, but it might be far sooner at this rate. All it will take is one of the large AI companies coming out and publicly stating that they're no longer making meaningful gains or some other way that shows the public what's really going on behind the curtain.
I'm certain the AI hype bubble will be studied for generations as the greatest mass delusion in history (so far).
I'm willing to accept that maybe LLMs cannot invent entirely new concepts but I know for a fact that they can synthesize and merge different unfamiliar concepts in complex logical ways to deliver new capabilities. This is valuable on its own.
Then when it fails to apply the "reasoning", that's evidence the artificial expertise we humans perceived or inferred is actually some kind of illusion.
Kind of like a Chinese Room scenario: if the other end appears to talk about algebra perfectly well but just can't do it, that's evidence you might be talking to a language-lookup machine instead of one that can reason.
That doesn't follow if the model's weakness manifests on a different level, one we wouldn't attribute to rationality in a human.
For example, a human might have dyslexia, a disorder on the perceptive level. A dyslexic can understand and explain his own limitation, but that doesn't help him overcome it.
GPT-5 Thinking (Think Longer) and Opus 4.1 Extended Thinking both get it right.
Maybe this unique problem is somehow a part of synthetic training data? Or maybe it's not and the paper is wrong? Either way, we have models that are much more capable at solving unique problems today.
Why? If it’s out of domain we know it’ll fail.
To see whether LLMs adhere to logic, or whether the observed "logical" responses are rather a reproduction of patterns.
I personally enjoy this idea of isolating "logic" from "pattern" and seeing whether "logic" will manifest in LLM "thinking" in a "non-patternized" domain.
--
Also, it's never bad to give the public proof that "thinking" (like "intelligence") in an AI context isn't the same thing we think of intuitively.
--
> If it’s out of domain we know it’ll fail.
Below is a question which is out of domain, yet LLMs handle it in what appears to be a logical way.
```
Kookers are blight. And shmakers are sin. If peker is blight and sin who is he?
```
It is out of domain and it does not fail (I've put it through thinking Gemini 2.5). Now back to the article: is the observed logic intrinsic to LLMs, or is it an elaborate form of pattern? According to the article, it's a pattern.
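Worth noting: strictly speaking, the "expected" answer isn't even a valid deduction. Formalised (predicate names are mine), the premises are ∀x (Kooker(x) → Blight(x)) and ∀x (Shmaker(x) → Sin(x)) plus Blight(peker) ∧ Sin(peker); concluding Kooker(peker) ∧ Shmaker(peker) affirms the consequent. So giving the "right" answer here is itself pattern completion rather than deduction, which is consistent with the article's reading.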
“All A are B, all C are D, X is B and D, what is X?” is not outside this domain.
To me, it feels a lot like Deming's "what gets measured gets done" (with the quiet part "...oftentimes at the expense of everything else."). Of course, the quiet part is different in this case.
What is this "domain" of which you speak? Because LLMs are supposedly good for flying airplanes, mental health, snakebites, and mushroom poisoning.
If they had _succeeded_, we'd all be taking it as proof that LLMs can reason, right?
Most humans are unsophisticated simulators of reasoning-like text.
We don't have a good scientific or philosophical handle on what it actually means to "think" (let alone consciousness).
Humanity has so far been really bad at even using relative heuristics based on our own experiences to recognize, classify, and reason about entities that "think."
So it's really amusing when authors just arbitrarily side-step this whole issue and describe these systems as categorically not being real but imitating the real thing... all the while not realizing such characterizations apply to humanity as well.