It's not clear that this analogy helps distinguish what humans do from what LLMs do at all.
That is not the argument.
You aren’t thinking, you are just “generating thoughts”.
The apparent “thought process” (e.g. chain of generated thoughts) is a post hoc observation, not a causal component.
However, to successfully function in the world, we have to play along with the illusion. Fortunately, that happens quite naturally :)
Something which, oddly, seems to be in shorter supply in this forum than I'd expect.
There's lots of fingers-in-ears denial about what these models say about the (non-special) nature of human cognition.
Odd, when it seemed like common sense even pre-LLM that our brains do some cool stuff, but that it's all just probabilistic sparks shaped by reinforcement too.
Who’s to say that, amid all that processing, there isn’t also ‘reasoning’ or ‘thinking’ going on, over the top of which the output language is just a façade?