It's not clear that this analogy helps distinguish what humans do from what LLMs do at all.
That is not the argument.
You aren’t thinking, you are just “generating thoughts”.
The apparent “thought process” (e.g. chain of generated thoughts) is a post hoc observation, not a causal component.
However, to successfully function in the world, we have to play along with the illusion. Fortunately, that happens quite naturally :)
Something that oddly seems to be in shorter supply on this forum than I'd expect.
There's a lot of fingers-in-ears denial about what these models say about the (non-special) nature of human cognition.
Odd, when it seems like common sense, even pre-LLM, that our brains do some cool stuff, but that it's all just probabilistic sparks following reinforcement too.
There "seems to be" something special? Maybe from the perspective of the sensing organ, yes.
However, consider that an EEG can measure a decision impulse in the brain before you're consciously aware of making the decision (the Libet experiments). You then retrospectively frame it as self-awareness after the fact, to make sense of cause and effect.
Human self-awareness and consciousness are just an odd side effect of the fact that you are the machine doing the thinking. It seems special to you. There's no evidence that it is, and in fact, given that crows, dogs, dolphins and so on show similar (but diminished) reasoning, while it may be true we have some unique capability ... unless you want to define "special", I'm going to read "mystical" where you said "special".
You over-eager fuzzy pattern-seeker, you.
I hope we get to know everything within our lifetimes, or that we reach immortality so we have time to get to know everything. Honestly, this feels like a timeline where there's potential for it.
It feels a bit pointless to have lived without ever knowing what's behind all that.
Who’s to say that, in among all that processing, there isn’t also ‘reasoning’ or ‘thinking’ going on, over the top of which the output language is just a façade?