The term “thinking” is rather ill-defined, too bound to how we perceive our own wakeful thinking.
When conversing with LLMs, I never get the feeling that they have a solid grasp on the conversation. When you dig into topics, there is always a little too much vagueness, a slight but clear lack of coherence, continuity and awareness, a prevalence of cookie-cutter verbiage. It feels like a mind that isn’t fully “there” — and maybe not at all.
I would agree that LLMs reason (well, the reasoning models). But “thinking”? I don’t know. There is something missing.
The current crop of models does not "sleep" in any way. The associated limitations on long-term task adaptation are obvious barriers to their general utility.
> When conversing with LLMs, I never get the feeling that they have a solid grasp on the conversation. When you dig into topics, there is always a little too much vagueness, a slight but clear lack of coherence, continuity and awareness, a prevalence of cookie-cutter verbiage. It feels like a mind that isn’t fully “there” — and maybe not at all.
One of the key functions of REM sleep seems to be the ability to generalize concepts and make connections between "distant" ideas in latent space [1].
I would argue that the current crop of LLMs is overfit on recall ability, particularly on their training corpus. The inherent trade-off is that they are underfit on "conceptual" intelligence: the ability to make connections between those ideas.
As a result, you often get "thinking shaped objects", to paraphrase Janelle Shane [2]. It does feel like the primordial ooze of intelligence, but it is clear we still have several transformer-shaped breakthroughs before actual (human comparable) intelligence.
1. https://en.wikipedia.org/wiki/Why_We_Sleep
2. https://www.aiweirdness.com/
Just now in a debugging session with Claude Code:
* let me read this file...
* let me read this file...
* I think there's a caching issue with the model after dropping the module. Let me check if there's a save or reload needed after DROP MODULE. First, let me verify something:
* creates a bash/javascript script to verify its assumption
* runs the script (after review and approval)
* Aha! I found the problem! Look at the output...
How is this not thinking?

Whereas a gorilla who learns sign language to communicate, and uses that communication to achieve aims which have a direct correlation with its sense of self - that's thought in the Cogito, Ergo Sum sense of the word.
Thought as commonly conceived by the layman is a sort of isolated phenomenon that is mechanical in nature and can be judged by its outward effects; whereas in the philosophical tradition, defining thought is known to be one of the hard questions, for its mysterious qualia of being interconnected with will and being, as described above.
Guess I gave you the long answer. (Though, really, it could be much longer than this.) The Turing Test touches on this distinction between the appearance of thought and actual thought.
The question goes all the way down to metaphysics; some (such as myself) would say that one must be able to define awareness (what some call consciousness - though I think that term is too loaded) before you can define thought. In fact, that is at the heart of the western philosophical tradition, and consensus remains elusive after all these thousands of years.
Something doesn't need to learn to think. I think all the time without learning.
There's also an argument for machines already starting to crack learning with literal reinforcement training and feedback loops.
Your language game was when you said "the cognition ends...", as cognition is just a synonym for thinking. "The thinking ends when the inference cycle ends. It's not thinking" becomes a clear contradiction.
As for "the process by which it does that is wholly unrelated": buddy, it's modelled on human neuron behaviour. That's how we got this generative AI breakthrough. We've replicated human mental cognition as closely as we can with current technology, and the output bears a striking resemblance to our own generative capabilities (thoughts).
Happy to admit it's not identical, but it's damn well inside the definition of thinking, and may also cover learning. It may be better to take a second look at human thinking and wonder if it's as cryptic and deep as we thought ten or twenty years ago.
It updates your models for the next morning, which is why the answer is there when it wasn’t before.
“Let me think about this.” “I have to think on it.”
My brain regulates all sorts of processes unconsciously, like breathing, for example. I don’t treat those as “thinking,” so I don’t know why other unconscious brain activity would be either.
"Thinking" to me is very much NOT just conscious reasoning. So much of what I think is not done consciously.
Indeed "let me think about it" is often simply giving my brain time to "sit on it", for another expression - only after which will I have enough mind time on the various alternatives for a worthwhile conscious decision.
The continuity is currently an illusion.
Much like speaking to a less experienced colleague, no?
They say things that contain the right ideas, but arrange it unconvincingly. Still useful to have though.
Yes I would.