Humans don't require input to, say, decide to go for a walk.
What's missing in the LLM is volition.
That's impossible to falsify, since humans are continuously receiving inputs from both external and internal sensors.
> What's missing in the LLM is volition.
What's missing is embodiment, or, at least, a continuous loop feeding a wide variety of inputs about the state of the world. Given that, plus info about a set of tools by which it can act in the world, I have no doubt that current LLMs would exhibit some kind of volitional-seeming action (possibly not desirable or coherent, from a human POV, at least without a whole lot of prompt engineering).
Temperature changes, visual stimulus, auditory stimulus, body cues, random thoughts firing, etc. Those are all going on all the time.
I don't choose to think random thoughts; they just appear.
That's different from the thoughts I consciously choose to think and engage with.
From my subjective perspective it is an input into my field of awareness.
But again, this doesn't seem to be the same thing as thinking. If I could only reply to you when you send me a message, but could reason through any problem we discuss just like the "able to want a walk" me could, would that mean I could no longer think? I think these are different issues.
On that though, these seem trivially solvable with loops and a bit of memory to write to and read from - would that really make the difference for you? A box set up to run continuously like this would be thinking?
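To make the "loops and a bit of memory" idea concrete, here's a minimal sketch of that kind of continuous loop. The `query_model` function is a hypothetical stub standing in for a real LLM call; everything else (the rolling memory, the step count) is just illustrative, not any particular system's design:

```python
from collections import deque

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for an actual LLM API call.
    return f"thought about: {prompt[-40:]}"

def continuous_loop(inputs, memory_size=5, steps=10):
    """Run the model 'continuously': each step reads recent memory plus
    any newly arrived input, produces an output, and writes that output
    back into memory for the next step to read."""
    memory = deque(maxlen=memory_size)   # small rolling memory to read/write
    transcript = []
    for _ in range(steps):
        new_input = next(inputs, "")     # may be empty: no external message
        context = " | ".join(memory)     # read from memory
        output = query_model(f"{context} >> {new_input}")
        memory.append(output)            # write back to memory
        transcript.append(output)
    return transcript

# The loop keeps running even after external inputs stop arriving.
log = continuous_loop(iter(["it's cold", "", "a dog barks"]), steps=4)
```

The point of the sketch is just that nothing stops between messages: the loop keeps stepping on its own internal state whether or not a new input shows up, which is the property the "box set up to run continuously" question is about.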
So of course it doesn't do everything a human does, but it still can do some aspects of mental processes.
Whether "thinking" means "everything a human brain does" or a specific cognitive process that humans perform is a matter of definition.
I'd argue that defining "thinking" independently of "volition" is useful, because it lets us break things down into parts and understand them.
Very much a subject of contention.
How do you even know you're awake, without any input?