So the makers proudly say
Will optimize its program
In an almost human way.
And truly, the resemblance
Is uncomfortably strong:
It isn't merely thinking,
It is even thinking wrong.
Piet Hein wrote that in reference to the first operator-free elevators, more than 70 years ago.
What you call hallucination, I call misremembering. Humans do it too. LLM failure modes are strikingly similar to human ones: making things up, being tricked into doing things they shouldn't, even getting mad at their interlocutors. Indeed, they're not merely thinking; they're even thinking wrong.