> how do you define intelligence, and on what basis are you claiming an LLM doesn't have it? Ultimately, it's a question of definitions, and I suspect it'd actually be quite difficult to give a rigorous (non-handwavy) definition of intelligence that an LLM can't satisfy. This may partly be an indictment of our understanding of intelligence.
In my opinion there's nothing wrong with the traditional definition, which is "the ability to acquire and apply knowledge and skills". But to keep the hand-waviness to a minimum, you additionally have to define 'acquire', 'apply', and 'skills'. My personal definition is that acquiring knowledge requires building some sort of internal semantic model of it, and there is actually evidence of that occurring in LLMs (see "abliteration", where a behavior like refusal can be removed by erasing a single direction in the model's internal activations, which implies the concept is represented somewhere inside), so that's one out of three so far. But it falls apart at 'apply'. How do we even define applying? Well, I don't define it as what LLMs do, which is predicting the next token of the data.
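For the curious, here's a minimal sketch of the idea behind abliteration, assuming you've already collected residual-stream activations from prompts that do and don't trigger the behavior (the shapes and data below are stand-ins, not a real pipeline):

```python
import numpy as np

# Stand-in activations; in practice these would be residual-stream vectors
# captured from the model on behavior-triggering vs. neutral prompts.
triggering_acts = np.random.randn(128, 512)  # (n_prompts, d_model)
neutral_acts = np.random.randn(128, 512)

# The concept "direction" is the normalized difference of mean activations.
direction = triggering_acts.mean(axis=0) - neutral_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

def ablate(W: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Remove the component along d from each row of weight matrix W,
    so the model can no longer write that direction into its activations."""
    return W - np.outer(W @ d, d)
```

The striking part is that a one-dimensional edit like this reliably changes behavior, which is hard to explain unless the model has some internal representation of the concept.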
I, personally, apply my knowledge by recognizing where it may be applicable, bringing it to mind, and then using it to construct ideas or strategies that I can act on. There's a degree of separation here between thought and action that doesn't currently seem to exist in LLMs. Some creators are trying to simulate it: by having one LLM for thoughts and another LLM for actions, by letting the thoughts call tools that perform actions, or by having the LLM think before acting as in DeepSeek R1 (see the sketch below), but that isn't quite it.
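As a rough illustration of that "think, then act" pattern (everything here is hypothetical; `llm` stands in for whatever model call you're using, and the prompts are made up):

```python
def llm(prompt: str) -> str:
    # Stand-in for a real chat/completion API call.
    raise NotImplementedError

def think_then_act(task: str) -> str:
    # Phase 1: a "thinking" pass produces private reasoning,
    # analogous to DeepSeek R1's <think>...</think> output.
    thoughts = llm(f"Think step by step about how to approach this:\n{task}")
    # Phase 2: a separate pass (possibly a different model entirely)
    # acts on that reasoning to produce the actual output.
    return llm(f"Task: {task}\n\nPlan:\n{thoughts}\n\nNow carry out the task.")
```

The separation is there syntactically, but the "thoughts" are still just more next-token prediction feeding into more next-token prediction, which is why I say it isn't quite it.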
An LLM still doesn't demonstrate, say, spatial reasoning when it's helping me write something like a battle scene in a story. I have spatial reasoning because I can literally see what is happening while I write; I can see, and feel, and hear, and everything. Maybe that's just my dissociative disorder, but I'll continue to await the day when LLMs can do something like that. Until they can have that essentially happening "in their head", reason about it, and write using it, I won't believe that LLMs can "apply" much of anything just yet. (Other than machine learning, I guess.)
> If we use traditional ways of measuring intelligence, like IQ tests, then LLMs beat the average human.
I think the whole notion of IQ is flawed because of neurodivergence. To put it in vaguely ableist-sounding terms (I don't mean it that way, but it will always sound that way): LLMs right now feel too neurotypical (pattern-based), and I'd like to see future LLM development produce models that lean closer to autistic (logic-based) cognition.