But if I had to guess, I believe they'd argue that an LLM is basically all a priori knowledge. It is trained on a massive data set, and all it can do once trained is reason from those initial axioms (they aren't really axioms, but whatever). Humans, by contrast, along with many other animals to a lesser extent, can make observations, challenge existing assumptions, generalize to solve novel problems, and so on.
That's not exactly my definition of intelligence, but that might be what they were going for.
Still, if we look at it from another angle, human thinking is not fundamentally different from what LLMs do, in that both rely on existing material to create new ideas.
The main difference is that LLMs process text statistically, while humans interpret text in context, influenced by emotions, experiences, biases, and goals. LLMs' interpretation is probabilistic, not conceptual.
Additionally, revolutionary thinking often requires rejecting past ideas and forming new conceptual frameworks, but LLMs cannot reject their prior data; they are bound by it.
At any rate, the question remains: are LLMs capable of revolutionary ideas the way humans are?
In that sense, my dog is far more intelligent than an LLM, because he has a mental model of his world. An LLM is only intelligent relative to a human actor, and so it is no different from any other technology that humans have created to pursue their own ends.