> If LLMs were capable of understanding, they wouldn't be so easy to trick on novel problems.
Got it, so an LLM only understands my words if it demonstrates full mastery of every new problem domain within a few thousand milliseconds of that problem first being posed in the history of the world.
Thanks for letting me know what it means to understand words; here I was thinking it meant translating them into the concepts the speaker intended.
Neat party trick, holding a perfect map of all semantic structures and using it to trick users into giving it what it wants through simple natural-language conversation, all without understanding the language at all.