They have a very complex, high-dimensional "probability table" (more precisely, a compressed geometric representation of token relationships) that they use to string together tokens. The tokens themselves have no semantic meaning; they only get converted to words that have semantic meaning to US, not to the machine.
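To make that framing concrete, here's a minimal sketch (toy vocabulary and made-up logits, not real model output): the "table" is never stored anywhere; at each step a distribution over the vocabulary is computed from the model's weights, and only the final ID-to-string mapping means anything to a human reader.

```python
import math

# Toy vocabulary: token IDs carry no meaning to the model;
# the mapping to human-readable strings exists for us, not for it.
vocab = {0: "the", 1: "cat", 2: "sat", 3: "mat", 4: "."}

def softmax(logits):
    # Standard numerically-stable softmax over raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits a trained model might emit for the next token
# given some context -- stand-in numbers for illustration only.
logits = [0.2, 2.1, 1.4, 0.3, -0.5]
probs = softmax(logits)

# The "probability table" is recomputed per step from the compressed
# geometric representation (the weights); here we just pick the argmax.
next_id = max(range(len(probs)), key=probs.__getitem__)
print(vocab[next_id], round(probs[next_id], 3))  # -> cat 0.489
```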
The presence or absence of understanding can't be proven by mere association with a "probability table", especially when such a table is exactly what physics would lead us to expect, and when the models have steadily improved by being trained directly on human expressions!
The same goes for humans. Most awards go to novel research built on pre-existing work. That is something an LLM is capable of doing.
You're focusing too closely; abstract up a level. Your point concerns how the "micro" system functions, not the wider "macro" result (think emergent capabilities).