No, but we know how it works and it is just a stochastic parrot. There is no magic in there.
What is more surprising to me is that humans are so predictable that a glorified autocomplete works this well. Then again, it's not that surprising...
But even if we knew how something works (which in the present case we don't), that shouldn't diminish our opinion of it. Would you have a lesser opinion of human intelligence once we figured out how it works?
We do have proofs that hallucination will always be a problem. We have proofs that the "reasoning" of models that "think" is actually regurgitation of human explanations written out. When asked to do truly novel things, the models fail. When asked to do high-precision things, the models fail. When asked to do high-accuracy things, the models fail.
LLMs don't understand. They are search engines. We are experience engines, and philosophically, we don't have a way to tokenize experience, we can only tokenize its description. So while LLMs can juggle descriptions all day long, these algorithms do so disconnected from the underlying experiences required for understanding.
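To make that point concrete, here's a minimal sketch using OpenAI's tiktoken library (any tokenizer would make the same point). The string and its token IDs are all the model ever sees; the experience the string describes never enters the pipeline.

```python
import tiktoken  # pip install tiktoken

# Tokenizers map text to integer IDs; cl100k_base is one such encoding.
enc = tiktoken.get_encoding("cl100k_base")

ids = enc.encode("the smell of rain on hot asphalt")
print(ids)             # a list of integers standing for pieces of the *description*
print(enc.decode(ids)) # round-trips back to the description, and only that

# What gets tokenized is the description of an experience,
# not the experience itself.
```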
1. Multi-step reasoning with backtracking, which emerged when DeepSeek R1 was trained via GRPO (see the sketch after this list).
2. Translation of languages they haven't even seen via in-context learning.
3. Arithmetic: heavily correlated with model size, but it does appear.
I could go on.
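On point 1, here's a rough illustration of GRPO's core trick, based on the published description in the DeepSeekMath/R1 papers (a sketch of the idea, not their implementation): each sampled response is scored against its own group's statistics, so no separate value model is needed.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    # GRPO samples a *group* of responses per prompt and normalizes each
    # response's reward by the group's mean and std, in place of a learned
    # value function; these advantages then feed a PPO-style policy update.
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Hypothetical group of 4 sampled answers to one math prompt,
# rewarded 1.0 if the final answer verified, else 0.0.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
# ~ [ 1. -1. -1.  1.]: answers above the group mean get positive advantage
```

Responses that beat their group average get pushed up and the rest get pushed down; the claimed emergence is that backtracking behavior shows up under that pressure without being explicitly supervised.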
Granted, it's not an LLM but a deep learning model trained via RL; still, would you say that AlphaGo's move 37 also doesn't count as emergence and that the model has no understanding of Go?
LLMs are cool, and a lot of people find them useful. Hype bros are full of crap and there's no point arguing with them, because it's always a pointless discussion. With crypto and NFTs it's future predictions, which are inherently impossible to reason about; with AI it's partially that, and partially the whole "do they have human properties" thing, which is equally impossible to reason about.
It gets discussed to death every single day.
We do know how LLMs work, correct? We also know what they are capable of and what they are not (of course this line is often blurred by hype).
I am not an expert at all on LLMs or neuroscience. But it is apparent that having a discussion with a human vs. with an LLM is a completely different ballgame. I am not saying that we will never have technology that can "understand" and "think" like a human does. I am just saying, this is not it.
Also, just because a lot of progress has been made in LLMs over the past 5 years doesn't mean we can simply extrapolate that progress into the future. Local maxima and technology limits are a thing.
NO! We have working training algorithms. We still don't have a complete understanding of why deep learning works in practice, and especially not why it works at the current level of scale. If you disagree, please cite me the papers because I'd love to read them.
To put it another way: just because you can breed dogs doesn't necessarily mean you have a working theory of genes, or even that you know they exist. That was actually the human condition for most of history.