I think the real magic here comes from the fact that LLMs are a specialized sort of neural network, and that neural networks are universal approximators [0]. In other words, LLMs are general learners because they are neural networks.
This is also not particularly profound, except that there are mathematical proofs of the universal approximation theorem that give us insight into why it must be so.
---
[0]: https://en.wikipedia.org/wiki/Universal_approximation_theore...
Before transformers we built different neural network architectures for each domain. These architectures offered better inductive biases for their respective domains and thus traded off some of the expressivity for better learnability and generalization.
Nowadays the best architectures seem to be merging towards transformers. They appear to offer more generally useful inductive biases and thus a better trade-off between the three ingredients than the earlier architectures.
I'm not saying that if your goal is to come up with a usable general learning algorithm that it is just "as simple as neural network and done." What I'm saying is the converse: that the general learning capabilities of LLMs are most likely explained by the fact that, well, they are general learners, via the universal approximation theorem.
Your other comment, I think, suggests why we're just now starting to see more general learning capabilities out of neural networks, when the theory says that a single hidden layer is enough: with a single hidden layer, you really need to get all the weights pretty close to "right" to see general learning/universal approximator behavior. When you have more than one hidden layer, then some of your weights can be wrong, as long as the errors are corrected in later layers.
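For intuition, here's a minimal constructive sketch of the single-hidden-layer case: pairs of steep sigmoids form "bumps" over small sub-intervals, and summing weighted bumps approximates a target function. All the constants and the target function here are my own toy choices, nothing to do with an actual LLM:

```python
import math

def sigmoid(x):
    # numerically stable logistic function
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def approximate(f, n_units=200, steepness=2000.0):
    """One-hidden-layer approximation of f on [0, 1].

    Each 'unit' is a pair of steep sigmoids forming a bump over a small
    sub-interval; the output weight is f at the interval's midpoint.
    This mirrors the constructive proof of universal approximation.
    """
    width = 1.0 / n_units
    units = []
    for i in range(-1, n_units + 1):  # pad edges so x=0 and x=1 are interior
        left = i * width
        mid = min(max(left + width / 2, 0.0), 1.0)
        units.append((left, left + width, f(mid)))

    def net(x):
        total = 0.0
        for left, right, w in units:
            # steep step up at `left` minus steep step up at `right` = a bump
            total += w * (sigmoid(steepness * (x - left))
                          - sigmoid(steepness * (x - right)))
        return total

    return net

net = approximate(lambda x: x * x)
max_err = max(abs(net(k / 1000) - (k / 1000) ** 2) for k in range(1001))
```

With 200 units the worst-case error on a 1,001-point grid stays below a few hundredths, which matches the theorem's promise: width buys accuracy, but every weight has to land close to "right."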
Now, I'm not an AI researcher or even anyone who works anywhere near this area, but I did take a course or two in grad school, and this seems at least intuitively plausible to me. If there are researchers in the field reading this, I'd definitely like to hear their takes, because I'm totally open to being completely wrong here. I'd rather be one of the lucky 10,000 than just have this half-baked idea that seems right. :-)
I don’t feel like the points made here align with any insight about the workings of LLMs. The fact that, as a human, I “wouldn’t know where to start” when asked to add two numbers without doing any addition doesn’t apply to computers (running predictive models). They would start with statistics over lots of similar examples in the training data. It’s still remarkable LLMs do so well on these problems, while at the same time doing somewhat poorly because they can’t do arithmetic!
Re: "LLMs are not particularly good at arithmetic". There are published results that show that LLMs using certain techniques reach close to 100% accuracy on 8-digit addition: https://arxiv.org/pdf/2206.07682.pdf. There are also recent results from OpenAI where their model obtained solid results on high school math competition problems, which are harder than arithmetic: https://openai.com/research/improving-mathematical-reasoning... I haven't looked into counting syllables or recognizing haikus but I bet that this is a result of tokenization and not an inability of the model to create a representation of the underlying phenomena.
I'm not an expert in the field, but, there are lots of previous algorithms for predicting the next token in a series (Markov chains, autocomplete). None of them felt so much pressure to make an accurate prediction that they had no alternative but to teach themselves arithmetic! It seems what is different about LLMs (as far as the post goes) is that we can anthropomorphize them.
More seriously, I guess I just feel like a meaningful sketch of an explanation for why algorithm X (where X is LLMs in this case) for continuing a piece of text is good at problem A should involve something about X and A. Because it is clearly highly dependent on the exact values of X and A, not just whether A can be posed as a text completion problem and humans would prefer the computer learn to solve the underlying problem to produce better text. For example, it could help to imagine a mechanism by which algorithm X could solve problem A. The closest thing to a mechanism (something algorithm X, i.e. LLMs, might be doing that's special) in the post is the talk of necessity being the mother of invention and "a deeper understanding of reality simplifies next-token prediction tasks," and the suggestion that if you were an LLM you might want to use "the rules of addition."
It's true that modeling arithmetic in some way could help a LLM account for known arithmetic problems in the training data, which could help it on unseen arithmetic problems, but what problems an LLM can solve is a function of what it can model. Anything an LLM can't model or can't do, it just doesn't. LLMs are really bad at chess, for example. The patterns of digits in addition may be similar enough to the hierarchical patterns in language the LLM is modeling. But it's not clear if the LLM is using the "rules of addition" or not. As far as I know, we don't actually understand why LLMs are able to store so much factual information, produce such coherent stories, and do the specific things they can do.
I suspect most of this is due to tokenization making it difficult to generalize these concepts.
There are some weird edge cases though: for example, GPT-4 can almost always add two 40-digit numbers, but it is almost always wrong when adding a 40-digit number to a 35-digit number.
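One hedged way to see how tokenization could produce exactly that edge case: suppose a tokenizer splits numbers into up-to-3-digit chunks from the left (a simplification, not GPT-4's actual tokenizer). Then equal-length operands chunk into the same shape, while a 40-digit and a 35-digit operand don't:

```python
def chunk(number: str, size: int = 3) -> list[str]:
    """Split a digit string into up-to-`size`-digit tokens from the left,
    loosely mimicking how BPE vocabularies break up long numbers."""
    return [number[i:i + size] for i in range(0, len(number), size)]

a = "1" * 40  # 40-digit operand
b = "2" * 40  # same length as a
c = "3" * 35  # shorter operand

# Equal-length operands produce token sequences of identical shape...
same_shape = [len(t) for t in chunk(a)] == [len(t) for t in chunk(b)]
# ...but 40 vs. 35 digits chunk into different shapes, so column-wise
# digit alignment is lost at the token level.
different_shape = [len(t) for t in chunk(a)] != [len(t) for t in chunk(c)]
```

If the model learned addition as a pattern over aligned token columns, the misaligned case would be a genuinely different problem.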
I'm reminded of "Benny's Rules", where someone sat down with a "self-directed" 6th grader of high IQ who had been doing okay in math classes... but their success so far was actually based on painstakingly constructing somewhat-lexical rules about "math", mumbo-jumbo that had been just good enough to carry them through a lot of graded tests.
> Benny believed that the fraction 5/10 = 1.5 and 400/400 = 8.00, because he believed the rule was to add the numerator and denominator and then divide by the number represented by the highest place value. Benny was consistent and confident with this rule and it led him to believe things like 4/11 = 11/4 = 1.5.
> Benny converted decimals to fractions with the inverse of his fraction-to-decimal rule. If he needed to write 0.5 as a fraction, "it will be like this ... 3/2 or 2/3 or anything as long as it comes out with the answer 5, because you're adding them" (Erlwanger, 1973, p. 50).
[0] https://blog.mathed.net/2011/07/rysk-erlwangers-bennys-conce...
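Benny's rule is consistent enough to write down as code. A small sketch reproducing the examples from the quote (the function name is mine):

```python
def benny_fraction(numerator: int, denominator: int) -> float:
    """Benny's (wrong) rule: add numerator and denominator, then divide
    by the highest place value of the sum (10 for 15, 100 for 800, ...)."""
    total = numerator + denominator
    highest_place = 10 ** (len(str(total)) - 1)
    return total / highest_place
```

It really does give 5/10 = 1.5, 400/400 = 8.00, and 4/11 = 11/4 = 1.5, which is exactly why it survived so many graded tests: a wrong rule, applied with perfect consistency.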
I assume because there's little documentation about how many syllables every word has on the internet?
LLMs understand it to a certain extent. It's more than "predicting" the next token. When people dismiss them as just "predicting the next token," it's a naive description that covers up what they don't understand.
I mean you can describe a human brain as simply wetware, a jumble of signals and chemical reactions that twitch muscles and react to pressure waves in the air and light. But obviously there is a higher level description of the human brain that is missing from that description.
The same thing could be said about LLMs. I can tell you this: researchers completely understand token prediction; that much can be said. What we don't currently understand is the high-level description. Perhaps it's not something we can understand, as we've never been able to understand human consciousness at a high level either.
That's the thing with people. Nobody actually understands the high-level description of a fully trained LLM. People lambast others because they "think" they understand, when they only actually understand the low-level primitives. We understand assembly, but that doesn't mean we understand an operating system written in assembly.
Take this for example:
Me: 4320598340958340958340953095809348509348503480958340958304985038530495830 + 1
chatGPT: 4320598340958340958340953095809348509348503480958340958304985038530495830 + 1 equals 4320598340958340958340953095809348509348503480958340958304985038530495831.
The chances of chatGPT having memorized or merely predicted the next tokens here are too low to even consider. There are so many possible numbers, even ones that aren't true but would have a "higher probability" of being close to the truth from a token/edit-distance standpoint. It's safe to say, from a scientific standpoint, that chatGPT in this scenario understands what it means to add 1. Realize that this calculation would overflow a native integer type; chatGPT needs symbolic understanding to perform the feat it did above.
But there are, of course, things it gets wrong. And again, we don't truly understand what's going on here. Is it lying to us? Perhaps it can't differentiate between a merely statistical generated token and an actual math equation. It's hard to say. But from the example above, by probability, we know that some aspect of true understanding and ability exists.
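To make the "what it means to add 1" claim concrete, here is the symbolic procedure, digit-wise increment with carry propagation, that a model would effectively have to internalize to get the example above right. Whether an LLM actually implements anything like this internally is an open question; this is just the rule made explicit:

```python
def increment(digits: str) -> str:
    """Add 1 to a decimal string digit by digit, propagating carries.
    A native integer type couldn't even hold the 73-digit number above;
    the string has to be manipulated symbolically."""
    result = list(digits)
    for i in range(len(result) - 1, -1, -1):
        if result[i] != "9":
            result[i] = str(int(result[i]) + 1)
            return "".join(result)
        result[i] = "0"  # carry propagates one place to the left
    return "1" + "".join(result)  # all nines: grow by one digit

n = "4320598340958340958340953095809348509348503480958340958304985038530495830"
```

The point of the carry logic is that the answer to `increment(n)` depends on the low-order digits, not on anything resembling a nearby memorized string.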
<numbers>0 + 1 -> <numbers>1
Even simple attention mechanisms would handle that quite well with enough examples of <numbers>
Given that the history of science is mostly driven by trying to find the simplest explanation for observed phenomena, thinking about regularization makes it much less surprising that LLMs end up learning how the world "actually works".
I'm not sure LLMs are trained to simplify anything. They have billions of parameters, after all.
Statistics on large amounts of data just seems to work, after all.
The good news is that this will pit one LLM against others, and virtually eliminate any potential for a single powerful AI to emerge and do something harmful.
Something you'll find if you ever train a neural network to learn a mathematical function is that it will only ever approximate that function. It won't try to guess what the function is exactly like a human might do.
For example consider, f(1) = 2, f(2) = 4, f(3) = 6, f(4) = 8, f(5) = 10.
As a human you know how important precision is in maths, and you know humans generally like round numbers, so you naturally assume that f(x) = 2x.
Neural networks don't have these biases by default. They'll look for a function that gets close enough, maybe something like f(x) = 1.993929910302942223x.
From a neural network's perspective the loss between this answer and the actual answer is almost so trivial that it's basically irrelevant.
Then a human who likes round numbers comes along and asks the network: what's f(1,000)? To which the neural network replies 1993.93.
Then the human goes away convinced the AI doesn't know maths, when in reality the AI basically does know maths; it just doesn't care as much about arithmetic precision as the human does. Because again, to the AI, 1993.93 is a perfectly acceptable answer.
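Putting numbers on this toy example: under squared-error loss on the five training points, the sloppy slope from above is nearly indistinguishable from the exact one, yet the gap becomes obvious when extrapolating to x = 1,000:

```python
exact_slope = 2.0
learned_slope = 1.993929910302942223  # the "close enough" slope from above

# Squared-error loss over the five training points f(1..5) = 2, 4, 6, 8, 10.
# The exact slope scores 0; the learned one scores almost 0.
train_loss = sum((learned_slope * x - exact_slope * x) ** 2
                 for x in range(1, 6))

# But the absolute error grows linearly with x under extrapolation:
# at x = 1000 the two answers are about 6 apart.
extrapolation_gap = abs(learned_slope * 1000 - exact_slope * 1000)
```

A gradient-based trainer stopping when the loss looks "small enough" would happily accept either slope, which is the bias mismatch being described.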
So now for fun let me ask ChatGPT some arithmetic questions...
> ME
> what's 2343423 + 9988733?
> ChatGPT
> The sum of 2343423 and 9988733 is 12392156.
WRONG! It's actually 12332156. That's an entire digit out and almost 0.5% larger than the actual answer!
> ME
> what is 8379270 + 387299177?
> ChatGPT
> The sum of 8379270 and 387299177 is 395678447.
Er, okay, that was right. Bad example, let me try again.
> ME
> what is 2233322223333 + 387299177?
> ChatGPT
> The sum of 2233322223333 and 387299177 is 2233322610510.
WRONG! It's actually 2233709522510. That's 6 digits out and almost 0.02% smaller than the actual answer!
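For what it's worth, the error sizes quoted above are easy to verify:

```python
def relative_error(claimed: int, a: int, b: int) -> float:
    """Fractional distance between a claimed sum and the true sum."""
    true = a + b
    return abs(claimed - true) / true

# First example: off by 60,000, roughly 0.49% too large.
err1 = relative_error(12392156, 2343423, 9988733)

# Third example: off by 386,912,000, roughly 0.017% too small.
err3 = relative_error(2233322610510, 2233322223333, 387299177)

# The middle example really was correct.
middle_correct = (8379270 + 387299177 == 395678447)
```

Notice the pattern: the wrong answers are never wildly wrong, just a few digits adrift, which is exactly what the "approximation, not rule-following" framing would predict.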
If you take a more open-minded view, I think it's fair to say ChatGPT basically does know arithmetic, but its reward function probably didn't prioritise arithmetic precision the way a decade of schooling does for us humans. For ChatGPT, having a few digits wrong in an arithmetic problem is probably less important than the reply containing that sum being slightly improperly worded.
I guess what I'm saying is that I'm not sure I quite agree with the author that LLMs don't do arithmetic at all. It's not that they're trying to guess the next word without arithmetic, but more that they're not doing arithmetic the same way we humans do it. Which may have been the point the author was making... I'm not really sure.
They can write code to do maths, but without code they can only estimate how likely a series of numbers is to be seen together.
They're very likely to get things like 2+2=4 correct because that's probably common in their training data. They're unlikely to get two random numbers correct because they don't actually know what those numbers mean.
I'd propose that your claim that LLMs don't understand maths is very similar to the claim that Newton didn't understand the Laws of Motion.
Yes, Newton's laws are wrong, but they're also practically correct for 99.999% of applications. If correctness is viewed as a binary, Newton is 100% wrong, but viewed as a scalar, Newton is basically right.
Neural networks are inherently bad at finding exact rules, but they're excellent at approximating them to an accuracy that is acceptably good. This is the bit that people miss when they say LLMs can't do maths.
When you claim they don't understand the rules of maths, I agree that they don't understand the explicit rules, but with the caveat that they probably understand something that allows them to approximate those rules "well enough".
This is why if you ask ChatGPT a question like 23435234 + 3243423 it's not going to say -33.1. It might not give the right answer, but it will almost always give you something that's close and very plausible. So while it might not understand the exact rules, it basically understands what happens when you add two numbers and 99% of the time will give you an answer that is basically correct.
The larger point I was trying to make here is that I think we humans are kinda biased when it comes to maths, because we care about character-level precision, which is the bias I think you're basing your reasoning on here. We humans believe precision is extremely important in the context of maths, unlike other textual content. But an LLM isn't operating with that bias. It's just trying to approximate maths in a way that is correct enough, in a similar way that it's trying to approximate the likely next character (or more correctly, token) of other text content.
I don't think approximations are 100% wrong and perhaps us humans being bothered about LLMs giving answers to maths questions that are 0.1% wrong actually says more about our values and how we view maths than it says about an LLMs mathematical abilities.