You’re confusing several different ideas here.
The idea you’re talking about is called “the bitter lesson.” Very roughly, it says that a general method with more compute behind it will outperform a cleverer, hand-engineered method that uses less compute. It has nothing to do with being “generic.” It’s also worth noting that, afaik, it’s an accurate observation, but not a law or a fact – it may not hold forever.
Either way, I’m not arguing against that. I’m saying that LLMs are too general to be useful in specific, specialized domains.
Sure, bigger generic models perform (increasingly marginally) better at the benchmarks we’ve cooked up, but they’re still too general to be that useful in any specific context. That’s the entire reason RAG exists in the first place.
I’m saying that a language model trained on a specific domain will perform better at tasks in that domain than a similar sized model (in terms of compute) trained on a lot of different, unrelated text.
For instance, a model trained specifically on code will produce better code than a similarly sized model trained on all available text.
I really hope that example makes what I’m saying self-evident.
If you compare it with, e.g., a model for self-driving cars – generic text models will not win, because they operate in a different domain.
In all cases, trying to optimize for specialized tasks within a subset of a domain is not worth the investment, because the state of the art will be held by larger models trained on the whole available set.
> You're explaining it nicely, but then you seem to make a mistake that contradicts what you've just said – because code and text share a domain (text-based)
"Text" is not the domain that matters.
The whole trick behind LLMs being as capable as they are, is that they're able to tease out concepts from all that training text - concepts of any kind, from things to ideas to patterns of thinking. The latent space of those models has enough dimensions to encode just about any semantic relationship as some distinct direction, and practice shows this is exactly what happens. That's what makes style transfer pretty much a vector operation (instead of "-King +Woman", think "-Academic, +Funny"), why LLMs are so good at translating between languages, from spec to code, and why adding modalities worked so well.
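The “vector operation” bit is easy to demonstrate with classic word embeddings. A toy sketch (the 4-d vectors below are invented purely for illustration – real model weights look nothing like this):

```python
import numpy as np

# Toy 4-d "embeddings", hand-made so that the second axis roughly means
# "male" and the third "female". The point: a semantic relationship
# becomes a direction you can add and subtract.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.8, 0.1, 0.0]),
    "woman": np.array([0.1, 0.1, 0.8, 0.0]),
}

def nearest(vec, vocab):
    # Return the vocabulary word whose embedding has the highest
    # cosine similarity to `vec`.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vocab, key=lambda w: cos(vec, vocab[w]))

# "king - man + woman" lands nearest to "queen" in this toy space.
result = nearest(emb["king"] - emb["man"] + emb["woman"], emb)
print(result)  # queen
```

Real embedding spaces have hundreds or thousands of dimensions, which is what lets them encode that many distinct semantic relationships as distinct directions at once.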
With LLMs, the common domain between "text" and "code" is not "text", but the way humans think, and the way they understand reality. It's not the raw sequences of tokens that map between, say, poetry or academic texts and code - it's the patterns of thought behind those sequences of tokens.
Code is a specific domain - beyond being the lifeblood of programs, it's also an exercise in a specific way of thinking, taken up to 11. That's why learning code turned out to be crucial for improving the general reasoning abilities of LLMs (the same is, IMO, true for humans, but that's harder to demonstrate rigorously). And conversely, text in general provides context for code that would be hard to infer from code alone.
All digital data is just 1s and 0s.
Do you think a model trained on raw bytes would perform coding tasks better than a model trained on code?
I have a strong hunch that there’s some Goldilocks zone of specificity for statistical model performance and I don’t think “all text” is in that zone.
Here is the article for “the bitter lesson.” [0]
It says that general machine learning strategies which use more compute are better at learning a given data set than a strategy tailor-made for that set.
This does not imply that training on a more general dataset will yield more performance than using a more specific dataset.
The lesson is about machine learning methods, not about end-model performance at a specific task.
Imagine a logistic regression model vs an expert system for determining real estate prices.
The lesson tells us that, given more and more compute, the logistic regression model will perform better than the expert system.
The lesson does not imply that, given 2 logistic regression models, one trained on global real estate data and one trained on local, the former would outperform the latter.
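To make that concrete, here’s a sketch with two identical learning methods trained on different data. Everything here is invented for illustration – the `market` helper, the 500k threshold, and the per-m² prices are synthetic, and sklearn’s `LogisticRegression` stands in for both models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def market(n, price_per_sqm):
    """Synthetic homes: feature = size in m^2, label = price above 500k."""
    size = rng.uniform(30, 200, size=(n, 1))
    price = size[:, 0] * price_per_sqm + rng.normal(0, 20_000, n)
    return size, (price > 500_000).astype(int)

# "Local" market: expensive city, the threshold is crossed around 100 m^2.
X_local, y_local = market(2000, price_per_sqm=5_000)
# "Global" mix: mostly cheap markets where the threshold is almost never crossed,
# diluted with a small slice of the local data.
X_cheap, y_cheap = market(2000, price_per_sqm=1_500)
X_global = np.vstack([X_local[:200], X_cheap])
y_global = np.concatenate([y_local[:200], y_cheap])

# Same method, same hyperparameters -- only the training data differs.
local_model = LogisticRegression(max_iter=1000).fit(X_local, y_local)
global_model = LogisticRegression(max_iter=1000).fit(X_global, y_global)

# Evaluate both on fresh local data.
X_test, y_test = market(1000, price_per_sqm=5_000)
print("local :", local_model.score(X_test, y_test))
print("global:", global_model.score(X_test, y_test))
```

Same method, same compute budget – the only difference is the training distribution, and the locally-trained model wins on local data. That axis is exactly the one the bitter lesson doesn’t speak to.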
I realize this is a fine distinction and that I may not be explaining it as well as I could if I were speaking, but it’s an important distinction nonetheless.
[0] http://www.incompleteideas.net/IncIdeas/BitterLesson.html