Step away from LLMs for a second and recognize that “yesterday it was X, so today it must be X+1” is a naive extrapolation, and one humans fall into all too easily (see: flying cars).
To be frank, from what I can see, even if all progress stopped right now it would take one or two decades to fully operationalise the existing potential of LLMs, and that alone would mean massive economic and social change. But progress is not stopping, and by some measures it continues to improve exponentially. I really think this is incredibly transformative, more so than anything humanity has ever experienced. In the last year, OpenAI and possibly Anthropic have been working on recursive self-improvement, meaning these models help design better versions of themselves. If that holds, we have effectively entered the singularity.
I agree with all of this though
> In this case, both you and the other are speculating about the near future of a thing, neither of you knows.
One of us is making a much grander claim than the other:
- LLMs have limitless potential for growth; the fact that they aren’t capable of something today doesn’t mean they won’t be capable of it tomorrow
- LLMs have fundamental limitations due to their underlying architecture and therefore are not limitless in capability

> We went from 2 + 7 = 11 to "solved a frontier math problem" in 3 years, yet people don't think this will improve?
All that says is that the speaker thinks models will improve past where they are today. Not that it's a logical certainty (the first thing you jumped on them for), and certainly not anything about "limitless potential for growth" (which nobody even mentioned). With replies like this, invoking fallacies and attacking claims nobody made, you're adding a lot of heat and very little light here (and in a few other threads on this page).
Exceedingly generous interpretation in my opinion. I tend to interpret rhetorical questions of that form as “it’s so obvious that I shouldn’t even have to ask it”.
The belief in the inevitability of progress is a bad assumption, especially the assumption that one particular technology will keep advancing.
We have robust scaling laws that continue to hold at the largest scales. It's a very safe bet that more compute, more training data, and algorithmic improvements will keep improving performance; it's not like we're rolling a trillion-dollar die.
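To make "robust scaling laws" concrete, here is a minimal sketch of the Chinchilla-style parametric form L(N, D) = E + A/N^α + B/D^β, where N is parameter count and D is training tokens. The constants below are roughly the fits reported in the Chinchilla paper, but treat them as illustrative of the functional form rather than authoritative values:

```python
def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Chinchilla-style scaling law: L(N, D) = E + A/N**alpha + B/D**beta.

    Constants are approximate published fits; here they only illustrate
    the shape of the curve, not a prediction for any real model.
    """
    E = 1.69                 # irreducible loss
    A, alpha = 406.4, 0.34   # parameter-count term
    B, beta = 410.7, 0.28    # data-size term
    return E + A / n_params**alpha + B / n_tokens**beta

# Loss falls smoothly (but with diminishing returns) as scale grows:
for n, d in [(1e9, 2e10), (7e10, 1.4e12), (1e12, 2e13)]:
    print(f"N={n:.0e}, D={d:.0e} -> predicted loss {predicted_loss(n, d):.3f}")
```

The point of the power-law form is that each term shrinks predictably as N and D grow, which is why "more compute + more data improves performance" has been a reliable bet so far; the flip side is the irreducible E term and the diminishing returns built into the exponents.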