I think you fundamentally misunderstand the nature of exponential growth and the reality of diminishing returns. Even if you double GPU capacity over the next year, you won't come remotely close to producing a step-level jump in capability like what we saw between GPT-2 and GPT-3, or even GPT-3 and GPT-4. The LLM concept can only take you so far, and we're approaching the limits of what an LLM is capable of. You generally can't push an innovation infinitely; it will hit a drop-off point somewhere.
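To make the diminishing-returns point concrete: published LLM scaling laws describe loss as a power law in compute, and under that kind of curve doubling compute only shaves off a few percent of loss. This is just an illustrative sketch; the constants `a` and `alpha` below are hypothetical, picked only to show the shape of the curve, not fit to any real model.

```python
# Illustrative only: assume a power-law scaling relationship
# loss(C) ~ a * C**(-alpha), in the spirit of published scaling laws.
# The constants are hypothetical and chosen just to show the shape.

def loss(compute, a=10.0, alpha=0.05):
    """Hypothetical test loss as a function of training compute."""
    return a * compute ** -alpha

base = 1.0  # arbitrary units of compute
for doublings in range(6):
    c = base * 2 ** doublings
    print(f"{2**doublings:>3}x compute -> loss {loss(c):.3f}")

# With alpha=0.05, each doubling cuts loss by only ~3-4%,
# so 2x the GPUs buys nothing like a GPT-2 -> GPT-3 style jump.
```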
the "Large" part of LLMs is probably done. We've gotten as far as we can with those style of models, and the next innovation will be in smaller, more targeted models.
> As costs have skyrocketed while benefits have leveled off, the economics of scale have turned against ever-larger models. Progress will instead come from improving model architectures, enhancing data efficiency, and advancing algorithmic techniques beyond copy-paste scale. The era of unlimited data, computing and model size that remade AI over the past decade is finally drawing to a close. [0]
> Altman, who was interviewed over Zoom at the Imagination in Action event at MIT yesterday, believes we are approaching the limits of LLM size for size’s sake. “I think we’re at the end of the era where it’s gonna be these giant models, and we’ll make them better in other ways,” Altman said. [1]
[0] https://venturebeat.com/ai/openai-chief-says-age-of-giant-ai...
[1] https://techcrunch.com/2023/04/14/sam-altman-size-of-llms-wo...