Why would you presume this? I think talk like this is part of what drives a lot of people's AI skepticism. You have no idea. Full stop. Why wouldn't progress be linear? As breakthroughs accumulate, newer ones will be harder to come by. Perhaps it's exponential. Perhaps it's linear. No one knows.
Where I'm skeptical of AI is the idea that an LLM can ever get to AGI level, whether AGI is even really possible, and whether the whole thing is actually viable. I'm also very skeptical that the discoveries of any AGI would be shared in ways that would allow exponential growth: licenses that stop you from using their AGI to build your own, copyright on the new laws of physics, royalties on any discovery you make by applying those new laws, etc.
Prove it.
Also, AI will need resources. Hardware. Water. Electricity. Can those resources be supplied at an exponential rate? People need to calm down and stop stating things as truth when they literally have no idea.
That's nothing close to AGI, though. An AI of some kind may be able to design and test new algorithms, because those algorithms live entirely in the digital world, but that skill doesn't generalize to anything outside the digital space.
Research is entirely theoretical until it can be tested in the real world. For an AGI to do that it doesn't just need a certain level of intelligence, it needs a model of the world and a way to test potential solutions to problems in the real world.
Claims that AGI will "solve" energy, cancer, global warming, etc. all run into this problem. An AI may invent a long list of possible interventions, but those interventions are only as good as the AI's model of the world we live in. They still need to be tested by us in the real world; the AI is really just guessing at what might work, and it has no idea what may be missing or wrong in its model of the physical world.
Sure, you can scale it, but if running one AGI instance costs, say, $1 million a year while one human researcher costs only $500k, then it still doesn't get you anywhere faster than humans do, dollar for dollar.
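A back-of-envelope sketch of that point. The $1M and $500k figures come from above; the budget size and the one-AGI-equals-one-researcher output assumption are mine, purely for illustration:

```python
# All numbers are assumptions, taken from or added to the comment above.
agi_cost_per_year = 1_000_000    # $/year to run one AGI instance (assumed)
human_cost_per_year = 500_000    # $/year for one human researcher (assumed)
budget = 10_000_000              # hypothetical fixed research budget

# If one AGI instance produces roughly one human researcher's output,
# the same budget buys twice as much research done the old way.
agi_equivalents = budget / agi_cost_per_year       # 10 researcher-equivalents
human_equivalents = budget / human_cost_per_year   # 20 researcher-equivalents
print(agi_equivalents, human_equivalents)
```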
It might scale up, it might not, we don’t know. We won’t know until we reach it.
We also don’t know if it scales linearly, or if its learning capability and capacity will be able to support an exponential increase in capability. Our current LLMs can’t even self-improve or learn on the fly, even if they otherwise had the intelligence for it: they can accumulate additional knowledge through the context window, but the models themselves are static unless you fine-tune or retrain them. What if our current models were ready for AGI but these limitations are stopping it? How would we ever know? Maybe it will be able to self-improve, but it will take exponentially larger amounts of training data. Or exponentially larger amounts of energy. Or maybe it can become “smarter” but at the cost of being larger, to the point where the laws of physics mean it has to think slower: 2x the thinking but 2x the time, could happen! What if an AGI doesn’t want to improve?
Far too many unknowns to say what will happen.
Just from the fact that an LLM can and will work on the problem 24/7, versus a human who typically wants to do things like sleep, eat, and spend time not working, there would already be a noticeable increase in research speed.
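Rough arithmetic behind that, assuming a 40-hour human work week (my number, not the parent's):

```python
# Assumed figures: continuous operation vs. a standard full-time schedule.
agi_hours_per_week = 24 * 7     # 168 hours, no sleep, meals, or weekends
human_hours_per_week = 40       # typical full-time researcher (assumption)

speedup = agi_hours_per_week / human_hours_per_week
print(f"~{speedup:.1f}x more working hours per week")  # ~4.2x
```

Note that this is a constant multiplier, which is exactly what the linear-vs-exponential argument below turns on.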
So I don't think it's a given that progress will just be "exponential" once we have an AGI that can teach itself things. There is a vast ocean of original thought that goes beyond simple self-optimization.
Fundamentally, discovery could be described as looking for gaps in our observations and then attempting to fill those gaps with more observation and analysis.
The age of low-hanging-fruit, shower-thought inventions draws to a close when every field requires 10-20+ years of study just to approach a reasonable knowledge of it.
"Sparks" of creativity, as you say, are just based upon memories and experience. This isn't something special, its an emergent property of retaining knowledge and having thought. There is no reason to think AI is incapable of hypothesizing and then following up on those.
Every AI can be immediately imparted with all expert human knowledge across all fields. Their threshold for creativity is far beyond ours, once tamed.
Those observations only lead to scaling research linearly, not exponentially.
Assuming a given discovery requires X units of effort, simply adding more time and more capacity just means we increase the slope of the line.
Exponential progress requires accelerating the rate of acceleration of scientific discovery, and for all we know that's fundamentally limited by computing capacity, energy requirements, or good ol' fundamental physics.
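A toy model of that distinction, with every number an assumption for illustration: adding capacity raises the slope of the line, but the curve only goes exponential if discoveries feed back into the rate of discovery itself.

```python
# Toy model; all figures here are made up for illustration.
years = 10

# Case 1: fixed capacity (researcher-equivalents, each producing
# 1 discovery/year) -> discoveries accumulate linearly. Doubling
# capacity just doubles the slope.
capacity = 100
linear_total = capacity * years          # 1000 discoveries

# Case 2: discoveries feed back into capacity (+1% per year, an
# assumed feedback rate) -> the rate itself accelerates and compounds.
capacity, total, feedback = 100, 0, 0.01
for _ in range(years):
    total += capacity
    capacity += feedback * capacity      # discoveries improve the discoverers
print(linear_total, round(total))        # 1000 vs ~1046, and diverging
```

Over ten years the difference is tiny; over a century the compounding case dwarfs the linear one, which is why that feedback assumption does all the work in these debates.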
Progress has been exponential in the aggregate. We made approximately the same progress in the past 100 years as in the prior 1,000, as in the prior 30,000, as in the prior million, and so on, all the way back to multicellular life evolving over 2 billion years or so.
There's a question of the exponent, though. Living through that exponential growth circa 50 AD would have felt at best linear, if not flat.
Has it? Really?
Consider theoretical physics, which hasn't seen significant advancement since the advent of general relativity and quantum theory.
Or neurology, where we continue to have only the most basic understanding of how the human mind actually works (let alone the origin of consciousness).
Heck, let's look at good ol' Moore's Law, which started off exponential but has slowed down dramatically.
It's said that an S curve always starts out looking exponential, and I'd argue in all of those cases we're seeing exactly that. There's no reason to assume technological progress in general, whether via human or artificial intelligence, is necessarily any different.
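A quick numerical sketch of the "S curves start out looking exponential" point (the ceiling, rate, and midpoint are arbitrary assumptions, chosen only to show the shape): a logistic curve and a pure exponential matched to its early values are nearly indistinguishable until the inflection, then diverge wildly.

```python
import math

# All parameters are arbitrary assumptions for illustration.
CEILING, RATE, MIDPOINT = 1000.0, 0.5, 20.0

def logistic(t):
    """S curve: exponential-looking early, saturating at CEILING later."""
    return CEILING / (1 + math.exp(-RATE * (t - MIDPOINT)))

def pure_exponential(t):
    """Exponential matched to the logistic's starting value."""
    return logistic(0) * math.exp(RATE * t)

for t in (0, 5, 10, 25, 40):
    print(t, round(logistic(t), 2), round(pure_exponential(t), 2))
# Up to t=10 the two track each other closely; by t=40 the logistic
# has flattened near 1000 while the exponential is past 20 million.
```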
That's all noise.
I hear this sort of argument all the time, but what is it even based on? There’s no clear definition of scientific and technological progress, much less something that’s measurable clearly enough to make claims like this.
As I understand it, the idea is simply “Ooo, look, it took ten thousand years to go from fire to wheel, but only a couple hundred to go from printing press to airplane!!!”, and I guess that’s true (at least if you have a very juvenile, Sid Meier’s Civilization-like understanding of what history even is), but it’s also nonsense to try to extrapolate actual numbers from it.