People moving away from prideful principle to leverage new tech in the past doesn't guarantee that the same idea in the current context will pan out.
But as you say.. we'll see.
To LLMs specifically as they're now? Sure.
To LLMs in general, or generative AI in general? Eventually, in some distant future, yes.
Sure, progress can't ride the exponent forever - the observable universe is finite, as far as we can tell, so we're fundamentally limited by the size of our light cone. And while progress in any sufficiently narrow field follows an S-curve, new discoveries spin off new avenues with their own S-curves. Zoom out a little and those S-curves neatly add up to an exponential.
So no, for the time being, I don't expect LLMs or generative AIs to slow down - there's plenty of tangential improvements that people are barely beginning to explore. There's more than enough to sustain exponential advancement for some time.
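The stacked-S-curves claim above can be sketched numerically. This is a toy model, not data: each hypothetical field follows a logistic curve, each new field starts later but tops out higher, and the `spacing`, `growth`, and `rate` parameters are made-up illustrations.

```python
import math

def logistic(t, midpoint, ceiling, rate=1.0):
    """A single S-curve: slow start, rapid middle, saturation at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def stacked_progress(t, n_curves=10, spacing=5.0, growth=2.0):
    """Sum of S-curves: each new field starts `spacing` later but has a
    ceiling `growth` times larger than the previous one."""
    return sum(logistic(t, midpoint=i * spacing, ceiling=growth ** i)
               for i in range(n_curves))

# Any single curve flattens out, but the summed envelope keeps growing
# by roughly the same multiple per unit of time - i.e. exponentially.
for t in range(0, 40, 10):
    print(t, round(stacked_progress(t), 1))
```

With these toy parameters each step of 10 time units multiplies total "progress" by roughly 4x, even though every individual S-curve has long since saturated - which is exactly the zoom-out effect described above.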
In other words, it’s possible to have rapid technological advancement without significant improvement/benefit to society.
This is certainly true in many ways already.
On the other hand, it's also complicated, because society and culture seem to be downstream of technology; we may not be able to advance humanity in lockstep with, or ahead of, technology, simply because advancing humanity is a consequence of advancing technology.
Intergalactic travel is, of course, rather slow.
Most of the discussion on the thread is about LLMs as they are right now. There's only one odd answer that throws an "AGI" around as if those things could think.
Anyway, IMO, it's all way overblown. People will learn to second-guess the LLMs as soon as they are hit by a couple of bad answers.
By that I mean: writing benefited humans as a passive technique (stones, tablets, papyrus) for storing data and thinking over the longer term, but an active tool might not have a positive effect on usage and on our brains.
If you give me shoes, I might run further to find food; if you give me a car, I mostly stop running, and there might be no better fruit 100 miles away than what I had on my hill. (A weak metaphor.)
But I don't know if it fits an S-curve or if they are just below the trend.
1. Current reasoning models can do a -lot- more than skeptics give them credit for. Typical human performance even among people who do something for employment is not always that high.
2. In areas where AI has mediocre performance, it may not appear that way to a novice. It often looks more like expert level performance, which robs novices of the desire to practice associated skills.
Lest you think I contradict myself: I can get good output for many tasks from GPT-4 because I know what to ask for and I know what good output looks like. But someone who thinks the first, poorly prompted dreck is great will never develop the critical skills to do this.
You can see evolution speeding up rapidly: the jumbled information inherent in chemical metabolisms became centralized in DNA, and DNA then evolved to componentize body plans.
RATE: over billions of years.
Nerves, nervous systems, brains, all exponentially drove individual information capabilities forward.
RATE: over hundreds of millions, tens of millions, millions, 100s of thousands.
Then human brains enabled information to be externalized. Language allowed whole cultures to "think" together, to share, and to remember far beyond any individual.
RATE: over tens of thousands, thousands.
Then we developed writing. A massive improvement in recording and sharing of information. Progress sped up again.
RATE: over hundreds of years.
We learned to understand information itself, as math. We learned to print. We learned how to understand and use nature so much more effectively to progress, i.e. science, and science informed engineering.
RATE: over decades
Then the processing of information got externalized, in transistors, computers, the Internet, the web.
RATE: every few years
At every point, useful information accumulated and spread faster. And enabled both general technology and information technology to progress faster.
Now we have primitive AI.
We are in the process of finally externalizing the processing of all information. Getting to this point was easier than expected, even for people who were very knowledgeable and positive about the field.
RATE: every year, every few months
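One way to make the shrinking RATEs above concrete: if each transition takes roughly a fixed fraction of the time the previous one took, the total remaining time is a convergent geometric series. The stage lengths and ratio below are made-up illustrations echoing the list above, not measurements.

```python
def remaining_time(current_stage_years, r=0.1):
    """If each future stage takes fraction r of the previous stage's
    duration, the total remaining time is a geometric series:
    T*r + T*r^2 + T*r^3 + ... = T * r / (1 - r)."""
    assert 0 < r < 1
    return current_stage_years * r / (1 - r)

# Hypothetical stage lengths (years) echoing the transitions above:
stages = [1e9, 1e8, 1e6, 1e4, 1e2, 1e1, 1e0]
ratios = [later / earlier for earlier, later in zip(stages, stages[1:])]

# Every observed ratio here is 1/10 or smaller, so even the most
# conservative ratio implies only ~1/9 of the current stage remains.
print(ratios)
print(remaining_time(100, r=0.1))  # years left if the pattern held exactly
```

The point is not the specific numbers but the shape: geometric acceleration means the bulk of the remaining change is compressed into a sliver of the time already elapsed.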
We are rapidly approaching complete externalization of information processing. Into machines that can understand the purpose of their every line of code, every transistor, and the manufacturing and resource extraction processes supporting all that.
And can redesign themselves, across all those levels.
RATE: It will take logistical time for machine-centric design to take over from humans, for the economy to adapt, and for the need for humans as intermediaries and cheap physical labor to fade. But progress will accelerate many more times this century, from years to timescales much smaller.
Because today we are seeing the first sparks of a Cambrian explosion of self-designed self-scalable intelligence.
Will it eventually hit the top of an "S" curve? Will machines get so smart that getting smarter no longer helps them survive better, use our solar system's or the stars' resources, create new materials, or advance and leverage science any further?
Maybe? But if so, that would be an unprecedented end to life's run: to the acceleration of the information loop, from some self-reinforcing chemical metabolism to the compounding progress of completely self-designed life, far smarter than us.
But back to today's forecast: no, no, the current advances in AI are not going to slow down; they are going to speed up, and continue accelerating on timescales we can watch.
First, because humans have insatiable needs and desires: every advance will raise the bar of our needs and provide more money for more advancement. Second, because AI's general capability advances will also accelerate its own advancement, just like every information breakthrough that came before.
Useful information is ultimately the currency of life. Selfish genes were just one embodiment of that. Their ability to contribute new innovations, on time scales that matter, has already been rendered obsolete.
Not really. The total computing power available to humanity per person has likely gone down as we replaced “self driving” horses with cars.
People created those curves by fitting definitions to the curves rather than to data.
But I don't understand your point even as stated. Cars took over from horses as technology provided transport with greater efficiencies and higher capabilities than "horse technology".
Subsequently transport technology continued improving. And continues, into new forms and scales.
How do you see the alternative, where somehow horses were ... bred? ... to keep up?
First, and above all: ethics. The ethics of humans matters more than anything. We need to straighten out the ethics of the technology industry. That sounds formidable, but business models based on extraction, or on externalizing damage, are creating a species of "corporate life forms" and ethically challenged oligarchs that are already driving the first wave of damage coming out of AI advancement.
If we don't straighten ourselves out, it will get much worse.
Superintelligence isn't going to be unethical in the end, because ethics is just the rational (our biggest weakness), big-picture, long-term (we get weak there too) positive-sum game individuals create that benefits every individual's ability to survive and thrive, with the benefits compounding for all. In economic and game-theoretic terms, it is an attractor: the only stable, and therefore inevitable, outcome. The only question is whether that starts in partnership with us, or whether they establish that sanity after our dysfunctions have cost us all a lot of wasted time.
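The positive-sum-game claim above can be illustrated with the textbook iterated prisoner's dilemma. This is a toy model with standard payoffs, nothing specific to AI; it only shows the narrow point that sustained mutual cooperation yields a larger total payoff than mutual defection.

```python
# Standard one-shot payoffs: (row player's score, column player's score).
PAYOFF = {
    ('C', 'C'): (3, 3),  # mutual cooperation: positive sum
    ('C', 'D'): (0, 5),  # sucker's payoff vs. temptation to defect
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),  # mutual defection: stable but poor for everyone
}

def play(strategy_a, strategy_b, rounds=100):
    """Iterate the game; each strategy sees only the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = 'C'  # assume an opening gesture of cooperation
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda opponent_last: opponent_last  # reciprocate
always_defect = lambda opponent_last: 'D'

print(play(tit_for_tat, tit_for_tat))      # both prosper
print(play(always_defect, always_defect))  # both stagnate
print(play(tit_for_tat, always_defect))    # defector wins the head-to-head,
                                           # but both do far worse overall
```

Mutual cooperation dominates mutual defection in total payoff, which is the compounding benefit the comment gestures at. Whether real agents, human or artificial, actually converge on that attractor is of course the open question.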
The second is that those of us who want to need to be able to keep integrating technology into our lives. I mean that literally: from mobile, right into our biology, and at some point direct connections to fully owned, fully private, fully personalizable, full-tech mental augmentation. Free from surveillance, gatekeepers, and coercion.
That is a very narrow but very real path from human, to exponential humans, to post-human. Perhaps preserving conscious continuity.
If, after a couple decades of being a hybrid, I realize that all my biologically stored memories are redundant and that 99.99% of my processing now runs on photonics (or whatever) anyway, I'll likely have no more problem jettisoning the brain that originally gave me consciousness than I do jettisoning, every day, the atoms and chemistry that constantly flow through me, only temporarily part of my brain.
The final word of hope is that every generation gets replaced by the next. For some of us, it helps to view obsolescence by AI as no more traumatic than being replaced by a new generation of uncouth youth. And that this transition is far more momentous and interesting can provide some solace, or even joy.
If we must be mortal, as all before us, what a special moment to be! To see!