I don't know when (or if) AI will implode or succeed with any degree of provable certainty, because that's not my area of expertise. Rather, I can point out and discuss flaws in the common booster and doomer arguments, and identify problems neither side seems willing to discuss. That's cold comfort, and it's not enough to stake my money on one direction or the other with any degree of certainty - thus I limit my exposure to specific companies, and target indices or funds that will see uplift if things go well, or minimize losses if things go pear-shaped.
I also think relying on such mathematics to justify a position in the first place is kind of silly, especially for technical people. Mathematical models work until they don't, at which point entirely new models must be designed to capture our new knowledge. On the other hand, logical arguments are more readily adapted to new data, and represent critical, rather than mathematical, thinking and reasoning.
Saying AI is going boom/bust because of sigmoids or Lindy's Law or what have you is not an argument, it's an excuse. The real argument is why those things may or may not emerge, and how we address their consequences, in areas inside and outside of AI, through regulation, innovation, or policy.
Basically a lot of people say "but isn't it also pretty likely that we DON'T get superintelligence?" And, yes, it is. But superintelligence being even a remotely plausible outcome is a big fucking deal. Your investment choices in that context are not important.
People really struggle to think rationally in the face of this shape of uncertainty.
So, his point with all the demand for rigor is to end on a hand-waved leap of faith from "improved AI models" to the mythical "superintelligence"?
That's the problem with 'singularity' arguments. The people making them ignore the fact that the mathematical definition of the word means 'the model of outcomes collapses to a single value' therefore the model stops being useful, yet they somehow claim to be able to make predictions beyond the singularity. It's like those shitty Facebook math posts where they divide both sides of the equation by 0 (the fact hidden by some sleight of hand), to 'prove' that 2=1.
The formulation of the singularity involves putting outrageous values into the parameters of the model of reality, plus denominator ignorance, and then claiming to 'rationally' determine that the consequences are too severe to ignore.
The singularity framing is really tough here, right? It comes from black hole physics. Essentially, at the event horizon, the way we know how to do physics stops working, and we rightly conclude that we can't currently say anything about the other side of the event horizon. It is not saying that nothing is occurring there. Matter, time, space, energy, whatever, that still is there (maaaaybe?) and is still undergoing something. It's just that we don't know what that is.
The same is true with using these tech singularity arguments. Like, in the age of superintelligence (if that happens), there will still be things happening; the dawn will still come every day, and so will the dusk. It's just that we say our current ideas about that new day aren't that applicable to that new age (God, this sounds like a hippie).
However, unlike with black hole physics where we aren't even sure time can exist as we know it, we are likely all going to be there in that new superintelligence age. We're still going to be making coffee and remembering bad cartoons from our youth. Like, the analogy to black hole physics breaks down here and maybe does a disservice to us. It's not a stark boundary at the Schwarzschild radius, it is a continuous thing, a messy thing, a volatile thing, and very importantly for the HN userbase, a thing that we control and have the choice to participate in.
We are not passively falling into the AGI world like the gnawing grinding gravity of a black hole.
Why would that be? Nothing about Lindy's Law makes that promise. And even the SOTA in 2026 is over-estimated thanks to a trillion dollar industry trusted to not influence benchmarks.
My thought process RE: superintelligence/AGI is generally this:
* I personally don’t believe it’s likely to happen with silicon-based computing due to the immense power and resource costs involved just to get to where we are now; hence why I invest broadly to capitalize on what gains we actually attain using this current branch of AI research across all possible sectors and exposure rates
* If we do achieve AGI using silicon-based computing, its limited scale (requiring vast amounts of compute only deliverable via city-scale data centers) will limit its broader utility until more optimizations can be achieved or a superior compute platform delivered that improves access and dramatically lowers cost; again, investing broadly covers a general uplift rather than hoping for a specific winner
* If AGI is achieved, nobody - doomer or booster alike - will know what comes next other than complete and total destruction of existing societal structures or institutions. The stock market won’t explode with growth so much as immediately collapse from the disintegration of the consumptive base as a result of AGI quite literally annihilating a planet’s worth of jobs and associated business transactions. In this case, a broad spread protects me from harm by spreading the risk around; AGI will annihilate the market globally, but not all at once barring a significant global catastrophe instigated by it
* Which brings me to the worst outcome, where AGI follows the “if anybody builds it everyone dies” thought process: investment is irrelevant because we’re all fucked anyway.
And that’s just my investment approach. I’m too pragmatic to believe we’re at the bottom of the sigmoid curve, but too wise to begin guessing where we actually exist on it at present or how much is left in the current LLM-arm of AI research; I’m an IT dinosaur, not an AI scientist.
What I can point to is the continued demand destruction of consumer compute through higher costs and limited availability due to rampant AI speculation as proof that the harm is already here in a manner most weren't predicting. Meanwhile, actual job displacement by AI is limited to the empty boasting of executives using it as a smoke screen for layoffs after RTO mandates failed to thin headcount sufficiently.
In the USA in particular, we’re facing a perfect storm of:
* consumer confidence collapse leading to a decline in spending on all goods, especially luxury ones, by all but the most monied demographics
* data center-driven cost increases (energy) and resource destruction (land, water, fossil fuel use)
* the eradication of government support for renewable energy that would’ve kept these costs in check
* the widening wealth gaps creating a new underclass not seen since before WW2
In other words, most of the discourse continues to revolve around hypotheticals of tomorrow rather than realities of today. That would be the lesson I’d hope more people take away from something like this, so we can finally begin addressing issues themselves rather than empty online circle jerking about who is right or wrong.
Add:
Total collapse in government quality AND public trust in politicians
Total collapse of news media to slop and paid-for
Total collapse of culture
(Not just the US either)
> the widening wealth gaps creating a new underclass not seen since before WW2
I go back and forth on this. I think the reality is that "underclass" is a moving target. AI and automation makes things so cheap that today's underclass lives better than kings ever did.
My "plan" is hope for a benevolent intelligence that establishes a post-human government and then enjoy poat-scarcity society doing wood working or something.
Billionaires should probably be more worried.
If we don't understand the fundamental limits to any particular kind of trend, our default assumption should be that it will continue for about as long as it has gone on already.
We can, in fact, easily put a confidence interval on this. With 90% odds we're not in the first 5% of the trend, or the last 5% of the trend. Therefore it will probably go on between 1/19th longer, and 19 times longer. With a median of as long as it has gone on so far.
This is deeply counterintuitive. When we expect something to last a finite time, every year it goes on brings us a year closer to when it stops. But every year that it goes on actually raises the expectation that it will go on for a year longer still.
We're looking at a trend. We believe that it will be finite. Our intuition is that every year spent is a year closer to the end. But our expectation becomes that every year spent means it will last yet another year more!
How can we apply that? A simple way is stocks. How long should we expect a rapidly growing company to continue growing rapidly?
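If it helps, here's a minimal sketch of the 1/19-to-19x arithmetic above in Python; the only assumption baked in is the 90% window itself.

```python
# Copernican / Lindy-style estimate: if a trend has already run for `elapsed`
# years and we assume our observation point is uniformly random over its total
# lifetime, then with 90% confidence we are in neither the first nor last 5%.
def remaining_interval(elapsed_years, confidence=0.90):
    tail = (1 - confidence) / 2                  # 0.05 in each tail
    low = elapsed_years * tail / (1 - tail)      # not in the last 5%  -> at least elapsed / 19 left
    high = elapsed_years * (1 - tail) / tail     # not in the first 5% -> at most elapsed * 19 left
    median = elapsed_years                       # 50/50 we're past the halfway point
    return low, median, high

print(remaining_interval(10))   # a 10-year-old trend: roughly (0.53, 10, 190) more years
```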
For example, take something like a fad or trend; they don't have a hard end date like human lifespan, so it should follow Lindy's law.
However, the likelihood, on average across the population, that you observe a trend is going to be higher at the end of a trend lifecycle than at the beginning. This is baked into the definition - more and more people hear about a trend over time, so the largest quantity of observers will be at the end of the lifecycle, when the popularity reaches its peak.
In other words, if you are a random person, finding out about a trend likely means it is near the end rather than the middle.
"The Lindy effect applies to non-perishable items, like books, those that do not have an "unavoidable expiration date"."
And later in the article you can see the mathematical formulation which says the law holds for things with a Pareto distribution [2]. I'd want to see some sort of good analysis that "the life span of exponential growth curves" is drawn from some Pareto distribution. I don't think it's completely out of the question. But I'm also nowhere near confident enough that it is a true statement to casually apply Lindy's Law to it.
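For anyone curious, the Pareto version of the claim (as I understand the formulation in that article) is roughly the following; the shape parameter is the only free constant.

```latex
% If lifetime T has a Pareto tail, P(T > t) = (t_0 / t)^{\alpha} with \alpha > 1,
% then the expected remaining lifetime is proportional to the age observed so far:
\mathbb{E}\left[\, T - t \mid T > t \,\right] = \frac{t}{\alpha - 1}
```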
The argument given is the same as the one that I first ran across, not by that name, in https://www.nature.com/articles/363315a0. https://en.wikipedia.org/wiki/Doomsday_argument claims that it was a rediscovery of something that was hypothesized a decade earlier.
I hadn't tried to give it a name, or thought to apply it outside of that context.
As for the mathematical qualms, I'm a big believer in not letting formal mathematical technicalities get in the way of adopting an effective heuristic. And the heuristic reasoning here is compelling enough that I would like to adopt it.
The law only applies for certain types of processes, and is completely wrong for other types (e.g. a human who has lived 50 years may live 50 more, but one who has lived 100 years will certainly not live 100 more). So the question becomes: what type of process are you looking at? And that turns out to be exactly the question you started with: is there a fundamental limit to this growth curve, or not.
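A quick simulation of that distinction, purely illustrative; the "bounded" distribution below is a crude stand-in for a human lifespan, not real actuarial data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Lindy-type process: heavy-tailed (Pareto) lifetimes. Conditioning on having
# survived to a given age *increases* the expected remaining lifetime.
pareto_lifetimes = 1.0 + rng.pareto(1.5, 1_000_000)

# Non-Lindy process: roughly bounded lifetimes (stand-in for a human lifespan).
bounded_lifetimes = rng.normal(80, 10, 1_000_000).clip(0, 120)

def expected_remaining(lifetimes, age):
    survivors = lifetimes[lifetimes > age]
    return (survivors - age).mean()

for age in (10, 50, 90):
    print(age, expected_remaining(pareto_lifetimes, age),
               expected_remaining(bounded_lifetimes, age))
# The Pareto remaining lifetime grows with age; the bounded one shrinks toward zero.
```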
Did you even read the post? It's an estimate in a context where you have zero information on which to base a more accurate one. The author's point is that if you're making a different estimate, you need to actually say what information is informing that.
But that's the entire idea of Bayesian reasoning. Which has proven to be surprisingly effective in a wide range of domains.
I'm all for quantifying my ignorance, and using it as an outside view to help guide my expectations. Read the book Superforecasting to understand how effective forecasters use an outside view to adjust their inside view, to allow them to forecast things more precisely.
So for example, the longer a time bomb ticks, the less likely it is to go off any time soon. (Assuming the timer isn't visible.) :)
We expect fresh processes to terminate quickly and long running processes to last for a while longer.
I don't think you can use lindy on trends as if trends are static objects, but that's another conversation.
I mean, that's called "having an opinion".
Yes, that's called "having an opinion". Typically people writing argumentative pieces are doing so because they have a belief about the matter. I'm not sure what exactly you expect here.
> if he's wrong I would hope he owns up to it
I think Scott Alexander is pretty good about that.
Edit: in particular I don’t agree with
But if someone claims that the trend toward increasing AI capabilities will never reach some particular scary level...
One has to agree that the benchmark results are getting "scarier", which is not automatically implied by finding more goals to optimize for. The important thing is that we can show it in hindsight only. We don't know which other tasks we are currently mistaken about requiring intelligence. Maybe none of them are?
We don't know. We don't know what intelligence is. If we look at decades and even centuries of attempts to define intelligence, it all looks like moving the goalposts. When a definition of intelligence starts to include people or things we don't like to think of as intelligent, we change the definition.
All exponentials eventually become sigmoids, because exponential growth always exposes limiting factors that weren't limiting at the beginning. Silicon manufacturing had lots of room for high-margin customers like Nvidia even a year ago (by the mere virtue of outbidding lower-margin customers), but now it is mostly gone, and no amount of money will make fabs build themselves overnight.
[1]: https://stockanalysis.com/stocks/nvda/metrics/revenue-by-seg...
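A toy illustration of that dynamic, with made-up numbers for the growth rate and the carrying capacity: exponential and logistic growth are nearly indistinguishable until the limiting factor starts to bite.

```python
r, K = 0.5, 1000.0      # invented growth rate and carrying capacity (e.g. fab output)
x_exp, x_log = 1.0, 1.0
dt = 0.1

for step in range(201):
    x_exp += dt * r * x_exp                    # pure exponential: dx/dt = r*x
    x_log += dt * r * x_log * (1 - x_log / K)  # logistic: same rate, capped by K
    if step % 50 == 0:
        print(f"t={step * dt:5.1f}  exponential={x_exp:14.1f}  logistic={x_log:8.1f}")
# The two curves track each other early on; once x approaches K the logistic
# curve flattens into a sigmoid while the exponential keeps climbing.
```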
The naive expectation is that AI will slow down b/c Moore's law is coming to an end, but if you really think about the models and how they are currently implemented in silicon, they are still inefficient as hell.
At some point someone will build a tensor processing chip that replaces all the digital matmuls with analogue logamp matmuls, or some breakthrough in memristors will start breaking down the barrier between memory and compute.
With the right level of research funding in hardware, the ceiling for AI can be very high.
I'm pretty sure there's a 3 year design goal starting this year that'll do that to any of the qwen, deepseek, etc models. There's a lot you could do with sped up models of these quality.
It might even be bad enough that the real bubble is how much we don't need giant data centers, when 80-90% of use cases could just be a silicon chip with a model rather than, as you say, bloated SOTA.
If there's a breakthrough in memristors, you could end up with another 20x reduction in circuit elements (get rid of memory bottlenecks, start doing multiplication ops as log-transform voltage addition).
The ceiling is ultra high for how far AI can go.
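For what it's worth, the log-domain trick mentioned a couple of lines up is just the identity a*b = exp(log a + log b). A toy digital sketch of the idea (an analog log-amp circuit would do the addition with voltages instead):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.random((4, 8)) + 0.1   # keep entries positive so the log is defined
b = rng.random((8, 3)) + 0.1

# Each scalar multiply becomes an addition in log space; the accumulation in
# the dot product still has to happen back in the linear domain.
products = np.exp(np.log(a)[:, :, None] + np.log(b)[None, :, :])  # shape (4, 8, 3)
approx = products.sum(axis=1)

print(np.allclose(approx, a @ b))   # True, up to floating-point error
```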
All the easily verifiable domains such as mathematics, coding, and things that can be run inside a reasonable simulation are falling very very fast.
By next year if not sooner, mathematicians will be wildly outpaced by LLMs for reasoning.
So it's not impossible to have things that seem orthogonal, like generation speed or context length, have an impact on quality of result.
>My understanding is that this represents 3-4 “generations” of different technology (propellers, turbojets, etc). Each technology went through normal iterative improvement, then, when it reached its fundamental limits, got replaced by a better technology. The last technology, ramjets, reached its limit at about 3500 km/h, and there wasn’t the economic/regulatory will to develop anything better, so the record stands.
You don't have one sigmoid, you have multiple sigmoids stacked on top of each other. Airplanes aren't just one technology; they are multiple technologies that happen to do the same thing.
Each one is following a sigmoid perfectly. It only looks exponential(ish) because of unpredictable discoveries that let you switch to another sigmoid that has a higher maximum potential.
The same is true in AI. If you used the same architecture as GPT2 today you're in for a bad time training a new frontier model. It's only because we have dozens of breakthroughs that the capabilities of models have improved as much as they have.
That said exponential and sigmoids are the wrong model to use for growth. Growth is a differential equation. It has independent inputs, it has outputs and some of those outputs are dependent inputs again through causal chains of arbitrary complexity. What happens depends entirely on what the specific DE that governs the given technology is. We can easily have a chaotic system with completely random booms and busts which have no deep fundamental rhyme or reason. We currently call that the economy.
That said... if the exponential is made of stacked sigmoids, it's still an exponential on the whole! The fact that it's made of stacked sigmoids is relevant to the engineers making it, but not so relevant to the users or those otherwise affected by it.
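A small sketch of that "stacked sigmoids look exponential" point, with entirely invented parameters:

```python
import numpy as np

t = np.linspace(0, 50, 501)

def sigmoid(t, midpoint, ceiling, steepness=0.5):
    return ceiling / (1 + np.exp(-steepness * (t - midpoint)))

# Each "technology" is one sigmoid; each successor arrives a decade later and
# has a ceiling ten times higher than the last (all numbers made up).
total = sum(sigmoid(t, midpoint=10 * i, ceiling=10 ** i) for i in range(1, 5))

for i in range(0, 501, 100):
    print(f"t={t[i]:4.0f}  capability={total[i]:10.1f}")
# Every component saturates, but the envelope climbs roughly 10x per decade,
# i.e. it looks exponential on the whole.
```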
In this model, the exponential growth that everybody is freaking out about is only the realization of the modular software dream ("we'll only have to write an ORM once for all of human history!") and the sheer amount of knowledge in libraries.
It's at least falsifiable.
The idea is simply that the basic idea behind LLMs, that you're distilling the entropy out of the entire available world of text, is antithetical to creativity.
Further developing on the theme of self-play, humans have the ability to sense what we want (intellectually) and reach for it communally over thousands of years. It's an innate quality, and if AI starts participating (contrast to giving people psychosis) we will all be able to tell.
Well.. that's not true is it. There are human cultures that haven't reached for anything for thousands of years even though they clearly saw what western culture was doing and that they were being left behind badly.
Short of a third sigmoid appearing in the ML CompSci space, perhaps in the form of ongoing, repeated step-optimisations which will also have diminishing returns, intelligence growth is now limited by a few scaling problems that have already been worked on for a very long time.
Transistor counts have been doubling for over half a century now, but Moore's Law has already plateaued and reached limits on energy efficiency, and simply building new fabs is not something that we can do exponentially. The other growth limiter is electricity - there is no exponential supply of fossil fuels or power plants. Although manufacturing has scaled, PV tech improvements are also plateauing - and while storage is getting cheaper, it's still not economical vs fossil fuels (meaning: when we have to switch to it, growth slows down further), and we are unlikely to see battery efficiency scale fast enough to maintain the AI sigmoid.
I don't mean to be bearish here. There's so much money sloshing around that we can afford to put the smartest people, using unlimited tokens, on the task of finding small, incremental gains on the CompSci side of things that will have large monetary payoffs - hopefully allowing further scaling and increased emergent abilities of LLMs. Maybe we can squeeze the algos for quite a while. But I don't see that maintaining the same level of exponential as unlocking unlimited data or maxxing out the world's energy/fab capacity for long.
And I don't see why this is a massive issue except for the people who want to have some god-like super AI? Frontier LLMs are genuinely magic. Not "won't delete your production database" magic, but definitely a massive productivity gain for competent knowledge workers.
For example: the flight airspeed plot, it starts at ~1900. Now of course it should start there bc we did not have planes before that. But let's change the plot heading from airspeed to human speed a.k.a at what speed could humans move. now you can change the origin meaningfully as we had chariots, horses, bicycles, ships before that.
If you instead create a plot for the last 5000 years, you would see that the speed at which humans are able to move is rising exponentially, from walking on foot in a radius of 1,000-2,000 feet in a thick jungle 5000 years ago to reentering Earth's atmosphere at 25,000 miles/hour in 1969 (yeah, read that again). Even for AI, if you zoom out the plot to the last 70 years it will look exponential; if you zoom in to the last 2 months it will look absolutely flat. The point is that the whole sigmoid/exponential argument is a function of the origin (0,0).
My mental model has been 3D computer graphics: doubling the polygon count had huge returns early on but delivered diminishing returns over time.
Ultimately, you can't make something look more realistic than real.
I don't know what the future holds, but the answer to the question "can LLMs be more realistic than real" will determine much about whether or not you think the curve will level off soon.
In the same fashion, LLMs have to pay for themselves to keep the trendlines going. In a whole-systems -sense, mind, not "$2000/month is cheaper than hiring a developer" while the rest of the economy collapses.
This is the crux of the article. To a large extent continued progress depends on a stable increase in compute, an increase in training data, and an increase in good ideas to squeeze more out of both of them.
One calculation you could do is a survival function: for each of the above, how long before it is disrupted? For example, China could crack down on AI or invade Taiwan. Or data centers become politically unpopular in the US. Or, we could run out of great ideas. Very hard to predict.
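A back-of-the-envelope version of that survival calculation; every annual disruption probability below is an invented placeholder, not an estimate.

```python
# Placeholder annual probabilities that each input to AI progress gets disrupted.
# The numbers are made up purely to show the shape of the calculation.
risks = {
    "compute supply (e.g. Taiwan shock)": 0.03,
    "data centers become politically toxic": 0.05,
    "running out of good ideas": 0.04,
}

survival = 1.0
for year in range(1, 11):
    for p in risks.values():
        survival *= 1 - p
    print(f"year {year:2d}: P(no disruption yet) = {survival:.2f}")
# With independent hazards the joint survival probability decays geometrically,
# so even modest per-year risks compound noticeably over a decade.
```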
The situation is drastically different for problems that require interaction with the physical world to determine success.
As soon as you add a powerful simulator for physical problems to the self learning experience of the AI, you are extremely hampered by the large amount of needed computation.
For example, when a car starts, its speed and acceleration become more than zero. But what about the rate of change at higher degrees? It doesn't suddenly change from zero acceleration to non-zero. That means the car has a non-zero derivative at all degrees. In other words, the movement is exponential. The same thing happens in reverse when the car reaches a constant speed.
All positive growth eventually flattens out and becomes sigmoid, but a lot of phenomena experience negative growth and nose dive. No gentle curve, but a hard kink and perfect flat line at zero. Forever. I think it would be a stretch to categorize that pattern as sigmoid. Predicting a sigmoid pattern for negative growth implies some sort of a soft landing (depending on your definition of soft).
We can think of many populations that are no longer with us. So just a caution about over applying this reasoning in the negative case.
2. What's even worse than predicting that some growth curve flattens before X happens is predicting it will flatten before X happens but after Y happens, which is what we see when it comes to AI in software development. Too many people predict that AI will be able to effectively write most software, replacing software engineers, yet not be able to replace the people who originate the ideas for the software or the people who use them. I see no reason why AI capability growth should stop after the point it's able to write air-traffic control or medical diagnosis software yet before the point where it's able to replace air traffic controllers and doctors.
3. While we don't know much about AI (or, indeed, intelligence in general), we do know something about computational complexity. Some predictions about "scary things" happening (the ones I'm guessing Alexander is alluding to, though I can't be certain) do hit known computational complexity limits. Most systems affecting people are nonlinear (from weather to the economy). Predicting them requires not intelligence but computational resources. Controlling them, similarly, requires not intelligence but either computational resources or other resources. It's possible that people choose to give control over resources to computers (although probably not enough to answer many tough, important questions), although given how some countries choose to give control to people with below-average intelligence (looking at you, America), I don't see why super-human intelligence (if such a thing even exists) would be, in itself, exceptionally risky.
This is kinda laughable. Scott has been thinking and writing about AI for a long time
But what's going on in here is not that - it's reading tea leaves for maximum dramatic effect.
Strictly speaking, the original paradigm of scaling laws doesn't work any more. The assumption that we could achieve better performance simply through "vertical scaling", i.e. infusing models with exponentially more parameters and pre-training data, is no longer the driving force of AI progress.
Instead, the industry has pivoted toward inference-time scaling. Rather than relying solely on a massive, static neural network, modern architectures allocate more compute during the actual generation process, allowing the model to "think" and verify its logic dynamically.
Furthermore, the latest state-of-the-art models are no longer pure LLMs; they are compound neuro-symbolic systems that integrate external tools like REPLs, databases, and structured skill documentation to achieve things pure LLM vertical parameter scaling was not able to do.
Strictly speaking, "Why do scaling laws work?" is a question about the theoretical reasons the asymptotic decay takes the particular mathematical shape that it does.
Is the "capability" number on these LLM strengh graphs as tangible?
I think it would be interesting to visit a reality that obeys arbitrary abstractions, but I would personally never go there.
All exponentials eventually become sigmoids? Don’t think this can be true without qualifiers.
The issue is that the exponential-looking part of the sigmoid might contain all of human history, sure, but most folks who espouse this theory probably agree that over time everything reaches a steady-enough state to be considered non-exponential, or become oscillatory.
https://xcancel.com/peterwildeford/status/202963666232244661...
Ofc "full labor automation" has a certain spread of meaning. A sliver of population will always find ways to hold to a job or run one or many businesses. But there will be "enough" labor automation for it to be a social ticking bomb. That, in fact, does not depend on better models nor better AI than we have today. By 2045 there will be a couple of generations that has been outsourcing their thinking to AI for most of their adult lives. Some of them may still work as legal flesh of sorts, but many won't get to be middle man and will find no job.
Also, if you could replace your senator today by an untainted version of a frontier model (of today), would you do it? Would it be a better ruler? What are the odds of you not wanting to push that button in the next twenty years, after a few more batches of incompetent and self-serving politicians?
Going to need a big citation for that claim
Yeah well my prophet says he can beat up your prophet in a fight.
---
Here in reality, I'm not accustomed to taking random predictions without backing evidence as if they were truth.
Lol
This is not the context in which I hear about sigmoids vs exponentials. I hear it in regards to “the singularity”, not that AI won’t reach some pre-specified level. You may get AGI, you aren’t getting a singularity.
In Scott's mind, dangers from AI are not a known fact, but are somewhere between highly probable and a near-certainty. In his mind, there are well-grounded justifications for believing that AI poses substantial future dangers to the public. Therefore he also believes he should inform people about this, and strives to convince skeptics, so that we might steer clear.
It's easy to understand why someone who believes what you believe about AI would of course not warn people about AI. It's also easy to understand why someone who believes what Scott believes about AI would want to warn people about AI. Your contention is with his confidence for being worried about AI, not his reason for wanting to warn people.
1. If you're not treating my claim as a black box, explain explicitly what is your model of what the article was about? Are you aware, for example of the last paragraph of the article? I think that WAS what the article was about. Do you have specific opinions on e.g. how I went wrong and where my model differs?
2. If you are treating it as a black box, what's your default expectation based on the law of Nothing Ever Happens?
Just kidding, you don't need to explain anything. A"I" fearmongers should, though.

This does *not* imply the inevitability of AGI. It does not imply AGI is necessarily bad.
It does mean that "the capabilities of AI will eventually plateau" offers no meaningful predictive power or relevance to the overall AI discussion.
The entire plot of the Lord of the Rings could probably be compressed into less than 10 kB of text too.
Edit: this seems to be a controversial comment, but IMHO a blog of Scott Alexander's type is an art form, not just a communication channel.
It's better to look at the underlying factors. Money sources are drying up, nobody is making a profit outside of nVidia, most blackwell GPU's are likely not even installed yet and will probably be 2 generations behind when they finally are being used, data centers are hitting all sorts of obstacles getting built and powered and they're getting built slowly, most AI researchers seem to think that LLMs are a dead end, the newer models seem to be getting more expensive and sometimes worse, or even potentially are showing signs of model collapse (goblins..), the supposed productivity gains are not materializing.. AI has worse public sentiment than congress.. I could keep going. Some obscure "law" seems to pale in comparison to the hard evidence that the status quo is utterly unsustainable and none of these companies seem to have a realistic plan other than trying to become too big to fail essentially.
I like some of this guy's writing on other topics, but to me this is a prime example of what happens when you get public "intellectuals" talking about subjects far outside of their area of expertise. It's not as bad as Richard Dawkins' latest fall into psychosis, but it's basically the same phenomenon.
Lindy's Law is not actually a law and many exact minds will be provoked by the very name; it also fails spectacularly in certain contexts (e.g. lifetime of a single organism, though not necessarily existence of entire species).
But at the same time, I am willing to take its invocation in the context of AI somewhat seriously. There is an international arms race with China, which has less compute, but more engineers and scientists. This sort of intellectual arms race does not exhaust itself easily.
A similar space race in the 1950s and 1960s progressed from the first unmanned spaceflight to a moonwalk in a mere 12 years, which is probably less than what it takes to approve a bicycle lane in Chicago now.
I keep seeing this. Where did it come from? Has China said that they intend to attack other countries using AI? Have other countries declared that they intend to attack China with AI?
Also, why does anyone believe that AI could actually be that dangerous, given its inherently unpredictable and unreliable performance? I would be terrified to rely on AI in a life or death situation.
BTW your handle is an actual Czech word, minus a diacritic sign ("křupan"), and a bit amusing one. It basically means hillbilly. Not that it matters, just FYI.
While we're at it, the "exponentials are actually sigmoids" meme is not necessarily true. Exponentials never stay exponential forever, but a sigmoid is not guaranteed either. Overshoot-and-collapse also happens in tech, e.g. the dotcom bubble, or the successive AI winters.
Except innovation. When one sigmoid tapers off we keep finding new ones to keep the climb going.
Good example of this is number of submissions to neurips/icml/iclr. In 2017 that curve was exponential.
I could probably make increasingly larger fires for years if I was willing to burn the entire world.
> But if someone claims that the trend toward [X] will never reach some particular scary level, then the burden is on them to explain either:
> If they’re not treating [X] as a black box, and claim to be modeling the dynamics explicitly, then what is their model? Have they calculated the obvious things…
> If they are treating [X] as a black box, why isn’t their default expectation based on Lindy’s Law?
Like, the whole point is that in real life we do actually know things about situations and can model them; we fall back to Lindy's law when we know nothing at all. Further, arguments have justification to deviate from Lindy only when they give specifics about the situation they're modelling.
This doesn't say much, and the author fights their own points a couple times, suggesting that they maybe didn't think through what they wanted to write until they were in the middle of writing it and started realizing their assumptions didn't match what they expected the data to say.
I really don't get the point of what I just read.
Model reasoning is on an s-curve, which is improving.
Model intelligence is not the same as reasoning. It's a different axis, and one I have not seen much movement on.
See, humans have a recursive form of intelligence which is capable of self-reflection and introspection. LLMs can only reason about tokens which have already been emitted. Humans and LLMs do not share the same form of reasoning, and general human-like intelligence will not arise from the current architecture of LLMs. Therefore it is a mistake to assume that continual improvement on the reasoning scale will result in something that is equivalent enough to humans on the intelligence axis to replace all labor.
No definitely not saying this and I don’t quite know what it means
> Model reasoning is on an s-curve, which is improving.
Is this saying two different things? I think I might agree with this in principle as in maybe there is some sort of s curve or something like it but do we see evidence of this? Where?
> Model intelligence is not the same as reasoning. It's a different axis, and one I have not seen much movement on.
Can you clarify this? What is the distinction and what makes you say you have “not seen much progress?”
> See, humans have a recursive form of intelligence which is capable of self-reflection and introspection. LLMs can only reason about tokens which have already been emitted
LLMs do self-reflection and introspection in context, and tweaks such as value functions (serving a similar purpose to intuition or emotion) may make this better? Why do you feel self-reflection and introspection are a fundamental limitation here? Models reason over tokens they have emitted and also with their own sense and learned behavior already. Are you just talking about continual learning?

Also, I feel people just latch onto LLMs as if this is all of AI. Why? SSMs, memory networks, recurrent neural networks etc. are all part of AI but aren't as popular because they can't yet compete with LLMs in terms of scaling laws and training efficiency, due to e.g. hardware and software optimization and investment being focused on LLMs. If something else comes along that works better we'll just start scaling that.
> Humans and LLMs do not share the same form of reasoning, and general human-like intelligence will not arise from the current architecture of LLMs.
Very strong statement, any theoretical or experimental basis for this? I also don’t particularly care personally other than as a point of curiosity. Why does it matter if AI systems will develop equivalent reasoning mechanisms as humans? In fact it may be much better not to.
> Therefore it is a mistake to assume that continual improvement on the reasoning scale will result in something that is equivalent enough to humans to replace all labor.
Idk I didn’t say this explicitly but I also dont think it matters if we have a system “equivalent to humans” or one that “replaces all labor”.
> But those skeptics are initially responding to the constant AI hype claims that we are exponentially growing to AGI.
This is a meaningless statement or at best just strawmanning.
Attention is all you need took us by surprise and we don't know how big the wave is let alone if there are other waves behind it.
- Making connections to other subjects that an expert would miss. The hall of fame of sigmoid predictions is just excellent, I already know I'm going to be reminded of it some time in the future. Very entertaining way to get the point across.
- Writing about tricky concepts in a very accessible and elegant way, which experts are notoriously bad at doing themselves - they are often optimizing for other specialists.
- Being able to write with an air of speculation and experimentation with ideas that experts and institutions often can't afford. Experts have to maintain their track record; Scott Alexander can say "lol just double the timeline"
it doesn't help that sCotT aLexAndEr is also as close as you can come to the modern dressed up version of a eugenicist (again, not based on any actual expertise)
but I rest my case
Allowing slop articles like this literally prints valuation money for them.