I think one of the reasons is that LLMs are very good at the thing execs do for a living every day and, because of that, use to assess each other’s mental capacity: producing coherent-sounding speech (1).
Now that an LLM can do what a VP has dedicated their entire life to mastering, they assume the system can handle any task they delegate: programming, system design, project management, etc. Since the people doing that work are paid less than they are, surely it must simply be easier.
By this intuition, LLMs have now become intelligent and capable of handling anything at all, so we must find a way to integrate them into all of our products. And so, they’re now AI.
(1) That speech doesn’t necessarily have to carry a lot of meaning. Its main purpose is establishing the competence of the speaker.
Eliezer Yudkowsky is up in arms about it because it can bullshit better than he can and never has to take a break. It can replace the authors of The New York Times opinion page if not the actual reporters. Can it replace the CEO?
p.s. only slightly tongue in cheek. Use of AI by the state to manage social perception is on my bingo card. It will be very effective.
AFAICT, LLMs are stochastic predictors of words within a large context (a toy sketch of what I mean follows below). If you transfer (pun intended) that behavior over to humans, you would call a person like that a bullshitter, or at best a salesguy :-) A bullshitter as a person or team member is useful, but not scalable in the singularity sense: you can scale the output in quantity but not in quality. The stochastic parrot may move prospects through the funnel (necessarily toward a higher-IQ actual salesperson), but it will not create a patent from scratch (and probably will not close a deal).
So, we're not getting AGI. Given that the stochastic approach has hit scaling limits (there's no more data to feed it), we need a new approach. Are there approaches outside of LLMs that could get us to AGI (AlphaZero?), or is our industry just a bunch of stochastic parrots completing every sentence with "eventually we'll have AGI and everything will be either great or destroyed" (which is exactly what an LLM would say at this point)?
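To make the "stochastic predictor" point concrete, here is a minimal toy sketch of temperature-based next-token sampling. The candidate tokens, their probabilities, and the function name are all invented for illustration; no real model or API is being called.

    # Toy sketch of "stochastic prediction of words within a context".
    # The candidate tokens and their probabilities are invented for
    # illustration; this is not any real model's output or API.
    import math
    import random

    def sample_next_token(probs, temperature=1.0):
        # Sample one token from a {token: probability} distribution,
        # sharpened (T < 1) or flattened (T > 1) by the temperature.
        tokens = list(probs)
        logits = [math.log(probs[t]) / temperature for t in tokens]
        m = max(logits)  # subtract the max for numerical stability
        weights = [math.exp(x - m) for x in logits]
        return random.choices(tokens, weights=weights, k=1)[0]

    # A hypothetical distribution a model might assign after the
    # context "The deal will close":
    probs = {" eventually": 0.5, " tomorrow": 0.3, " never": 0.2}
    print(sample_next_token(probs, temperature=0.7))

Lower temperatures make the choice nearly deterministic; higher ones make the parrot more adventurous. Nothing in the loop knows whether " never" is true, which is the bullshitter analogy in a nutshell.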
>...The smartest people I have ever met—and they are the ones building this technology.
That's how you go beyond current LLMs. They'll figure something out.
"You can see the future first in San Francisco"
and while the following paragraphs make a good, positive, upbeat point, reading it out of context I cannot help thinking about homeless people on the streets and a pretty dark, dystopian future.
It's interesting how a place can be so radically different things at the same time.
Reminds me of William Gibson's opening line in "Neuromancer":
The sky above the port was the color of television, tuned to a dead channel.
So then, accurate?
There are the very rich, who can afford to be upbeat and positive, living in a bubble.
And the poor, living in a dystopia.
Seems like it is already happening, and that the statement is correct: San Francisco is a model for the future.
He's convinced and it's going to be terrible and it's going to be important and he's going to be part of it.
I guess it will surely be much clearer in 5 years.
My understanding of the models and linear algebra, combined with my experience of their performance improvements, makes me think he’s likely right. Some people want to pooh-pooh these machines. They point to the models’ current limitations and weak spots as though the underlying design, rather than the current implementation, were the limiting factor. They act as though real intelligence is always right.
I have been wrong before, but I am pretty convinced. Everyone’s argument against it happening is that it hasn’t happened yet.
No one knows what’s on the other side of AI being a better AI researcher than humans.
It's an addictive thought pattern, because it feeds the ego, provides a sense of purpose, allows escape from mundane problems, and is simple to sustain (keep believing in the Thing!)
> I remember the spring of 1941 to this day. I realized then that a nuclear bomb was not only possible — it was inevitable. Sooner or later these ideas could not be peculiar to us. Everybody would think about them before long, and some country would put them into action.... And there was nobody to talk to about it, I had many sleepless nights. But I did realize how very very serious it could be. And I had then to start taking sleeping pills. It was the only remedy, I’ve never stopped since then. It’s 28 years, and I don’t think I’ve missed a single night in all those 28 years.
-- James Chadwick
That said, I'm not sure the parallel between AGI and nuclear weapons is really that strong. Nuclear weapons are a game-changer on their own: once you have the warhead and some kind of delivery mechanism, you can affect events. AGIs are different in that they will manipulate and organise only information and knowledge, not physical matter directly.
Aschenbrenner sort-of considers this point, for example:
> Improved sensor networks and analysis could locate even the quietest current nuclear submarines (similarly for mobile missile launchers). Millions or billions of mouse-sized situational awareness autonomous drones, with advances in stealth, could infiltrate behind enemy lines and then surreptitiously locate, sabotage, and decapitate the adversary’s nuclear forces.
...but he doesn't really consider the wider issues of, y'know, capturing the additional information that locating those nuclear submarines would require, or the technological jumps needed to actually create and manufacture those mouse-sized drones.
Perhaps the AGI is going to solve all those practical and logistical issues, but that really does remain to be seen.
https://thezvi.wordpress.com/2024/06/07/quotes-from-leopold-...
https://thezvi.wordpress.com/2024/06/10/on-dwarkshs-podcast-...
https://forum.effectivealtruism.org/posts/zmRTWsYZ4ifQKrX26/...
This is characteristic of modern rationalist/EA discourse: opining on a subject in a heavily hedged manner that only dances around the topic. Making points too directly opens one up to being wrong, and is therefore avoided. The goal is to wax philosophical, flexing your understanding and touching multiple sides of the argument to demonstrate your breadth of knowledge, not to actually make a point.
In hindsight, the winters don't look as long as they seemed living through them.
If we want to say the last winter ended in 2000, that was 24 years ago now.
It does seem like right now, if there isn't any breaking news for a couple of weeks, people think a new winter is starting.
If the tech is going to transform humanity into a new economic paradigm and give the country that controls it a decisive advantage as the sole superpower, then being an investor is either futile or meaningless. Even doing nothing and just enjoying the ride seems more rational.
Even he needs a nest egg.
Neither will more advanced work be replaced by 2027. End of story.
High schoolers are already not widely employed for knowledge-based work. It's almost all physical labor. The few knowledge jobs among my peers in high school were tutoring and call centers, both of which are arguably replaceable with LLMs today.
But "replaced" isn't really the right frame either. In my very physically constrained industry I can't point to any specific job titles that have been replaced by "AI", but I do see a meaningful volume of labor being taken on by ML-dependent technologies like robotics and AR. That leads to increased productivity overall, which leads to the bottom-line results the users are seeking.
So to me it seems reasonable that we will see the same with more advanced work. The better the AI tools get, the more work they will be able to assist with, including the making of better AI tools.
And regarding your other point:
> both of which are arguably replaceable with LLMs today
Arguably, but they are not. That's my point.
But is the "-ed" in "worked" a problem?
My beef is with the Dwarkesh Patel podcast. While he has some very good interviews (Carl Shulman, or even Zuckerberg), he seems to have a lot of rambling conversations with very young employees of AI startups (OpenAI). I don't get value from those, because they don't really say anything beyond patting each other's back about how awesome they are for having been hired by their companies. I think he should focus on people with actual contributions to the science who have meaningful things to say.
Aschenbrenner et al. are clearly smart, but they have no real-world experience to base their (entirely biased) predictions on. It's all very on-brand for the EA and EA-adjacent folks, who spend way too much time on the internet and not enough time out in the real world.
This is basically a self-assessed IQ test.
I ask because, ever since InstructGPT, I've been noticing that people not only disagree on every letter of that initialism, but also sometimes mean by it something not present in any letter.
So for me, even InstructGPT counts as a general-purpose AI; for OpenAI, it's not what they mean by AGI (they require it to be economically significant); and it definitely isn't superhuman (which some people require; I'd call that ASI, not merely AGI). And we still can't agree on what consciousness is, let alone what it would take for an AI to have it, yet some people include that in their definition of AGI; I never have.
This is why "AGI in X years" is such an interesting perspective.
(And why I'm deeply concerned about my downvoters.)
Predictions at this time scale are bullshit, at least in modern technological civilization. It would have been different in the Stone Age, when things barely changed for millennia, but we don't live in the Stone Age.
For some perspective:
100 years ago, radio was barely a thing, airplanes were made of wood and canvas, Africa was thinly populated, antibiotics didn't exist, semiconductors weren't useful yet and the British Empire reached its largest extent ever.
200 years ago, railway wasn't a thing, electric telegraphy wasn't a thing, germ theory wasn't a thing, photography wasn't a thing, most parts of Europe were still feudal, Oregon and California were still Amerindian country, the US was an unimportant country on the fringe of the developed world, and official science of the day denied the very existence of meteors as space rocks, because only dumb hicks would believe in rocks falling from the sky, ya know?
Don't tell me that people back then could make accurate predictions about the technological level of 2024. Neither can your smartest person, even though I don't doubt their smarts. Making long-term predictions is about as reliable as making long-term weather forecasts. (Not climate. Weather.)
200 years ago you could predict that your life would be about the same as your parents' lives, so you had a lot of shared experience, which facilitated the transfer of culture from one generation to the next. These days our lives are very different from our parents' lives, so it is difficult to share a perspective on common experiences and to pass that wisdom on. This implies there is a self-limiting factor to accelerationism.
My own prediction for AGI is a millennium at least. ;-)