https://80000hours.org/2025/03/when-do-experts-expect-agi-to...
>One way to reduce selection effects is to look at a wider group of AI researchers than those working on AGI directly, including in academia. This is what Katja Grace did with a survey of thousands of recent AI publication authors.
>In 2022, they thought AI wouldn’t be able to write simple Python code until around 2027.
>In 2023, they reduced that to 2025, but AI could maybe already meet that condition in 2023 (and definitely by 2024).
>Most of their other estimates declined significantly between 2022 and 2023.
>The median estimate for achieving ‘high-level machine intelligence’ shortened by 13 years.
Basically every median timeline estimate has shrunk like clockwork every year. Back in 2021 people thought it wouldn't be until 2040 or so that AI models could look at a photo and give a human-level textual description of its contents. I think it's reasonable to expect that the pace of "prediction error" won't change significantly, since it's been on a straight downward trend over the past 4 years, and if it continues as such, AGI around 2028-2030 is a median estimate.
I suppose some are genuine materialists who think that, ultimately, that is all we are as humans: just a reconstitution of what has come before. I think we're much more complicated than that.
LLMs are like the myth of Narcissus and hypnotically reflect our own humanity back at us.