For every example of someone overestimating how long a breakthrough would take, there are at least ten examples of people being too optimistic with their predictions.
And with AGI, you also have the likes of Sam Altman making up bullshit claims just to pump up investment in OpenAI. So I wouldn’t take many of their claims seriously either.
LLMs are a fantastic invention. But they’re far closer to SMS predictive text than they are to generalised intelligence.
Though what you might see is OpenAI et al. redefining the term “AGI” just so they can claim they’ve hit that milestone, again purely for their own financial gain.