I think that was part of the LessWrong eschatology.
It doesn't make sense with modern AI, where improvement (be it learning or model expansion) happens separately from its normal operation, but I guess some beliefs persevere very well.
Modern AI also isn't AGI. We seem to get a revolution at the frontier every five years or so; it's unlikely the current transformer-based LLM architecture will remain state of the art for even a decade. Eventually something more capable will become the new "modern."