Alternatively, what do you imagine this “AGI” you speak of to be?
ChatGPT is not autonomous or capable of doubling global GDP.
The founders of OpenAI were drawn from an intellectual movement that made very specific, falsifiable predictions about the pipeline from AGI (in its original sense) to superintelligence, predictions that have since been falsified. OpenAI talks about AGI as if it were ASI because, in their minds, AGI inevitably leads to ASI in very short order (the standard assumption was weeks or months). That has proven not to be the case.
General: able to solve problem instances drawn from arbitrary domains.
Intelligence: definitions vary, but the application of existing knowledge to the solution of posed problems works here.
Artificial. General. Intelligence. AGI.
In contrast to narrow intelligence, like AlphaGo, Deep Blue, or air traffic control expert systems, ChatGPT is a general intelligence. It is an AGI.
What you are talking about is, I assume, a superintelligence (ASI). Bostrom is careful to distinguish these in his writing. Bostrom, Yudkowsky, et al. made some implicit assumptions that led them to believe any AGI would very quickly lead to ASI. This is why, for example, Yudkowsky had a very public meltdown two years ago, declaring that the sky is falling:
https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-annou...
(Ignore the date. This was released on April 1st to give plausible deniability. It has since become clear that it really does represent his view.)
The sky is not falling. ChatGPT is artificial general intelligence, but it is not superintelligence. The theoretical model Bostrom et al. used to predict AGI behavior does not match reality.
Your assumptions about AGI and superintelligence are almost certainly downstream of Bostrom and Yudkowsky. The model on which those predictions were based has been falsified. I would recommend reconsidering your views and adjusting your expectations accordingly.