Well... not exactly. The progress of technology has been inspiring, but besides the engineering side of things not much has really changed since the 8086 and Smalltalk days. We're still using computers with roughly the same architecture, limitations and scaling flaws. Nobody has reinvented the wheel (in my lifetime) to obsolete the old one.
I see a lot of flaws in this speculation, even given the benefit of "long term" doubt. For starters, this "created" AI is not a sovereign human; it is the property of whatever private interests built it. In that scenario, your AGI does not "win" the Nobel Peace Prize. The party that owns the AI would assume credit for the findings, for no reason other than the fact that the AI is their property. Even if they disowned it, the only responsible way to hold it accountable is to tie the AI to its creator. If you think AI won't be required to be "street legal" in such a future, you're not thinking hard enough.
...and then there's the technical angle. We straight-up cannot engineer a human body. We can try to replicate biological processes in a pragmatic, mechanical fashion, but we cannot 'build' the entirety of a human body. The only way for you to create an AI the way you've described is to destroy a human consciousness and replace it with AI, which is just about the pinnacle of human rights violations attainable. And even if we make it that far, we then have to out-engineer biology within the same power envelope as a human mind. It's a suicide mission in every sense of the word; tangible human lives would be lost in the pursuit of computational nihilism.