It hasn't worked for the airline industry, pharmaceutical companies, banks, or big tech, to name a few. I don't think it's wise for us to keep trying the same strategy — which is also the probable fate of any attempt by humans to regulate an AGI superintelligence.
People often get wrapped up in an AGI's incentive structure and what intentions it will have, but IMO we have about as much chance of controlling it as wild rabbits have of controlling humans.
It would be a massive leap in intelligence, likely with concepts and ways of understanding reality that we either never considered or aren't capable of grasping. Again, that's *if* we make an AGI, not these LLM machine learning algorithms being paraded around as AI.