At the very least, the system would need extensive training. There's no reason to believe that the initial versions will have some superhuman self-training ability that lets them absorb a lifetime of information in a very short period.
Also, OpenAI's main strategy for ensuring it's safe seems to be simply being the first group to get there, then withholding their research from everyone except select "safe" partners. That can only make the deployment less democratic, not necessarily safer.
As for made-up versus real problems, my guess is that for someone like Altman, who benefits so much from the system, it's hard for his worldview to really acknowledge extreme flaws such as fundamental corruption.