This presumes that the AI has access to objective reality. Instead, the AI has access to subjective reports, filed by fallible humans, about the state of the world. Even if we grant that an AI might observe the world on its own terms, the language it would use to describe what it perceives is itself subjectively defined by humans.
"Led by the Department of Commerce (DOC) through the National Institute of Standards and Technology (NIST), revise the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change."
Clearly there are terrible governments, but if it's not government tackling these issues, then the people will have limited control, and it will simply be those with the most money who define the landscape.
As I understood the original premise of the US government, it was to be constitutionally limited in scope. I know that ship sailed a long time ago, but it doesn't follow that we should have a government centrally planning which AI content is right or wrong.