The problem is, if it only takes one person using AI malevolently to end the world, then human nature is, unfortunately, something that can be relied upon.
In order to prevent that scenario, the solution is likely to be more complicated than the problem. That represents a fundamental issue, in my view: it's much easier to destroy the world with AI than to save it.
To use your own example: currently there are far more nukes than there are systems capable of neutralizing them, and that's because of the complexities inherent to defensive technology; defense is vastly harder.
I fear AI may be not much different in that regard.
Then you could say the exact same thing you're saying now... but in that case, nukes-slash-nuclear-energy still shouldn't be distributed to everyone.
Even nukes-slash-anti-nuke-shields shouldn't be distributed to everyone, unless you're absolutely sure the shields will scale at least as fast as the nukes do.