It is a tool. If I use a tool for illegal purposes, I have broken the law, and I can be held accountable for it. If the laws are deficient, strengthen the laws and punish people for wrong deeds, regardless of the tool at hand.
This is a naked attempt to build a regulatory moat while capitalizing on fear of the unknown and on ignorance. It's an attempt to regulate research into something that has no independent ability to cause harm without a principal directing it.
I can see a day (perhaps) when AIs have some form of independent autonomy, or even display agency and sentience, when we can revisit the question. Other issues come into play then as well, such as the morality of owning a sentient being and what that entails. But that is way down the road. And even further down it if Microsoft's proxy closes the doors on everyone but Microsoft, Google, Amazon, and Facebook.