It seems that gun control, though imperfect, has had a good deal of success in the regions that have implemented it, and the legitimate, non-harmful capabilities lost seem to me a worthwhile trade for the gains. (Reasonable people can disagree here!)
Whereas it seems to me that if we accept the proposition that the vast majority of future code will be written by AI (and I do), these valuable projects taking hard-line stances against it will find themselves either having to retreat from that position or facing insurmountable difficulties in staying relevant while holding to it.
It is the conservative position: it will be easier to walk back the policy and start accepting AI-produced code down the road, once its benefits are clearer, than it will be to excise years of AI-produced code later if some technical or social reason demands it.
Even if the promise of AI is fulfilled and projects that don't use it end up comparatively smaller, that doesn't mean they have no value, in the same way that people still make wooden furniture by traditional methods today even though a company can produce the same piece more cheaply in an almost fully automated way.
Why would the AI-fans even care if others who decide not to use it fall behind? Wouldn't they get to point and laugh and enjoy the benefits of "keeping up"? Their fervor should be looked at with suspicion.
There are many others like me who share this expectation, and while we certainly may be wrong, it's not because of some sinister plan to make the prophecy come true. (There are certainly some who do have sinister or profit-seeking motives, of course!)
This holds even though, in many cases, bad actors are only a few minutes' drive away (just over the Chicago->Indiana border, for example).
(As an aside, this reminds me of the Object-Oriented Ontology trend, which specifically /tried/ to imbue large-scale phenomena that were difficult to understand discretely with agency. I remember "global warming" being one of those things, and I can see now how this philosophy would have done more to obscure the dominion of experts on that topic.)
But post Sandy Hook, it's clear which side prevailed in this argument.
Those in favor of gun control aren't trying to lower human responsibility; they're trying to place stricter limits on guns than the status quo does. Those against gun control are trying to loosen limits on guns.
Here this person is proposing making individual responsibility stricter than it is today, and they're not arguing for loosening limits on the tech either.
Isn't that practically the opposite of your analogy?