My lane is this: I think we're producing new and powerful tools. Some of those tools can be used as weapons. Some of those tools seem to be footguns. But they're relatively broad tools that can be used for many purposes, and the idea that the toolmakers need special oversight because one day someone might figure out how to get the tool to produce a plan for a chemical weapon (Sec 3 (n)(1)(A)) seems misguided to me.
Lots of kinds of tools are powerful. SMT solvers, logic engines, and finite element simulators are all powerful, and it seems about as plausible that a team of smart people with a giant cluster of GPUs could use 10^26 operations to design a weapon with one of these other approaches ... or perhaps they could discover something really valuable! I don't think ML is in principle more dangerous than any of these other flexible computational tools; it's just changing the fastest and getting the most resources right now.
You do realize you're undermining your own point here? Photoshop isn't remotely similar to your description here...
But also, note that I'm not arguing this is a great bill. Again: I haven't had a chance to read it yet. Maybe it's terrible regardless. I'm just saying that if I take the (terse...) commentary I'm reading about it online at face value, that commentary very much achieves the opposite of "convince the reader this is a terrible idea".
Your initial reaction, which I think comes off as snide and flippant, is that it's fine for AI developers to face criminal penalties for building AI, presumably because it's capable of doing harmful things, even if those developers do not use it for harmful purposes. You don't seem to attempt any positive argument for why those developers in particular should be criminally liable, but developers of other broadly applicable tools should not.
In any case, perhaps it's all moot, because the person who was repeatedly downvoted to death was correct: I did look at the text and don't actually see criminal penalties anywhere. There are civil penalties, though, and I don't think AI developers in particular should have to convince regulators that their work is 'safe' to avoid fines, but at least the stakes are lower.