While I agree in a general sense, it might make sense to regulate the tools themselves.
An analogy is gun control laws. Murder is a crime whether it happens with a gun or without, so in theory there would be no need to regulate guns ("guns don't kill people etc."). But most countries in the world still regulate them.
Maybe it makes sense to regulate AI similarly? Requiring that guardrails be built in to prevent it from being used to generate fake information or information that helps people commit crimes. Making sure that it does not leak private information (and being able to prove this somehow). Regulating whether and how it can be used as a therapist.
Although I wonder whether it's still too soon for such regulation.