Alright, you hooked me in. What are they?
A dramatic reduction in cost and increase in effectiveness of some undesirable behavior is exactly when you should look for new ways to address it. The goal of making things illegal is to prevent their occurrence, and if they get suddenly much cheaper and more effective, then your prior methods of deterring them will no longer work.
AIs like Copilot et al. are trained on poorly written code full of bad security practices (there is more of it out there than you think), and so they reproduce those bad practices in the code they generate.
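To make that concrete, here is a minimal sketch of the kind of insecure pattern that is all over public repositories and therefore in the training data: building an SQL query by string concatenation. The table and function names are made up for the example; the point is only to contrast the injectable version with a parameterized one.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Bad practice, common in training data: user input concatenated
    # straight into the SQL string -> classic SQL injection.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver handles escaping, no injection.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # injection dumps every row
print(find_user_safe(conn, payload))    # returns nothing, as it should
```

An assistant that has mostly seen the first version will happily suggest it again, which is exactly the problem.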
Also, because AIs are fallible, there is the risk of spreading even more misinformation than we already have. And there is the retrieval of credentials through prompt hacking, because people push their credentials to public repositories.
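The credentials problem starts with a habit like the one below. This is just an illustration (the key and the environment variable name are hypothetical): a secret hardcoded in a file that gets committed can later be scraped, or surface in a model's training data, whereas reading it from the environment keeps it out of version control.

```python
import os

# Bad practice that ends up in public repos (and from there in
# scrapes and training data): a secret hardcoded in source.
API_KEY = "sk-live-0123456789abcdef"  # hypothetical leaked key

def get_api_key():
    # Safer: read the secret from the environment at runtime, so it
    # never lands in the repository. Variable name is hypothetical.
    key = os.environ.get("MY_SERVICE_API_KEY")
    if key is None:
        raise RuntimeError("MY_SERVICE_API_KEY is not set")
    return key
```

Once a key like that is in a public commit, rotating it is the only fix; deleting the line does not remove it from history.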
Then there is the misuse of AI-generated deepfakes: for example, a Spanish girl was blackmailed with allegedly real nude pictures of her, and the same technique could be used for far worse.
And I haven't even scratched the surface of the copyright/artistic side of AI.
The risk isn't the AI per se, but what people can do with it. Not everything is beautiful. But there are also good things about AI, I agree.
I think there is a need for some form of regulation one way or another, the sooner the better. I don't expect regulation to restrain creativity, but to help prevent bad things from happening.