At least the cases I've seen in the news have all been of the form "person with established mental health issues enters a feedback loop with a sycophantic AI." There may be cases that don't fit that pattern, but I haven't seen them make headlines yet.
It's also worth noting that I don't think we need a license or a ton of surveillance here. I think we can do a better job of moderating AI output: catch the AI telling people their family is plotting to murder them, and send them a crisis hotline number instead. Sort of like what search engines do when you start googling methods of self-harm.
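For concreteness, here's a minimal sketch of what that kind of output gate could look like. It's purely illustrative: the `looks_like_crisis_content` name and the keyword check are hypothetical stand-ins for a real trained moderation classifier, not anyone's actual implementation.

```python
# Illustrative sketch of an output-moderation gate. The keyword check is a
# stand-in for a trained classifier; all names here are hypothetical.

CRISIS_RESOURCE = (
    "It sounds like you're going through something difficult. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def looks_like_crisis_content(text: str) -> bool:
    """Placeholder classifier: flag paranoid or self-harm themes.

    A real system would use a trained moderation model, not keywords.
    """
    red_flags = ("plotting to murder", "everyone is against you", "harm yourself")
    lowered = text.lower()
    return any(flag in lowered for flag in red_flags)

def moderate_reply(model_output: str) -> str:
    """Swap a flagged model reply for a crisis resource before it's sent."""
    if looks_like_crisis_content(model_output):
        return CRISIS_RESOURCE
    return model_output

if __name__ == "__main__":
    print(moderate_reply("Your family is plotting to murder you."))  # replaced
    print(moderate_reply("Here's a recipe for banana bread."))       # passes through
```

The point isn't the specific check, it's where it sits: the gate runs on the model's output rather than the user's input, which is roughly how the search-engine self-harm interstitials work too.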