The obvious issue is that it can confidently output the wrong answer, though this should become less of a problem as model accuracy continues to improve.
Really, the bigger problems are the ones doctors and therapists already face: patients are not always reliable narrators, and they may mis-describe symptoms, inject their own prejudices, or outright conceal important information. In particular, many patients may not even have the reading and writing skills needed to respond meaningfully to even the best-designed prompts.
This is the muddy domain of "sorry, you just need a human available". There's too much risk, too many edge cases to accommodate, and since the tech actually offers a _convenience_ over doctor visits, you could cause more harm than you might expect if people skip their doctors for the "cheaper" option. This is particularly relevant to me as an American, where health care is a mess and mental health care is far worse.
A better approach might be to assume this type of tool is only available to patients already receiving some kind of treatment, whether that's a doctor's appointment or in-patient care at a behavioral health facility. There's a real chance to reduce friction and perhaps improve patient outcomes if such a tool were offered as an early intake survey, or, as you have done here, as a way to teach coping skills that are highly individualized to each person.
Really though, yeah, I think a qualified professional should be in the room, making sure things go well.