To an extent, but the words you give an LLM are the entirety of what it has to go on, and again, it is sycophantic. If you tell it you have depression and you insist, it will do its best to agree. In the exact same conversation, you could later convince it you have schizophrenia instead. A human wouldn't buy it.
A trained psychologist is going to use their procedural training to diagnose you, not just your text input: they are asking you questions with subtext, and you may not even realise what they learned about you from your answer. With an LLM, you are loading its context with your world view, and it will go off that.