Well, that's kind of the point: if you have actually used LLMs for any amount of time, you are bound to find out that you can't have a fulfilling, empathetic relationship with them. Even if they offer a convincing simulacrum of a thinking being at first sight, you will soon discover there's not much underneath. They generate grammatically perfect text that seems to answer your questions in a polite and knowledgeable way, but they will happily lie to you and hallucinate things out of thin air. LLMs are tools, humans are humans (and animals are animals - IMHO you can have a more fulfilling relationship with a dog or a cat than you can with an LLM).