I went home for the holidays last month. One day, my mom had a complaint about her food delivery and raised a ticket in the app. She was assigned "someone" on chat, and she carefully typed out her issue. Then she got a call from the same "person," who asked her to explain her issue in detail. After the call, she came to me confused and frustrated. She said the "person" on the other end kept offering unrelated solutions, then signed off saying they were happy to have resolved her issue.
Of course, you've guessed that this "person" on the other end was an LLM, which I figured out once she handed me her phone. I was livid, and despite having better things to do, I spent the next few hours sending a notice to the company's legal team. They paid a small settlement to shut the issue down.
Looking back, if the app had at least disclosed that she was talking to a machine and given her an option to escalate to human support, the situation would never have deteriorated.
I feel LLMs should never be used for negative interactions like complaints, or for transactional interactions like placing orders. Their scope should be limited to answering factual, generic questions, like "What's my order's ETA?"