Hmm, maybe. Though my initial reaction is that the response isn't "emotional". An LLM isn't capable of emotion. Sure, it can assign a quantitative sentiment score to words/phrases... though that's not the same as an actual emotion.
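To make that distinction concrete, here's a toy sketch of lexicon-based sentiment scoring (the words and weights are invented for illustration; real models learn theirs from data). The point is that the "sentiment" is just arithmetic over numbers, not a felt state:

```python
# Toy illustration, not any model's actual internals: the lexicon
# and weights below are made up for the example.
LEXICON = {"great": 0.8, "love": 0.9, "bug": -0.6, "broken": -0.8, "fine": 0.2}

def sentiment_score(text: str) -> float:
    """Average the per-word weights; words outside the lexicon count as 0."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(LEXICON.get(w, 0.0) for w in words) / len(words)

print(sentiment_score("love this great tool"))   # positive score
print(sentiment_score("broken bug everywhere"))  # negative score
```

A number comes out either way; nothing in there "feels" anything.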
If the tool being used generates fantastical fiction that isn't supported by factual data or verifiable systems, then eventually that falsehood will bubble to the surface: whether immediately (caught by my own bullshit-meter), in the near future (during an agent session that reveals itself to be built on a hallucination), or in the long run (a production bug or tech debt).
It's not my job to get an ideal "emotional" response from a machine. It's my job to deliver deterministic results with minimal fuck-ups.
Emotion has no place in this exchange. If I don't know something, aren't I expected to admit it? And then do the work to master that knowledge and bring it into my domain?
Factual knowledge does not cease to exist because someone's in a bad mood...