When the input context contains words that evoke certain human emotions, the core LLM function will generate a token probability distribution representative of the emotions that humans displayed in the training texts.
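As a minimal sketch of this point (not taken from any particular source), one can inspect how the next-token distribution of a small open model shifts when a neutral word in the prompt is replaced by an emotionally charged one. The choice of the Hugging Face transformers library and the "gpt2" model is purely illustrative.

```python
# Sketch: compare next-token distributions for a neutral vs. an emotionally
# loaded prompt. Assumes the Hugging Face "transformers" library and the
# small "gpt2" model, chosen here only for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt, k=5):
    """Return the k most probable next tokens and their probabilities."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode(i), p.item()) for i, p in zip(top.indices, top.values)]

# The same sentence frame, with and without an emotionally charged word:
print(top_next_tokens("After reading the report, she felt"))
print(top_next_tokens("After reading the insulting report, she felt"))
```

The two printed lists will typically differ, with the second prompt assigning more probability to tokens associated with distress, which is exactly the mimicry of recorded human behavior described above.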
This is expected and unremarkable. An LLM mimics the human behavior recorded in its training texts, in much the same way that a photographic image of a human face mimics the appearance of that face.
A photographic image is designed to reproduce the light field created by a face reflecting the ambient light; an LLM is designed to reproduce the typical conversational behavior recorded in its training texts.
Depending on how it was trained, one should expect an LLM to be affected by the choice of words in the input in a way similar to how a human would be affected.
However, that does not mean that an LLM showing signs of emotional distress actually feels pain. An LLM is designed for mimicry, and it feels no more pain or happiness than a photograph of a wound feels the pain of the wound, or a photograph of a smiley face feels happiness.
The fact that current LLMs do not actually feel the human emotions they may mimic accurately does not mean that one could not build a robot with built-in mechanisms for feeling pain and various emotions, mechanisms serving a functional purpose similar to the one they serve in an animal, rather than being used for mimicry. However, for now it makes no sense to attempt such a thing, because in a deterministic program there are better ways to ensure that a robot is "loyal" to its owner and acts in self-preservation when possible.