I agree with you that in principle it will be possible to design an artificial automaton that has something equivalent to human emotions (though I do not believe that it makes sense to attempt to design such a system).
However, I do not believe that an LLM is such a thing, because the training algorithm merely ensures that the LLM will mimic whatever is recorded in its training inputs, whether or not human emotions appear in them. There is nothing in the structure of an LLM that can generate emotions by itself. If you train an LLM, for example, only on programs without comments or only on mathematical formulae, it will never display any kind of emotion.
Human emotions can be recorded in a static way in a book or a movie, but we do not say that the book or the movie itself has human emotions.
With an LLM, the behavior is much more complex, because it does not just play back a sequential recording of human emotions: it can combine them in various ways while responding to stimuli similar to those that elicited emotions in the training texts.
But regardless of this behavioral complexity, the human emotions are not generated intrinsically by the LLM; they correspond to those previously recorded in the texts used for training, so the model merely mimics humans.