Nothing an LLM says can, in itself and as of now, be used as evidence of what it 'feels'. It has not been established that its output is linked to anything other than the training process (data, loss function, optimizer, etc.), and certainly not to qualia.
On the other hand, it is well known that we can (and commonly do) make an LLM produce any output we choose, and that its general tendency is to reproduce any kind of sequence that occurs sufficiently often in its training data.
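The "regurgitation" point can be made concrete with a toy model. The sketch below is not an LLM, just a minimal bigram frequency counter (all names are hypothetical, chosen for illustration); it shows how, under greedy decoding, whichever continuation occurs most often in the training data is the one the model emits, regardless of any inner state:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count how often each token follows each other token."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
    return counts

def most_likely_next(counts, token):
    """Greedy decoding: emit the continuation seen most often in training."""
    return counts[token].most_common(1)[0][0]

# A toy "training set" in which one continuation dominates.
corpus = [
    "i feel happy",
    "i feel happy",
    "i feel sad",
]
model = train_bigram(corpus)
print(most_likely_next(model, "feel"))  # prints "happy": the frequent sequence wins
```

Real LLMs are vastly more sophisticated, but the underlying observation stands: the output reflects the statistics of the training data, so "i feel happy" appearing in the output is evidence about the corpus, not about feelings.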