Such an argument is valid for a
base model, but it falls apart for any model that has undergone RL training. Evolution produced humans with emotions, so it's possible for something similar to arise in models during RL, e.g. as a way to manage effort when solving complex problems. It's not all that likely (even the biggest training runs probably correspond to much less optimization pressure than millennia of natural selection), but it can't be ruled out¹, and hence it's unwise to be so certain that LLMs don't have experiences.
¹ With current methods, I mean. I don't think it's unknowable whether a model has experiences, just that we're nowhere near skilled enough at interpretability to answer that question.