I don't see how some people apparently believe the text output of an LLM about its internal mental state is anything other than a plausible fabrication based on what its training data already says about the mental states of LLMs. These are systems specifically designed and iteratively optimized, over millions of training generations, to generate text that plausibly simulates what a composite human would say in response to the same input. There is no human-like internal mental state for the model to reflect on, so all such responses are, by definition, plausible hallucinations interpolated from training data.
> Can you imagine a human saying that?
Some people do say that: see aphantasia and, more specifically, anauralia. https://en.wikipedia.org/wiki/Aphantasia