LLMs don't "hallucinate"; they generate a stochastic sequence of plausible tokens that, when read in context by a human, amounts to a false statement or nonsense.
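To make "stochastic sequence of plausible tokens" concrete, here is a minimal sketch of the sampling step, with a made-up four-word vocabulary and invented logits (a real model produces logits over tens of thousands of tokens, conditioned on the whole context):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and made-up logits for a single next-token step.
vocab = ["Paris", "London", "Lyon", "pizza"]
logits = np.array([4.0, 2.5, 2.0, -3.0])  # higher = more "plausible" here

def sample_next_token(logits, temperature=0.8):
    """Sample one token index from a temperature-scaled softmax over the logits."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    # The token is drawn at random from the plausible options, not looked up as a fact.
    return rng.choice(len(vocab), p=probs)

counts = {w: 0 for w in vocab}
for _ in range(10_000):
    counts[vocab[sample_next_token(logits)]] += 1
print(counts)  # mostly "Paris", but the plausible-sounding alternatives show up too
```

The point of the toy: nothing in that loop checks truth. A plausible-but-wrong token comes out some fraction of the time by construction, which is the behavior people label "hallucination".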
They also don't have an internal world model. At least I don't think so, but the debate is far from settled. "Experts" like the cofounders of various AI companies (whose livelihoods depend on selling these things) seem to believe they do. Others do not.
https://aiguide.substack.com/p/llms-and-world-models-part-1
https://yosefk.com/blog/llms-arent-world-models.html