"Making shit up in order to fulfill some requirement" is the definition of lying, so whether it's a human or an AI, just making shit up in order to generate prompted output is flat out lying. Not "hallucinating". And the best part is that until LLM get valitidy checks baked in,
even the things they get right are lies if presented with authority, because the LLM doesn't know whether they're true or not. In fact, the LLM doesn't know, full stop. It's still just a very well-crafted autocomplete, and literally nothing more. So if we're going to anthropomorphise, call them what they'd be when humans do the same:
lies, and damned lies.