I'd settle for a poor facsimile, and argue strongly against human-like.
Human-like would be, let's say, a Cylon skin-job. ChatGPT is just a frakking toaster, at best.
But I agree with GP's eager-anthropomorphization complaint. When algorithms produce verifiably wrong output, we call those errors.
A hallucination is a mistake in perception.