LLMs are not an opportunity to teach humans how to avoid hallucinations: they don't work even the slightest bit like human brains, and we don't even know how to make them stop hallucinating in the first place.
Ahh, but we know that both humans and LLMs 'hallucinate'. The study of the latter potentially provides insight into the former; after all, isn't the core question we're asking in all of these discussions 'what is intelligence?'