He's right, but do people really misunderstand this? I think it's pretty clear that the issue is one of over-creativity.
The hallucination problem is IMHO at heart two things that the fine article itself doesn't touch on:
1. The training sets contain few examples of people expressing uncertainty, because the social convention on the internet is that if you don't know the answer, you don't post. Children also lie like crazy for the same reason: they ask simple questions, so they rarely see examples of their parents expressing uncertainty or refusing to answer, and it then has to be explicitly trained out of them. Arguably that training often fails, and lots of adults "hallucinate" a lot more than anyone is comfortable acknowledging.
The evidence for this is that models do seem to know their own level of certainty pretty well, which is why simple tricks like saying "don't make things up" can actually work. Some interesting interpretability work shows this too, as the article alludes to.
2. We train one-size-fits-all models, but use cases vary a lot in how much "creativity" is allowed. If you're a customer help desk worker, the creativity allowed is practically zero; the ideal worker, from an executive's perspective, is basically a search engine plus a human voice over an interactive flowchart. In fact, that's often all they are. But then we use the same models for creative writing, research, coding, summarization, and other tasks that benefit from a lot of creative choices. That makes it very hard to teach the model how much leeway it has to be over-confident. For instance, during coding a long reply that contains a few hallucinated utility methods is far more useful than a response of "I am not 100% certain I can complete that request correctly", but if you're asking questions of the form "does this product I use have feature X?", a hallucination could be terrible.
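Today the closest we get to that spectrum is per-deployment knobs rather than separate models. A rough sketch of the idea, where the use-case names, temperatures, and prompt strings are all invented for illustration (no real API is being called):

```python
# Illustrative only: dialing one model up or down the creativity/confidence
# spectrum per use case via sampling temperature and a system prompt.
# The profile names and values below are made up for this sketch.

PROFILES = {
    "support_desk": {
        "temperature": 0.0,  # stick to the script, no improvisation
        "system": "Answer only from the product docs. If unsure, escalate to a human.",
    },
    "coding": {
        "temperature": 0.3,  # some leeway; a guessed utility method is recoverable
        "system": "Write working code; flag any API you are not sure exists.",
    },
    "creative_writing": {
        "temperature": 1.0,  # here, confident invention is the whole point
        "system": "Invent freely.",
    },
}

def settings_for(use_case: str) -> dict:
    """Fall back to the most conservative profile for unknown use cases."""
    return PROFILES.get(use_case, PROFILES["support_desk"])
```

The point of the fallback is the asymmetry in the text above: an over-cautious answer is merely unhelpful, while an over-confident one can be terrible, so unknown deployments default to the zero-creativity profile.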
Obviously, the compressive nature of LLMs means they can never eliminate hallucinations entirely, but we're still far from any theoretical limit here.
Techniques like better RAG are practical solutions that work for now, but in the longer run I think we'll see different instruct-tuned models for different positions on the creativity/confidence spectrum. Models already differ quite a bit: I use Claude for writing code but GPT-4o for answering coding-related questions, because I've noticed ChatGPT is much less prone to hallucination than Claude. This may even become part of the enterprise offerings of model companies: consumers get the creative chatbots that'll play D&D with them, enterprises get the disciplined rule followers that can be trusted to answer support tickets.
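For readers unfamiliar with why RAG curbs hallucination: you retrieve relevant text first and force the model to answer from it, shrinking the room for confident invention. A minimal self-contained sketch of the pattern, using naive keyword overlap as a stand-in for the embedding-based retrieval a real system would use (the corpus and prompt wording are invented for the example):

```python
# Minimal sketch of the RAG pattern: retrieve supporting text, then build a
# prompt that confines the model to that text. Keyword-overlap scoring is a
# toy stand-in for real vector search; the corpus strings are invented.

def tokenize(text: str) -> set[str]:
    """Lowercase bag-of-words; real systems use embeddings instead."""
    return set(text.lower().split())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many query words they share, keep the top k."""
    return sorted(
        corpus,
        key=lambda doc: len(tokenize(doc) & tokenize(query)),
        reverse=True,
    )[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context and instruct the model to stay within it."""
    context = "\n".join(retrieve(query, corpus))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

corpus = [
    "The Widget Pro supports CSV export from the reports page.",
    "Billing questions should be directed to the accounts team.",
    "The mobile app does not currently support offline mode.",
]
print(build_prompt("Does the Widget Pro support CSV export?", corpus))
```

Note the instruction also gives the model an explicit licensed way to express uncertainty ("say you don't know"), which connects back to point 1: the training data rarely models that behavior, so the prompt has to invite it.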