Solving it doesn't mean eliminating it completely. I think GPT has solved it to a sufficient extent that it's reliable; you can't easily get it to hallucinate.
It depends on how well the topic is covered in the training data. I find that models make stuff up more where there isn't enough coverage (so more often with internal $work stuff).