While knowing for certain might be impossible, it seems like the model could report a confidence level and only provide answers that exceed some threshold. It'd be a bit like asking a human, "are you sure about that?"
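As a rough sketch of what that thresholding might look like: many model APIs expose per-token log probabilities, and one crude confidence proxy is the geometric mean of the token probabilities. Everything here is illustrative, not any particular API; the function name, the threshold value, and the abstention message are all made up for the example.

```python
import math

def answer_or_abstain(answer: str, token_logprobs: list[float],
                      threshold: float = 0.8) -> str:
    """Return the answer only if a crude confidence score clears a threshold.

    Confidence is the geometric mean of per-token probabilities,
    computed as exp(mean of token log probabilities).
    """
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    confidence = math.exp(avg_logprob)
    if confidence >= threshold:
        return answer
    # Below the threshold, abstain instead of guessing.
    return "I'm not sure."

# A completion with high token probabilities passes the threshold,
# while an uncertain one is withheld.
print(answer_or_abstain("Paris", [-0.05, -0.02]))
print(answer_or_abstain("Lyon?", [-1.2, -0.9]))
```

Of course, token-level probabilities measure fluency more than factual correctness, which is part of why this is harder than it sounds.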
And in practice, I really don't think it is that different. We humans effectively make things up all the time: sometimes we are well aware of our educated guesses, and sometimes we are less aware.
It isn't realistic to expect an artificial intelligence to be vastly better than human intelligence in this regard.