> It tends to land in the right ballpark and capture the tone of a thing.
One weird trick is to use this text to search the web. Because it is already in the ballpark, it makes a very good query. The model might bullshit "The height of Everest is 8,230 m"; you take that string, search it, find "height of Everest is 8,849 m" -> correct the original response.
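The loop above can be sketched in a few lines. Everything here is a stub for illustration: `draft_answer`, `web_search`, and `revise` are hypothetical placeholders standing in for a model call, a real search API, and a reconciliation step.

```python
import re

def draft_answer(question):
    # Stand-in for the model's first pass: right ballpark, wrong specifics.
    return "The height of Everest is 8,230 m"

def web_search(query):
    # Stand-in for a real search call; the ballpark-accurate draft
    # surfaces snippets about the same fact.
    return ["The height of Everest is 8,849 m above sea level."]

def revise(draft, evidence):
    # Toy reconciliation: if a snippet states a different figure for the
    # same "... is N m" claim, substitute the searched figure.
    claim = re.search(r"is ([\d,]+) m", draft)
    for snippet in evidence:
        found = re.search(r"is ([\d,]+) m", snippet)
        if claim and found and found.group(1) != claim.group(1):
            return draft.replace(claim.group(1), found.group(1))
    return draft

def answer(question):
    draft = draft_answer(question)
    evidence = web_search(draft)  # the draft itself is the query
    return revise(draft, evidence)
```

In a real system the `revise` step would itself be a model call ("here is your draft and here is what search returned, fix any contradictions"), which is where the next question about trust comes in.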
Now the problem is, how do you decide what information to trust in search?
AI needs a big push to index and consistency-check all facts. Let the model write a billion wiki entries and knowledge-base concepts, then check each one for support, consistency, and competing explanations. Everything should be in there; we don't decide what is true. Then a language model can say when a topic is controversial, or when it doesn't know something.
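A minimal sketch of what such a consistency check might look like, assuming claims have already been extracted and normalized (the entries, thresholds, and labels below are made up for illustration):

```python
from collections import Counter

def assess(claims):
    # Count how often each normalized claim appears across the
    # model-written entries, then label the topic by level of agreement.
    counts = Counter(claims)
    top_claim, n = counts.most_common(1)[0]
    support = n / len(claims)
    if support >= 0.9:
        return ("supported", top_claim)
    elif support >= 0.5:
        return ("contested", top_claim)   # competing explanations exist
    return ("unknown", None)              # no dominant answer at all

# Hypothetical extracted claims about the same topic:
claims = [
    "Everest is 8,849 m tall.",
    "Everest is 8,849 m tall.",
    "Everest is 8,848 m tall.",
]
```

Here `assess(claims)` would return a "contested" label, which is exactly the signal a model needs to say "sources disagree" instead of picking one figure with false confidence.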
Of course the goal is to find the truth, but we know that is going to be tricky. In the meantime we can have models that know when they don't know, or when the information is uncertain.