It doesn't matter whether the output is correct or not: the process for producing it is identical, and the model has exactly the same amount of knowledge about what it's saying... which is to say "none".
This isn't a case of "it's intelligent, but it gets muddled up sometimes". It's more that it's _always_ muddled up, but it happens to be correct a lot of the time.