Which is why we can create the counterfactual "The Cowboys should have won last night" and it carries implicit meaning.
Current LLMs don't have an external state of the world, which is why folks like LeCun are suggesting model architectures like JEPA. Without an external, correcting state of the world, model prediction errors compound almost surely (to use a technical phrase).
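A toy numerical sketch of that compounding (nothing to do with JEPA itself; the dynamics, noise level, and names here are invented purely for illustration): a model with a small per-step error drifts far from the truth when it rolls forward on its own outputs, but stays close when each prediction is re-anchored to the true state.

```python
import random

def true_step(x):
    """Ground-truth dynamics: the 'external state of the world'."""
    return x + 1.0

def model_step(x):
    """Learned model: same dynamics plus a small per-step error."""
    return x + 1.0 + random.gauss(0, 0.05)

random.seed(0)
x_true, x_open = 0.0, 0.0
for t in range(1, 101):
    # Open loop: the model consumes its own previous prediction,
    # so every step's error is baked into the next step's input.
    x_open = model_step(x_open)

    # Corrected: the model predicts one step ahead from the true state,
    # so the error never accumulates beyond a single step.
    x_corr = model_step(x_true)

    x_true = true_step(x_true)

    if t % 25 == 0:
        print(f"t={t:3d}  open-loop error={abs(x_open - x_true):.3f}  "
              f"one-step error={abs(x_corr - x_true):.3f}")
```

The open-loop error grows with the length of the rollout, while the corrected error stays on the order of a single step's noise, which is the intuition behind wanting an external state to predict against rather than predicting from your own predictions.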