> To be clear, I am not arguing that it would be impossible to show a theory of mind in a system that can only interact through text
I think you are, because
> a model with greater capabilities than responding to prompts
interacts in other ways than text.
Even then, I don't see what's so special about language that it needs to be separated from other ways of interaction. If language is not enough to derive empirical answers, why should physical movements or radio emissions be?
Even if you don't assume that it's necessarily impossible to get the answers empirically for a text-based model, you must keep in mind that the possibility remains open. Perhaps we will never find out whether language models have a theory of mind.
However, judging by the discussions around the topic, very few people highlight this unknowability. If I have to choose between "yes" and "no" while the reality is "maybe", I'd choose "yes" purely out of caution.