> Something that is able to simulate having a theory of mind sufficiently well does actually have a theory of mind.
That presupposes our existing tools for detecting the presence of ToM are 100% accurate. Might it be that they are imprecise, and only now have their critical flaws been exposed?
But if our understanding of ToM is so flawed in practice, what does it say about all the confident proclamations that AIs "aren't real" because they don't have it?
Your question aligns with the argument I'm trying to make, which is: if our understanding of ToM turns out to be wrong, should we be making proclamations, whether for or against, about the realness of our current AI implementations?