More seriously, that it can actually understand and wield abstract concepts. Can it accurately and repeatedly understand that "the foot attaches to the shin bone, which attaches to the thigh bone, which attaches to the hip bone...", that these joints have certain degrees of freedom but not others, and that one foot goes in front of the other, and can it easily and reliably distinguish a normal walk from a silly walk...
Yes, these are different levels of abstraction, especially the last one. They need to be very accurate even to reach a young child's level of understanding, and this is just one branch of a branch of a branch in the entire fractal pattern of understanding necessary for a more general intelligence.
Once that is in place, and it can show evidence that it can model its own mind, then it might be able to model someone else's mind.
While the statistical 'abstraction' and remixing seen in these "AI" systems is sometimes impressive and useful, it is frequently revealed that there is no conceptual understanding whatsoever beneath it. It is merely a statistical remixer: it abstracts patterns of words that occur near other words, remixes them, and filters for grammatical output.
It hasn't got a theory of anything, never mind a theory of mind.
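To make the "statistical remixer" idea concrete, here is a deliberately crude sketch: a toy bigram (Markov-chain) text generator. This is an assumption-laden caricature, vastly simpler than a modern language model, but it illustrates the mechanism the argument gestures at: recording which words follow which, then sampling from those counts with no concept of bones, joints, or walks behind the words.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def remix(model, start, length):
    """Generate text by repeatedly sampling a word that has followed
    the current word in the training text -- pure surface statistics."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# A tiny corpus echoing the example above (purely illustrative).
corpus = ("the foot attaches to the shin bone "
          "the shin bone attaches to the thigh bone "
          "the thigh bone attaches to the hip bone")
model = build_bigram_model(corpus)
print(remix(model, "the", 8))
```

Every word the generator emits is locally plausible, yet nothing in the model distinguishes an anatomically coherent chain from a scrambled one: only co-occurrence statistics constrain the output.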