I'm not sure how you consider this an anthropomorphic fallacy. The comparison to a human exists only because people are prepared to stipulate that humans can reason; it does not assume that AI behaviour is like a human's. It simply shows the same test applied to a human.
Your statement that the AI "knows the rules" would itself be considered anthropomorphising by many. I take it to mean it 'knows' in the same sense that an electron 'wants' to be at a lower energy level.
That said, humans who have written entire books on chess have been known to play illegal moves. That should count as a counterexample disproving your explanation of why humans fail at such tasks.