That’s misleading: Claude almost certainly answers that way because of post-training (reinforcement learning) that deliberately discourages models from claiming to be conscious, not because of any inability to imitate a human.
That’s not a valid reason to say they fail the Turing test. By most conventional standards, current models can clearly pass it; see, e.g., https://arxiv.org/abs/2503.23674