The problem the Turing test was meant to solve is that we had, and still have, no means of recognizing a conscious mechanism. We lack a theory of consciousness that could yield a better test than "It could fool me", so the Turing test simply accepts that as the standard.
In other words, the mechanism may be what consciousness is about, but we can't say anything useful about the mechanism as it relates to consciousness.