Look up the term "philosophical zombie".
In a nutshell, a non-conscious (zombie) being could simulate a conscious one so well that an outside observer couldn't tell the difference. If this is possible, the corollary is that you can't really know whether other people are conscious; you can only know that you are.
For all intents and purposes, I might be the only conscious being in the universe, and I can't prove otherwise.
That said, I don't think those counterarguments really invalidate the philosophical zombie thought experiment. Suppose it is not possible to simulate a conscious being with 100% accuracy. Does the difference really matter? Does a living organism even need consciousness as an evolutionary advantage?
Isn't it reasonable to assume that all human beings are conscious simply because they pass the Turing test, even if some of them actually aren't?
In general, behaviorism wasn't a very productive theory for humans or animals either.
It would only be unfounded if the robot were programmed to merely appear self-aware without actually being so (it would need to occasionally act in a non-self-aware way, like a Manchurian candidate). But as you keep increasing scrutiny, it converges on being self-aware, because the best way to appear self-aware is to be self-aware.
It's not clear to me what the intrinsic goals of a robot would be if it were self-aware in the first place. In living things, those goals are to grow and reproduce.