So, with AI, does anyone really care whether it will develop qualities that make it seem like an emergent consciousness? Why would we treat digital consciousness any better than we treat organic consciousness? What is the point of pontificating about whether the type of thinking an AI does crosses an arbitrary threshold when that threshold only exists as a tool for creating useful outgroups?
However sophisticated our thinking is, it exists on a scale, and we sit at an arbitrary spot on that scale. We treat thinking that occurs further down the scale as functionally irrelevant not because of any real distinction but because doing so has high utility for our species.
So, the question of how we will treat a "truly conscious and sentient" AI has already been answered. Look at how we treat pigs. Good luck out there, HAL.