That is incorrect. From the Wikipedia page you referenced:
> as the appearance of a robot is made more human, some observers' emotional response to the robot becomes increasingly positive and empathetic, until it reaches a point beyond which the response quickly becomes strong revulsion. However, as the robot's appearance continues to become less distinguishable from a human being, the emotional response becomes positive once again and approaches human-to-human empathy levels
> The thing about AI/CV and other interpretation simulators is that there is always a quantization of nature in the end result.
That's just not true in any meaningful sense. Computer vision can process images at a higher resolution than the human eye can distinguish.
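To make the point concrete, here's a small sketch (function names are my own, not from any library) showing that quantization error halves with each added bit of precision, so it can be driven below any fixed perceptual threshold:

```python
import math

def quantize(x, bits):
    """Round x (assumed in [-1, 1]) to the nearest of 2**bits levels."""
    levels = 2 ** bits - 1
    return round((x + 1) / 2 * levels) / levels * 2 - 1

# A sampled sine wave standing in for "nature".
samples = [math.sin(2 * math.pi * t / 1000) for t in range(1000)]

for bits in (4, 8, 16):
    err = max(abs(quantize(s, bits) - s) for s in samples)
    print(f"{bits:2d} bits: max quantization error = {err:.2e}")
```

At 16 bits the worst-case error is already on the order of 1e-5 of full scale, far below anything a human observer could resolve; "quantized" does not imply "deficient relative to human perception".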
> So, "construct an internal 3D model of [something in the natural world]" will always be deficient, and any conclusions derived from these models will always have inherent errors
Humans do this too (hence optical illusions). There's no reason to think that machine models can't surpass human models (and in some domains they already do).
All models are wrong, but some are useful.