There's an often-unaddressed risk in robotics: we have no theory of mind for machines. We've evolved to intuit what other humans are thinking (from words, body language, and other context), which helps us predict behavior and mitigate risk. Unfortunately we can't do the same with robuts, so there's more latent risk (similar to dealing with "crazy" humans, where our mental models fail to predict behavior).
IMO this means we won't be comfortable with robuts in safety-critical applications until they are well, well beyond human capabilities. This is where I think the crowd aiming for "human-level performance" gets it wrong: society won't trust robuts until they are much, much better than humans.