If we’re making them smart enough to decide whether to shoot on their own, why wouldn’t we make them smart enough to refuse an unethical order (e.g. “that area is roughly 80% civilians, I’m not firing a rocket at the target”)?
Obviously they won’t, because the problem here is that the humans don’t want the machine to question them. And since it’s only a matter of time before the bad guys have them anyway, I’m inclined to say it’s better if the good guys (at least from my perspective) have them first.