This entire lecture series on Human Behavioral Biology is worth watching from the beginning, but I've linked to a moment where Sapolsky describes tit-for-tat strategies arising in animals. First example: Vampire Bats.
https://www.youtube.com/watch?v=Y0Oa4Lp5fLE&feature=youtu.be...
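Tit-for-tat is simple enough to sketch. Here's a minimal iterated prisoner's dilemma; the payoff numbers, round count, and the `always_defect` opponent are illustrative assumptions on my part, not something from the lecture:

```python
# Minimal tit-for-tat sketch in an iterated prisoner's dilemma.
# Payoff values are the conventional (3, 0, 5, 1) choices, used here
# purely for illustration. C = cooperate, D = defect.

PAYOFF = {  # (my_move, their_move) -> my score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Unconditional defector, for contrast."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated match and return (score_a, score_b)."""
    hist_a, hist_b = [], []  # each side's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b
```

Against an unconditional defector, tit-for-tat loses only the first round and then retaliates; against itself, both sides cooperate every round and out-score a pair of defectors. That asymmetry is the mechanism behind the animal examples in the lecture.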
Evolution defaults to aggression, because aggression is how it squeezes out fitness. Cooperative behavior is continually at odds with that, and only seems to survive by moving up a level every so often: evolution starts treating the cooperating group as one giant agent, and the cycle of competition begins again at that higher level. Similarly, we humans still have countries and borders; we are only cooperating one level up. Cooperation is still merely being used as a survival tool, rather than as an end in itself.
I.e., two people working together are working against another two people; if those pairs somehow manage to combine, the resulting collective works against another collective; multiple collectives may combine and then work against other collectives, and so on. Such developments may potentially be worse than individuals simply fighting each other.
Similar to the idea that in a first-contact situation there may be an advantage in shooting first, which often also implies there will be only one iteration. I think shooting first is the default, and it needs to be actively fought against.
Cooperation is not evolution's default or preferred state, even though it's more efficient. Getting there takes a lot of suffering and bloodshed. A few thousand years of AI-caused suffering before it figures out that cooperating more than one level up is useful (if it ever does; humans have failed so far) is not really what I have in mind when I talk about cooperative ethics. Cooperative ethics should be fundamental, not derived from short-term ROI computed in the moment.
But the best part about evolution is that we don't need to replicate blind mutation and strict fitness functions; we can use its proven-to-work strategies as our springboard. And the best part about AI is that we have no ethical issues simulating millions of evolutionary iterations of "bloodshed" until we arrive at an AI that is acceptable to our ethics.
If you don't want ethical issues, don't create something that needs a code of ethics. We haven't even figured out how to properly define "acceptable to our ethics" (aka laws and other social structures) for humans.
also, you may enjoy "27" https://www.youtube.com/watch?v=dLRLYPiaAoA
Are we sure this is the case? Once we start attempting to create an AI that follows our modern ethics, we have to start asking questions about AI personhood. And I for one feel there are deep ethical questions regarding forced iteration/simulation, let alone the violent kind.
I guess... Maybe they do, if the story is told vividly enough.