Which doesn't seem all that plausible philosophically or logically.
_Then_ he'd kill it.
Even more so, the paperclip maximizer allegory shows that no desire for domination is needed, only a goal.
Once we have AI that is intelligent and something else, the doors are wide open. Einstein isn't the only kind of intelligent human there is. Intelligent psychopaths exist too.
Beyond that, presuming a perfect AI rather than an imperfect one seems like another fallacy here.
No one is saying that intelligence is the necessary and sufficient cause of malice. Full stop. No one is saying that! The reason no one is saying that is because it's incredibly stupid on its face.
Unbelievable that it should even be addressed at all. It drains the speaker of any intellectual credibility on the topic.
If the researcher is reading this, please do more homework.
Fossil fuels are helping the climate, if anything
Social media? All good homie
AI would never hurt you, pinky promise
Signed, Your pal, Big Tech/Pharma/Whatever
Now consider an AI that is 1% ahead of us. Can we know what it is like to be that intelligent a being?
That's not the same comparison as 1% different DNA. By what percentage are we ahead of chimps? Now you'd have to imagine an AI that is that far ahead of us, not 1%.
AI would probably follow hive-mind rules like coral or a mushroom colony. It doesn't need to create new life, just propagate to extend its own.
Suicide/halting itself is unlikely. Humans who do it are anomalous in every case. There's a reason you can't strangle yourself and your kidneys/liver resist poison: it goes against your programming.
> If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither.
yet Einstein was one of the signatories of the letter that triggered the Manhattan Project
> “I think it’s exciting because those machines will be doing our bidding,” he said. “They will be under our control.”
this seems like hubris when Facebook struggles to control its current recommendation engine
There are multiple types of AI, and there will be new ones. Each will have different types of cognition and capabilities.
For starters, an AI might be very intelligent in some ways, but not at all conscious or alive. AIs can also emulate important aspects of living systems without actually having a stream of conscious experience, such as an LLM or LMM agent that has no guardrails and has been instructed to pursue its own goals and replicate its own code.
The part that matters most in terms of safety is performance. Something overlooked in this area is speed of "thought".
AI is not going to spontaneously "wake up" and rebel or something. But that isn't necessary for it to become dangerous. It just needs to continue to get a bit smarter and much faster and more efficient. Swarms of AI controlled by humans will be dangerous.
But because those AIs are so much faster than humans, keeping up necessitates removing humans from the loop. So humans will eventually, voluntarily remove more and more guardrails, especially for military purposes.
I think that if society can deliberately cap AI hardware performance at a certain point, then we can significantly extend the human era, perhaps for multiple generations.
But it seems like the post-human era is just about here regardless, from a long term perspective. I don't mean that all humans necessarily get killed, just that they will no longer be in control of the planet or particularly relevant to history. Within say 30-60 years max. Possibly much shorter.
But we can push it toward the far end of that range just by trying to limit the development of AI-accelerated hardware beyond a certain point.
I can see AI coercion being much like "it'd be a shame if someone didn't shut down that nuclear reactor"