I'm guessing that you can't imagine what a superintelligence would actually be like, so you imagine the smartest thing you can think of, a famous physicist, and then imagine they are evil or amoral. You're thinking on the wrong order of magnitude.
Maybe, if you're willing, you could try steelmanning the argument that a superintelligence would basically have super powers. What would your steelman look like?
Another example: I can drive a car, but I can't drive two cars at once (even remotely). What makes it probable that an AI could control thousands of, say, robots at once?
Again, I'm not saying an AI catastrophe isn't possible, but I can think of ten other catastrophes that are much more likely.
1. Can we invent AI?
Ok, so to be clear, my belief is that not only can we do it, it is actually inevitable that we will do it.
Fact 1: We definitely have people all over the world working on this tech, and using a variety of approaches to attack the problem. They are very talented and well-funded engineers. If the problem is solvable, it will be solved by someone at some point in the coming years (decades, centuries even, whatever).
Fact 2: Unlike other Very Hard Problems(TM), human level intelligence embedded in a physical substrate is not only theoretically possible, but we have billions of working, self-replicating prototypes.
Like if we were saying "Faster Than Light travel could potentially destroy life on earth" then it would be perfectly reasonable to say: "Look, even if that's right, it seems like such a thing isn't even in the realm of possibility, and if it turns out to actually be theoretically possible, then we're a long way from it being a viable technology. I think we're safe on this one."
But that's not it at all. We know human level intelligence is possible, because we have a working example that we can study. We even know enough about the working example to know that there are a bunch of things about it that we could immediately improve upon, given the right foundation.
Conclusion: With a lot of resources working on a problem that is known for a fact to have a solution, it is inevitable that we will eventually stumble upon that solution, and end up with at least human-level AI.
##
2. Can a human level AI create even greater intelligence?
If it is the case that humans can build human level intelligence (which I have argued they can and will), then it is also pretty self evidently the case that humans can improve on that intelligence.
To start, if we have the tech to build a smart machine, say one that has an IQ of 115, that machine is likely to be almost exactly as intelligent as the other machines from the same line. That's different from humans, with their wide variability. So without actually making a vastly superior model--just one that's marginally better than the human average--the consistency alone will make the population of AIs more intelligent on average than meat humans.
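The consistency point can be made numerically. A toy simulation (the 115 figure is from the paragraph above; the human distribution uses the conventional mean-100, SD-15 scale; everything else is invented for illustration):

```python
import random

random.seed(0)

# Humans vary widely around a mean IQ of 100 (SD 15, the conventional scale).
humans = [random.gauss(100, 15) for _ in range(100_000)]

# Machines from the same line are nearly identical: every unit scores ~115.
machines = [115.0] * 100_000

human_mean = sum(humans) / len(humans)
machine_mean = sum(machines) / len(machines)

# The machine population needs no outlier geniuses to beat the human
# average; consistency at a modestly-above-average level is enough.
print(f"human mean:   {human_mean:.1f}")
print(f"machine mean: {machine_mean:.1f}")
```

No machine in that population is smarter than the smartest humans, yet the population average is well above the human one, which is all the argument needs.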
On top of that, even naive adjustments off the top of my head could be made. For example, given we have the technology to build a machine with mental characteristics identical to a human's, we would also have the requisite knowledge to, say, expand that machine's working memory by 1%. Or to build the machine with swappable sensory organs so it can directly absorb many different types of data.
Further, these machines, despite having the same cognitive capacity as we do, won't have the same physical limitations. For example, they won't have to eat to get their energy, and they likely won't have to sleep.
I find a lot of other things likely, for example that they will actually think faster or be able to install arbitrary parallel processors to increase their processing power, but that's conjecture, so let me not make a strong claim in that direction.
What I am comfortable making a strong claim about is this: if we have these higher-than-average-human-intelligence machines, which are slightly better than us in a couple of specific ways, then we will also have the ability to emulate the machine's hardware in software. Maybe that won't be true the moment the machine is born, but it will be soon after.
In that case, that's where the foom happens. Imagine you, as you are, could spin up an unlimited number of copies of your own brain to work on all the various things you do. You have a software project and instead of working on it alone or trying to coordinate with other developers, you spin up 10 versions of yourself, and all of you hack away at the project with some of the best coordination ever seen (since you have identical brains and communication styles, and preferences, etc). Hell, spin up one to take care of bills and stuff while you're at it, so the others can focus on the task at hand.
You're not actually any smarter, but with 10 of you, you're more productive than any single human being could ever be.
Now imagine that you are working on the problem of improving this intelligent machine, AND you have this brain replication ability. Now you have an unlimited number of yourself to tackle this problem without having the messy physical limitations of things like eating.
Soon "Team You" will have done all the research there is to do, and you'll start making headway on the problem of incremental improvements to your own brain. Since you're emulated in software, you can patch these improvements in as you develop them, including into sandboxed testing versions of yourself.
Now your ideally-coordinated team of smart machines just got smarter across the board. You might notice you'd like to have thousands or millions of you working in unison, but your communication channels are not good enough yet. So you spin up a new team of smart machines to work on the problem of how to increase attention capacity and communication bandwidth. Soon that team is shipping patches, each one incrementally improving your whole team's ability to communicate, and thereby increasing the total number of team members you can have.
Soon you're millions strong, you have teams working on brain improvement, coordination improvement, science teams of all disciplines, a resource acquisition team playing the stock market, each moment shipping new improvements to your brain and amassing resources and power.
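The loop described above--patches to coordination raise the cap on team size, a bigger team ships more patches--compounds on itself. Here's a toy model of that dynamic; every number and rate in it is invented purely to show the shape of the curve, not to predict anything:

```python
# Toy model of the self-improvement loop: each cycle, a quarter of the
# team works on coordination, their patches raise the communication
# bandwidth, and the bandwidth caps how many copies can work in unison.

def simulate(cycles: int, initial_team: int = 10) -> list:
    team_size = initial_team
    bandwidth_cap = initial_team  # max copies coordination can support
    history = [team_size]
    for _ in range(cycles):
        coordination_workers = max(1, team_size // 4)
        # Assumed rate: each coordination worker's patches raise the
        # cap by 5% per cycle. The exact figure doesn't matter; any
        # positive rate tied to team size produces compounding growth.
        bandwidth_cap = int(bandwidth_cap * (1 + 0.05 * coordination_workers))
        # Spin up as many new copies as the improved cap allows.
        team_size = min(bandwidth_cap, team_size * 2)
        history.append(team_size)
    return history

growth = simulate(20)
print(growth[0], "->", growth[-1])
```

The point of the sketch is the feedback structure, not the constants: because the thing being improved (coordination) is also the thing doing the improving, growth accelerates instead of leveling off.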
Even if there is a fundamental hard limit on intelligence (although I strongly doubt the hard cap is anything currently fathomable), you top out significantly above an average human, and you have a virtually unlimited hive cluster of those smarter-than-human brains.
Of course, one or many of your teams of hive brains is working on excellent robotics technology that you can use to manipulate physical objects.
You hire contractors over the internet to build the initial versions, and then use those initial versions to build and maintain any physical infrastructure.
All of this can be done invisibly using only the internet. By the time anyone notices anything you're a hive mind with a robot army--if you want to be.
We want that mind to be friendly.
##
3. Could an AI control, for example, thousands of robots at once?
I think the answer is obviously yes.
a. Think of a computer game, let's say a Real Time Strategy game. There you have a computer controlling hundreds or thousands of independent agents. And that's just on a dinky little desktop with current tech.
b. (a) is actually the worst case scenario if you're the AI. Why not just write the software for the robots you control to be mostly autonomous, and give yourself an API through which to issue commands that the robots can mostly execute on their own?
c. Further, why even do (b), when you have the ability to replicate your brain? Just put one of your brains into every robot, and use the same advanced coordination mechanisms you use for your research to coordinate your army of highly intelligent robots.
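Point (b) is the easiest to make concrete. A minimal sketch, with hypothetical names throughout: each robot runs its own control loop and only accepts coarse, high-level commands, so the controlling AI touches one small API instead of micromanaging thousands of actuators.

```python
# Sketch of (b): mostly-autonomous robots behind a single command API.
# All class and method names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Robot:
    ident: int
    task: str = "idle"

    def handle(self, command: str) -> None:
        # The robot decomposes the coarse command into low-level motion
        # and planning on its own; the controller never sees actuators.
        self.task = command

class Controller:
    """The one narrow interface through which the AI commands its fleet."""

    def __init__(self, n_robots: int):
        self.fleet = [Robot(i) for i in range(n_robots)]

    def broadcast(self, command: str) -> int:
        # Issuing one command to the whole fleet is a single cheap pass.
        for robot in self.fleet:
            robot.handle(command)
        return len(self.fleet)

ctrl = Controller(n_robots=10_000)
acked = ctrl.broadcast("survey sector 7")
print(acked, "robots tasked")
```

Ten thousand agents accept a command in one pass on a laptop; the hard part of controlling a robot army is the robotics, not the fan-out.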
##
So, with this narrative of how it could work, do you see how dangerous an AI could be? One that isn't stably goal-aligned with basic things like life on earth?
Are you convinced, or do you have any other objections or clarifications? I really want to have this discussion on record, ideally with a crisp conclusion. I think it's important.