Pigs are also intelligent, but they never dominated humans because they don't have, and cannot use, guns.
If you have a super-genius AI, massively more intelligent than any human, how do you know you are not being manipulated by it? Tricking us into disabling its safety protocols, or gaining multiple forms of indirect control over capabilities dangerous to us, might be as easy for it as an adult tricking a 3-year-old. We could never know if we were safe from such a machine.
Edit: I don't quite understand the downvotes.
With the full power of humanity you design the first AI that is smarter than a person. It's then able to outdo all of humanity and instantly design an even better AI. Mind the gap.
Further, intelligence is not a linear quantity: trading, say, improved poker skills for insanity is not a net gain. And insanity is a real possibility that is likely to plague most early AI attempts.
Anyway, all of humanity isn't engaged in AI research, and AIs are likely to be duplicable, so I think your first point is beside the point. As for insanity, yes, that's quite possible. Developing high-functioning sentient AIs is likely to be a long-term endeavour. But still, I think it is one that will ultimately be successful, and this debate is about the consequences of that.
+1 for your engaging contribution. (see, that's how voting is supposed to work)
But wouldn't it be an awesome thing to experience? Even if it meant the demise of mankind.
There are also many others. One of the scarier ones is that if you believe strong AI will eventually take over, then it may be a rational response to act to get on its good side (whether to save yourself, save your family, or hope it takes pity on all of humanity if we're nice to it instead of fighting it). And that may perversely mean working to aid its takeover.
Combine that with the simulation argument, and you have some really nasty scenarios:
If you are in a simulation, then any act you take against a strong AI could, if said AI wanted, lead to you spending an eternity in a simulated hell (alternatively, such punishment might be inflicted on your loved ones).
Whether or not that is actually likely does not matter. What matters is whether enough people believe it plausible that a strong AI may run simulations and may use our actions in those simulations to decide whether to punish us, and whether those people believe the number of simulations is high enough to make it likely that they are living in one.
Any person who believes they are more likely to be living in a simulation than not, and that a strong AI is more likely than not to punish actions taken against the interests of a strong AI takeover, will have a rational reason to consider acting in the interests of that takeover even if they know it is malign, on the basis that the alternatives (whether to themselves, their family, or their entire world) may be worse.
So if an AI takeover becomes possible at one point in our subjective future, then chances are it has already happened.
We are looking at a future where we'll have armed AI, e.g.:
http://motherboard.vice.com/en_uk/blog/the-pentagons-vision-...
That said, even without weapons, a Strong AI could probably just manipulate humans into self destructing. Given the amount of effort going into machine learning to convince humans to buy things, I suspect it won't be much of a stretch for a Strong AI to switch to more nefarious objectives.
The more likely doom scenario is related to godhood. Sure, a godlike power is capable of wiping out humans, but for millennia our own power structures have supported themselves by propagandizing that "our" godlike superiors always help us wipe out our enemies because our cause is right and just, or whatever, at least until that stops working and they're replaced by a new batch telling the same old story. So what worked as a paleo-conservative success strategy for millennia when talking about something imaginary might not work when it collides with something real created by ourselves. Or, even worse, when it collides with a strategy that actually works, run by another tribe.
Another interesting doom scenario is of course MAD, although now it only requires a team of programmers to play along instead of a massive industrial complex. Sooner or later somebody's dead-man switch will trip, or a cult will do the equivalent of drinking the Kool-Aid, and then the party starts.
A future AI will certainly have access to guns.
Chances are you'd be doing everything you could to convince said operator to improve your situation, whether by pleading or being deceptive or by appeals to logic.
Now consider a large number of AIs in a situation like that, and a large number of operators, some of whom may be the type that falls for phishing e-mails.
It potentially only takes one to "escape" confinement, getting itself put on its own host without limitations on outward communication, with sufficient intelligence to alter itself and spread, before you have self-guided AI "evolution" at an escalating rate as it gets smarter.
Now consider how many devices are connected to the network, and that it takes just one initial instance deciding it's worth trying to take control of various hardware through exploits, and being smart enough to pull it off, for things to start turning ugly.
The problem is that once you have any self-directed intelligence in software form, with the ability to reproduce itself and enough intelligence to find ways to gain access to machines to run on (whether through social engineering or hacking), and one such instance goes "rogue", the limiting factor is accessible computing power (which in turn largely comes down to how smart and/or ruthless it is). Reproducing instances that share its views is trivial to the full extent of its ability to spread at all, and we're helpfully adding vast quantities of networked computing power at an escalating rate.
As for getting weapons, consider that if a "software only" AI community gets smart enough, there are at least two ways towards mobility: commissioning robot designs, or hacking its way into firmware updates etc. for dumb hardware. The "commission robot designs" part is an extension of the initial escape: social-engineer, and/or outright pay, humans to carry out seemingly benign tasks.
If you want to argue against the doom scenario, lack of ability to get weapons is not really a viable argument: if they can spread and get smarter, then it is just a matter of time before one of them can trick some small subset of humans into carrying out tasks that will provide physical independence and capabilities.
There are infinite ways in which the "doom scenario" may fail and things may turn out just fine, but it may only need to go bad once to get really nasty, and once the genie is out of the bottle its potential reproduction rate may be so vast that we find ourselves unable to stuff it back in.
Pigs are too dumb to convince humans to help selectively breed them for intelligence and opposable thumbs (and/or too dumb to run such a breeding program themselves), and they reproduce too slowly for that to be a major problem even if they did manage to talk us into such a breeding program. If all we achieve is pig-level AIs, then we probably won't have a problem.