A "mind" and being able to "control its parameters" could very well be mutually exclusive. So far, the minds we know are quite intractable, and can only be affected in very crude ways (e.g. things that make you less violent also tend to reduce cognitive skills).
> If you were to suddenly conquer the world keeping the capitalistic system in place would make the most sense.
There is no single "capitalistic system", but many systems, some rather different from others. Besides, why does it matter that keeping the existing order "makes more sense"? Who says that an AI will be especially rational? It is quite possible that being intelligent and being completely rational are mutually exclusive. And even if it is rational, I'm not sure we can fully predict the interests of intelligent beings so different from ourselves. Much of what makes sense to us is a result of our being accustomed to living in a society, with all its consequences. The society the AI lives amongst might be very different from ours, and so will its assessment of what makes sense. I spent some time writing about people living in poor, high-crime neighborhoods. Their society was very different from the one I grew up in, and as a result their behavior was different even when it was rational. For example, turning to violence was often a very rational, sensible choice on their part.
> Sadly, I imagine these high-profile attacks on AI are causing a chilling effect from a funding and research point of view.
I think this is all a PR stunt. We're many, many decades, if not a century or more, away from what people imagine when they say AI. Unfortunately, software can be very dangerous to society even without being considered AI[1], and few people talk about that.
[1]: http://www.slate.com/articles/technology/bitwise/2015/01/bla...