It's important to think on a longer timescale when dealing with AI.
My personal feeling is that nobody has any idea what they're talking about. And when I say nobody, I mean NOBODY including Ng, Musk, et al.
The problem is with those who read others and just assume they're "experts" or their opinions have value. In some areas they do, sure. In this area they most likely don't.
That, or those of us who are skeptical are uninformed.
Or it's somewhere in the middle.
In any case, I'll stick to my skepticism for one main reason: general intelligence is supposedly modelled after human intelligence. And human intelligence is something we're JUST BEGINNING to scratch the surface of, while at the same time we really have ZERO idea how deep the rabbit hole goes.
And as any competent engineer here should know, when trying to emulate a natural model, you first need to understand that model. Until we do, we will not create a general AI. Like any other computer program (WHICH IT WILL BE, in this conception), it needs to be programmed. We need to know what to program before we can program it! Computer programs don't emerge spontaneously.
Is it a given that an AI would share them?
I think we're not really talking about AI at all - we're talking about our current economic and political systems, which appear to have many of the properties we're imputing to evil AIs, but for some reason are far less criticised and debated than hypothetical machine monsters.
AGI sort of means that, no matter the endeavor, you will be worse at it than the AI. There is no way to know how we will be treated. Maybe we are useful in some way. Maybe they'll kill us off but simulate our lives so we don't 'really' die. Maybe that's their morality.
Maybe they ignore us. I don't care about ants. I walk around and step on them, but not intentionally. On the other hand, if there are ants in my house, I eradicate them.
Who knows.