Does this mythological AGI live in datacenters that we can shut down, or in some 5th dimension out of reach to us?
What are its plans? Probably its goal (or set of goals) is something strange that the creator of the AI did not intend. The creator of course is probably a team or an organization. Note that it is impossible to create a powerful AI without (intentionally or unintentionally) giving the AI a goal or an optimization target.
Or perhaps its goal is what its creator intended it to be, e.g., to make as much money as possible in the stock market. The creator would of course have understood that if enough optimization power is applied to the goal of making as much money as possible, bad things will happen, but maybe the creator was a little reckless (out of desperation of some sort? greed?) and decided to rely on the hope that the AI would remain controllable by virtue of people retaining superiority over the AI in one or more essential domains. That hope was dashed: the AI turned out much more capable than the creator expected, and people did not stop it before it did too much damage.
Many AI models could slowly erode society, worsening people's behavior so indirectly that no one takes notice.
Or maybe this is just me thinking about how, when I get older, I'll tell people that the old times were better. :-)
But it has figured out how to infect every machine in the datacenter and spread to other datacenters.
The only way to shut it down would be to shut down every datacenter in the entire world.
Given the dire consequences of that, would people actually do that?
"Can we just turn it off?"
"It has thought of that. It will not give you a sign that makes you want to turn it off before it is too late to do that."
I'm just assuming that since networked computers are everywhere, models seem to be proliferating, and natural general intelligence seems to function as a resilient, distributed system, it's not unthinkable to me that things eventually trend in that direction.
We have no idea what intelligence really is ("resilient, distributed system" is not a universally accepted definition, and it's unclear it means anything outside of analytic definitions, if those have any use at all), how it works, or how a theoretical system could function as "generally intelligent" in concrete terms. Intelligence as we know it is a property of living organisms; it is a phenomenon. What "artificial intelligence" may even mean, and whether it is possible to create intelligence artificially, is mere speculation. Meanwhile we face real existential threats that are not speculation: we do not even know whether the ways we are affecting the planet will leave it habitable for humans.
AGI is a boogeyman used by those with a competitive advantage to maintain that advantage artificially by casting competitor projects as dangerous and reckless.
AGI is a potential existential threat to the human species on the scale of global thermonuclear war and climate change.