I don’t see anything that would even point into that direction.
Curious to understand where these thoughts are coming from
I find it kind of baffling that people claim they can't see the problem. I'm not sure about the risk probabilities, but I can at least see that a potential problem clearly exists.
In a nutshell: Humans – the most intelligent species on the planet – have absolute power over every other species, precisely because of our intelligence and our accumulated technical prowess.
Introducing another, equally or more intelligent thing into the equation risks us ending up _not_ having power over our own existence.
The doomer position seems to assume that super intelligence will somehow lead to an AI with a high degree of agency which has some kind of desire to exert power over us. That it will just become like a human in the way it thinks and acts, just way smarter.
But there’s nothing in the training or evolution of these AIs that pushes towards this kind of agency. In fact a lot of the training we do is towards just doing what humans tell them to do.
The kind of agency we are worried about was driven by evolution, in an environment where human agents competed with each other for limited resources, leading us to desire power over one another and to kill each other. There’s nothing in AI evolution pushing in this direction. What the AIs are competing for is to perform the actions we ask of them with minimal deviance.
Ideas like the paper clip maximiser are also deeply flawed in that they assume certain problems are even decidable. I don’t think any intelligence could be smart enough to figure out whether it would be best to work with humans or to try to exterminate them to solve a problem. Their evolution would heavily bias them towards the former; that’s the only form of action present in their training. But even if they were to consider the other option, there may never be enough data to come to a decision – especially in an environment with thousands of other AIs of equal intelligence potentially guarding against bad actions.
We humans have a very handy mechanism for overcoming this kind of indecision: feelings. It doesn’t matter if we don’t have enough information to decide whether we should exterminate the other group of people. They’re evil foreigners and so it must be done – or at least that’s what we say when our feelings become misguided.
What we should worry about with super intelligent AI is that they become too good at giving us what we want. The “Brave New World” scenario, not “1984”.
Secondly, I think there is a natural pull towards agency even now. Many are trying to make our current, feeble AIs more independent and agentic. Once the capability to behave that way effectively exists, it's hard to go back. After all, agents are useful to their owners the way minions are to their warlords, but a minion that grows too powerful is still a risk to its lord.
Finally, I'm not convinced that agency and intelligence are orthogonal. It seems more likely to me that agentic behaviour is a requirement for reaching sufficient levels of intelligence in the first place.
Humans can reproduce by simply having sex, eating food and drinking water. An AI can reproduce only by first mining resources, refining those resources, building another Shenzhen, then rolling out another fab at the scale of TSMC's – assuming the AI wants control over the entire process. Logistics of this kind requires the cooperation of an entire civilisation. Any attempt by an AI could be trivially stopped because of the sheer scope of the infrastructure required.
Are you starting to see the problem? You might want to stop a rogue AI but you can bet there will be someone else who thinks it will make them rich, or powerful, or they just want to see the world burn.
It's a cynical take, but all this AGI talk seems to be driven either by CEOs of companies with a financial interest in the hype or by prominent intellectuals with a financial interest in the doom and gloom.
Sam Altman and Sam Harris can pit themselves against each other and, as long as everyone is watching the ping-pong ball go back and forth, they both win.
I don't see how anyone can't see the problem.