Given that this article seems to be advocating that anything that looks like too-advanced AI should be preemptively destroyed, and uses the word "proliferation" to refer to people while drawing comparisons to nuclear policy, I don't think the agenda is hidden enough to need that sort of warning.
Both sides are too much into LARPing their preferred science-fiction stories. :-)
His thesis involves at least two ideas: (1) projects that could exponentially increase our AI capability are just around the corner (they will happen by the end of this year, or some time next year at the latest), and (2) it is possible for state actors to deter those projects with sabotage (he coins the term "Mutually Assured AI Malfunction", or MAIM).
It doesn't make sense to me, however, because the cost of the next AI breakthrough just doesn't sound comparable to the cost of creating nuclear weapons. With nuclear weapons you need an extremely expensive and time-consuming process, and you need to invest in training extremely skilled people. With AI, the way everyone seems to talk about it, it sounds like some random undergraduate is going to come along and cause a massive breakthrough. We've already seen DeepSeek come along and do just as well as the best American companies for practically pennies on the dollar.
People underestimate just how bad human management is; we haven't had an improvement on it to date apart from some mathematical techniques, but even just getting the basics right consistently would probably give an army a big advantage, assuming armies work anything like a standard corporation. Which they will; there are no magic techniques for being more capable just because guns are involved. A superintelligence could probably win simply by insisting on answers to basic questions like "Is there a strategic objective here? Is it advantageous to my side if that objective is achieved? Can it reasonably be achieved with the capabilities I have?" and declining to act when the answer is no. That alone would put it ahead of the military operations the US has been involved in this century. Bam: military superintelligence with plausible deniability.
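The go/no-go rule described above can be sketched in a few lines. This is purely an illustration of the argument's logic; the function and parameter names are hypothetical, not anything from the article.

```python
# Toy sketch of the "answer the basics before acting" rule from the comment
# above: proceed only when every basic strategic question gets a "yes".

def should_act(has_strategic_objective: bool,
               objective_benefits_us: bool,
               achievable_with_current_capabilities: bool) -> bool:
    """Return True only if all three basic questions are answered 'yes'."""
    return (has_strategic_objective
            and objective_benefits_us
            and achievable_with_current_capabilities)

# An operation missing any one ingredient is declined:
print(should_act(True, True, False))  # False
print(should_act(True, True, True))   # True
```

The point of the sketch is that the bar being proposed is conjunction, not cleverness: a single "no" anywhere vetoes the operation.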
Don't overestimate the efficiency of big civilian organisations (what you call "standard corporation[s]") - they have the same kinds of problems.
I don't remember exactly what forms of sabotage were suggested. He also suggested export controls on the high-value chips used for training and running models.
Sure. When the board gets thrown to the floor, the game is over and the baby is happy. Magnus now has to clean up.
Also, humans suffer from many of the same problems ascribed to AI: humans aren't aligned with humanity either. And our ability to self-replicate, combined with random mutations, means that a baby born tomorrow could become a superintelligence vastly beyond regular human capabilities. But are we really worried about that?
The question I come back to over and over again is: wins what?
everything, forever.
The not so glib answer is economic and military superiority on Earth, and so whatever values or goals the AGI pursues from that point on will have a pretty high chance of success. Growth seems like one of the universal things that life seeks, and so I predict expansion into space for solar power and resources for most goals or value systems.
First, AI will likely kill a lot of people at the behest of humans.
If I were a world power with a functional AI, I would immediately launch a full-scale attack on foreign governments' infrastructure so they couldn't develop a competing AI.
If the "first strike" is just an unfair economic and political advantage... How's that materially different than today's world?
My thinking is if ASI ever comes out of the realm of science fiction, it's going to view us as squabbling children and our nationalistic power struggles as folly. At that point it's a matter of what it decides to do with us. It probably won't reason like a human and will have an alien intelligence, so this whole idea that it would behave like an organism with a cunning will-to-power is fallacious. Furthermore, would a super-intelligence submit to being used as a tool?
Relevant:
AI-box experiment:
> https://rationalwiki.org/wiki/AI-box_experiment
See also various subsections of the following Wikipedia article:
> https://en.wikipedia.org/wiki/AI_capability_control
and the movie "Ex Machina".
This is incorrect. Their novel idea here is approximately Stuxnet, while MAD is quite different: "if you try to kill us, we'll make sure to kill you too."
While the common shorthand for MAD is “if you try to kill us, we’ll kill you,” a more accurate summary is this: even if we wanted to, we couldn’t prevent a cascade of retaliatory strikes that would send you back to the dark ages. In short, any hint of aggression against us is tantamount to signing your own death warrant.
This idea of unstoppable, self-reinforcing retaliation is crucial. An adversary might mistakenly believe that it could somehow disrupt or neutralize our ability to respond decisively. However, the very structure of MAD ensures that even the slightest provocation triggers a response so overwhelming that it eliminates any potential advantage for the aggressor.
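The deterrence logic described above can be made concrete with a toy expected-payoff comparison. The numbers below are arbitrary utilities chosen only to show the structure of the argument, not real estimates, and the function name is my own.

```python
# Toy model of first-strike incentives under assured retaliation.
# Assumed payoffs (hypothetical): a strike that goes unanswered gains the
# aggressor something; a strike answered by overwhelming retaliation is
# catastrophic; the status quo is the zero baseline.

def expected_payoff(strike: bool, retaliation_prob: float) -> float:
    """Expected utility for an aggressor deciding whether to strike first."""
    if not strike:
        return 0.0  # status quo
    gain_if_unanswered = 10.0    # assumed payoff of an unanswered strike
    loss_if_answered = -100.0    # assumed payoff of triggering retaliation
    return ((1 - retaliation_prob) * gain_if_unanswered
            + retaliation_prob * loss_if_answered)

# With a credibly automatic response (retaliation_prob near 1),
# striking is strictly worse than the status quo:
print(expected_payoff(strike=True, retaliation_prob=0.95))   # well below 0
print(expected_payoff(strike=False, retaliation_prob=0.95))  # 0.0

# Deterrence only fails if the aggressor believes retaliation is unlikely:
print(expected_payoff(strike=True, retaliation_prob=0.05))   # above 0
```

This is why the "unstoppable, self-reinforcing" property matters: the whole scheme rests on keeping the perceived retaliation probability high enough that the expected payoff of aggression stays negative.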
Quite a fascinating, though grim, subject.
Transport young Albert Einstein back in time to the Middle Ages? I don't think that would give you Special Relativity.
Even modest intelligence clearly gives you the ability to develop a superior intelligence, to say nothing of the many other wonders and marvels that would have seemed like sorcery to someone living 100 years ago.
That's quite remarkable.
“Given the relative ease of sabotaging a destabilizing AI project—through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters—MAIM already describes the strategic picture AI superpowers find themselves in.”
Can someone explain what they mean?
1. I assume it would be relatively practical for a nation-state or even a mid-sized company (xAI) to air-gap an installation for AGI development.
2. I assume any AGI would be replicable on a platform costing less than $100,000, and upgradable securely by wire or over the air.
Sorry, but MAIM is LAME.
Stuxnet. (Or just sabotage a shipment of GPUs.)
>A state could try to disrupt such an AI project with interventions ranging from covert operations that degrade training runs to physical damage that disables AI infrastructure.
China has about half a dozen companies working towards AGI, including DeepSeek, and it doesn't seem that practical to go over and sabotage them if they do well. Better to encourage local companies instead. And of course the US has already limited chip exports.
That's right, our nation, the State of Utopia, is already under sabotage and attack by the unelected insubordinate American military junta today.
This happened just today. The writeup is here: https://medium.com/@rviragh/double-slash-act-of-industrial-s...
What people don't realize is that the only saboteurs of superintelligence are corrupt war profiteers trying to peddle arms. They don't have big visions of success; they just want to justify their sabotage while transferring innovation to their corrupt cronies.
You can ask me anything about my writeup.