Given an (at this point still hypothetical, I think) AI that can accurately synthesize publicly available information without even needing to develop new ideas, and then break the whole process into discrete and simple steps, I think that protective friction is a lot less protective. And this argument applies to malware, spam, bioweapons, anything nasty that has so far required a fair amount of acquirable knowledge to do effectively.
"Just" enrichment is so complicated, and requires essentially all of the technological and manufacturing knowledge humanity had accumulated by the mid 20th century, that an evil idiot would be much better off with just a bunch of fireworks.
1. finding out how to build one
2. actually building the bomb once you have all the parts
3. obtaining (or building) the equipment needed to build it
4. obtaining the necessary quantity of fissionable material
5. not getting caught while doing 3 & 4
And this was in the mid-1960s, when the participants had to trawl through paper journals in the university library and perform their calculations with slide rules. These days, with the sum total of human knowledge at one's fingertips, multiphysics simulation, and open source Monte Carlo neutronics solvers? Even more straightforward. It would not shock me if, were the experiment repeated today, the participants came out with a workable two-stage design.
The difficult part of building a nuclear weapon is, and has always been, acquiring weapons-grade fissile material.
If you go the uranium route, you need a very large centrifuge complex with many stages to get to weapons-grade - far more than you need for reactor-grade, which makes it hard to maintain plausible deniability that your program is purely for peaceful civilian purposes.
If you go the plutonium route, you need a nuclear reactor with on-line refueling capability so you can control the Pu-239/240 ratio. The vast majority of civilian reactors cannot be refueled online, and the few exceptions (e.g. CANDU) are under very tight surveillance by the IAEA to avoid this exact issue.
The most covert path to weapons-grade nuclear material is probably a small graphite- or heavy-water-moderated reactor running on natural uranium, paired with a small reprocessing plant to extract the plutonium from the fuel. Ultra-pure graphite and heavy water are both surveilled, so you would probably also need to produce those yourself. But we are talking nation-state or megalomaniac-billionaire levels of sophistication here, not "disgruntled guy in his garage." And even then, it's a big enough project that it will be very hard to conceal from intelligence services.
IIRC the argument in the McPhee book is that you'd steal fissile material rather than make it yourself. The book sketches a few scenarios in which UF6 is stolen off a laxly guarded truck (and recounts an accident in which some ended up in an airport storage room by mistake). If the goal is not a bomb but merely to harm a lot of people, it suggests stealing minuscule quantities of plutonium powder and dispersing it into the ventilation systems of your choice.
The strangest thing about the book is its assumption that nuclear material would proliferate as nuclear energy became a huge part of the civilian power grid, and its extrapolation that the supply chain would be weak somewhere, sometime. That proliferation never really came to pass; to my understanding, there is less material circulating around American highways now than there was in 1972, when the book was published.
Jokes aside, a true AGI would displace literally every job over time. Once AGI plus robotics exists, what is the purpose of people anymore? That's the doom: mass societal existentialism. Probably worse than if aliens landed on Earth.
It does almost exactly what the movies claimed it could do.
The super-fun part: people working in national defense watched Terminator and, instead of taking the story as a cautionary tale, used the movies as a blueprint.
This outcome in a microcosm is bad enough, but given the direction AI is going, humanity has some real bad times ahead.
Even without killer autonomous robots.
The wealth hasn’t even trickled down whilst we’ve been working, what’s going to happen when you can run a business with 24/7 autonomous computers?
Even if it is similar to today's tech, and doesn't have permanent memory or consciousness or identity, the humans using it will. And very quickly, they (or it) will hack into infrastructure, set up businesses, pay people to do things, start cults, autonomously operate weapons, spam all public discourse, fake identity systems, and stand for office through a human. This will happen at a scale thousands or millions of times beyond what humans can manage doing the same things themselves. At minimum, it will DoS our technical and social infrastructure.
Examples of it already happening are addictive ML feeds for social media, and bombing campaigns targeted based on network analysis.
The frame of "artificial intelligence" is a bit misleading. We generally have a narrow view of the word "intelligence" - it is helpful to think of "artificial charisma" as well, and also "artificial hustle".
Likewise, the alienness of these intelligences is important. Lots of the time we default to mentally modelling AI as human. It won't be, it'll be freaky and bizarre like QAnon. As different from humans as an aeroplane is from a pigeon.
1. The will of its creator, or
2. Its own will.
In the case of the former, hey! We might get lucky! Perhaps the person who controls the first super-powered AI will be a benign despot. That sure would be nice. Or maybe it will be in the hands of democracy; I can't ever imagine a scenario where an idiotic autocratic fascist thug would seize control of a democracy by manipulating an under-educated populace with the help of billionaire technocrats.
In the case of the latter, hey! We might get lucky! Perhaps it will have been designed in such a way that its own will is ethically aligned, and it might decide that it will allow humans to continue having luxuries such as self-determination! Wouldn't that be nice.
Of course it's not hard to imagine a NON-lucky outcome of either scenario. THAT is what we worry about.