I also find any argument saying 'don't worry about AI' completely illogical, and unlike the author I don't mind stating why. I have yet to hear any argument sufficiently persuasive to convince me that there is zero risk from AGI. I am an AI researcher, and while I think a lot of the risks are overblown, I cannot prove that AI is not some sort of existential risk. Even if you put the likelihood of that at less than 1%, that still warrants a lot of research and effort to help prevent it from happening. There are no second chances; once the world ends, that's it. Life is not a video game. This is true of any sufficiently powerful technology: if it gets into the wrong hands or is abused, it can be very dangerous. Einstein didn't think up general relativity to develop nuclear weapons.
The steam engine made a hash of the civilizational moral imperative to work for one's living. Suddenly, instead of N people's labor being needed to support N people, 0.1N or 0.2N people's labor could support N people. What was there for the others to do? So all -- exactly all -- of the political debates of the long 19th and 20th centuries (where politics is taken to include war) were based on the pretense that the steam engine had no such consequence.
Or how about radio? The invention of radio made it impossible to secure national boundaries. Ask any dictator (and every nation has one): if information can cross borders, then it does not even matter whether persons or money or goods also can. School's out, game's over. Do you want to seal your border? What (in principle) a fine idea! Just round up all radio-based technology and put it beyond use. But "all" means everywhere in the world, and in space.
Briefly, the human species has the rational capacity to invent toys for itself but absolutely no emotional capacity to think through or confront the consequences of possessing them or the implications of using them.
This is the doom of AI, just as it was and still is the doom of the printing press, the steam engine, radio, television, social media, etc., etc., etc.
And his solution is for them to realize that they are net-negative and just... quit?
If somebody sincerely believes a comet is about to hit Earth and kill everyone unless we all get together and send a bunch of miner-astronauts to Armageddon it to smithereens, I don't think telling them "stop trying to fix it, you're just causing a panic and wasting resources, sit back and let everyone else work on their problems that have a higher chance of being resolved" would be very convincing. To them, you would look quite stupid. And mentioning in passing that you think everything they believe is silly sci-fi, without justifying it, wouldn't help much.
Edit: ok, off topic, but apparently Robert Morris is friends with Paul Graham and a part of YC? I had no idea lol.
OpenAI's investors had only the leverage of "we won't invest more and you're running an operating loss"; they did not actually have a case in court. The investment contract they signed stated "view your investment as a donation" and "this investment confers no fiduciary duties from the board".
The board had the leverage of "you employees and investors value your stock holdings, and if the company fails, those stock holdings go to zero".
The resolution, I think (I could be wrong), was just to initially reduce the size of the board to three. So it isn't completely guaranteed which side won the battle for control: it's not impossible that the AI safety people might still control the board. Maybe one or both of the new board members is secretly convinced by the AI safety movement.
The battle for public opinion is a different battle, and it looks extremely likely that the AI safety people lost that one. Maybe it was still worth it for them, if they did in fact win, or at least stalemate, the battle for control of OpenAI.
So, the reasoning for my disagreeing is largely based on the lack of evidence that, as a society, we just can't help but fire the footguns.
Edit: also, let's look at the precedent set by the socials on how companies will use tech/data/manipulation/etc. Not good there either.
I am not interested in news that somebody died because of a hammer.
I want to know who wielded the hammer.
Happened with MTV
Happened with video games
Happened with the internet
Happened with smartphones
etc.
The accel/decel debate is the same; midwits rushing to find a tribe so they can go to war online and in the media. There's certainly a discussion worth having about AI and 'safety' but it's not taking place in headlines or on the socials.
Thankfully there's a lot of builders too busy building to engage with the commentariat.
Yes, these people exist, but they are so few that this argument is essentially a strawman. "AI could take our jobs" is a valid concern, and it is something that our political structures are wholly incapable of dealing with.
Most "AI Doomers" aren't worried about Skynet, they're worried about 90% unemployment.
There are also people worried about extinction, who call themselves "AI Alignment" people (more recently some have taken to calling themselves "AI notkilleveryoneists," because they felt like every word they invented to describe their worries was soon applied to worries about jobs, systemic racism, etc.).
Importantly, these people and their worries are responsible for the special structure of OpenAI, which seems to have been founded at least partially by people who believe this. It also seems to be the core reason behind the sama drama. That is what the article discusses, and it is about the notkilleveryoneists, not the jobs doomers, though those exist too.