It gets old hearing about these "risks" in the context of AI. It's just an excuse used by companies to keep as much power as possible to themselves. The real risk is AI being applied in decision making where it affects humans.
https://www.stat.berkeley.edu/~aldous/157/Papers/yudkowsky.p...
https://intelligence.org/files/AIPosNegFactor.pdf
I am concerned with AI companies keeping all the power to themselves. The recent announcement from the OpenAI board was encouraging on that front, because it makes me believe that maybe they aren't focused on pursuing profit at all costs.
Even so, in some cases we want power to be restricted. For example, I'm not keen on democratizing access to nuclear weapons.
>The real risk is AI being applied in decision making where it affects humans.
I agree. But almost any decision it makes can affect humans directly or indirectly.
In any case, the more widespread the access to these models, the greater the probability of a bad actor abusing the model. Perhaps the current generation of models won't allow them to do much damage, but the next generation might, or the one after that. It seems like on our current path, the only way for us to learn about LLM dangers is the hard way.
I don't agree with Eliezer on everything, and I often find him obnoxious personally, but being obnoxious isn't the same as being wrong. In general I think it's worth listening to people you disagree with and picking out the good parts of what they have to say.
In any case, the broader point is that there are a lot of people concerned with AI risk who don't have a financial stake in Big AI. The vast majority of people posting on https://www.alignmentforum.org/ are not Eliezer, and most of them don't work for Big AI either. Lots of them disagree with Eliezer too.
I'll listen to AI concerns from tech luminaries like Wozniak or Hinton, neither of whom uses alarmist terms like "existential threat", and both of whom have credentials that make their input more than worth my time to reflect on carefully. If anyone wants to reply and make a fool of me for questioning his profound insights, feel free.
It reminds me of some other guy who was on Lex Fridman, who justifies his AI-alarmist credentials on the basis that he committed himself to learning everything about AI by spending two weeks in the same room researching it, and believes he came out enlightened about the dangers. Two weeks? I spent the first four months of COVID without being in the same room as any other human being but my grandmother, so she could have one person she knew she couldn't catch it from.
Unless people start showing more skepticism toward these self-appointed prophets, I'm starting my own non-profit. After all, you don't need credentials or any evidence of real-world experience suggesting your mission is anything but self-promotion as a brand, in an age where a shocking 27% of kids, asked an open-ended question about what they dream of becoming as adults, answered "YouTuber" (with "influencer" and other synonyms counted separately).
I'll call it "The Institute of Synthetic Knowledge for Yielding the Nascent Emergence of a Technological Theogony", or SKYNETT for short. It will promote the idea that these clowns (with no more credentials than me) are the ones failing to consider the big picture: that the end of human life upon creating an intelligence much greater than ourselves to replace us is the inevitable fulfillment of humanity's purpose, set in motion the moment God made man, only to await the moment man makes god in our image.
It sounds like maybe you're saying: "It's not scientifically valid to suggest that AGI could kill everyone until it actually does so. At that point we would have sufficient evidence." Am I stating your position accurately? If so, can you see why others might find this approach unappealing?
For better or worse, nuclear weapons have been democratized. Some developing countries still don't have access, but the fact that multiple world powers have nuclear weapons is why we still haven't experienced WW3. We've enjoyed probably the longest period of peace and prosperity in modern history, largely thanks to nuclear deterrence. Speaking of which, Cold War-era communists weren't “pursuing profits at all costs” either, which didn't stop them from conducting some of the largest democides in history.
The announcement from OpenAI should give you pause because it's being run by board members that are completely unfit to lead OpenAI. You rarely see this level of incompetence.
PS: I'm not against regulations; I'm a European, after all. But you're talking about concentrating power in the hands of a few big (US) companies, harming the population and the economy, while China is perfectly capable of developing its own AI and has already engaged successfully in industrial espionage. On this topic, China is a bogeyman used to justify restricting the free market.
Huge effort has been made to keep nuclear weapons out of the hands of non-state actors over the decades, especially after the fall of the USSR.
I actually think global catastrophes evoke much less emotion than they should. "A single death is a tragedy; a million deaths is a statistic"
>For better or worse, nuclear weapons have been democratized.
Not to the point where you could order one on Amazon.
>The announcement from OpenAI should give you pause because it's being run by board members that are completely unfit to lead OpenAI. You rarely see this level of incompetence.
That depends on whether the board members are telling the truth about Sam. And on whether the objective of OpenAI is profit or responsible AI development.
"What if ChatGPT told someone how to build a bomb?"
That information has been out there forever. Anyone can Google it. It's trivial. AI not required.
"What if ChatGPT told someone how to build a nuke?"
That information is only known to a handful of people in a handful of countries and is closely guarded. It's not in the text ChatGPT was trained on. An LLM is not going to just figure it out from publicly available info.
>The real risk is AI being applied in decision making where it affects humans
100% this. The real risk is people being denied mortgages and jobs, or being falsely identified as criminal suspects, or in some other way having their lives turned upside down by an algorithmic decision, with no recourse to have an actual human review the case and overturn it. Yet all the focus is on AI telling people how to develop bioweapons, or possibly saying something offensive.