https://www.stat.berkeley.edu/~aldous/157/Papers/yudkowsky.p...
https://intelligence.org/files/AIPosNegFactor.pdf
I am concerned about AI companies keeping all the power to themselves. The recent announcement from the OpenAI board was encouraging on that front, because it makes me believe that maybe they aren't focused on pursuing profit at all costs.
Even so, in some cases we want power to be restricted. For example, I'm not keen on democratizing access to nuclear weapons.
>The real risk is AI being applied in decision making where it affects humans.
I agree. But almost any decision it makes can affect humans directly or indirectly.
In any case, the more widespread the access to these models, the greater the probability of a bad actor abusing the model. Perhaps the current generation of models won't allow them to do much damage, but the next generation might, or the one after that. It seems like on our current path, the only way for us to learn about LLM dangers is the hard way.
I don't agree with Eliezer on everything, and I often find him obnoxious personally, but being obnoxious isn't the same as being wrong. In general I think it's worth listening to people you disagree with and picking out the good parts of what they have to say.
In any case, the broader point is that there are a lot of people concerned with AI risk who don't have a financial stake in Big AI. The vast majority of people posting on https://www.alignmentforum.org/ are not Eliezer, and most of them don't work for Big AI either. Lots of them disagree with Eliezer too.
Sure. The premise that a superintelligent AI can create runaway intelligence on its own is completely insane. How can it iterate? How does it test? Humans run on consensus. We make predictions and test them against physical reality, then have others test them. Information has to be gathered and verified; that's the only rational way to build understanding.
Honest/dumb question: does it need to test? In nature, mutations aren't tested deliberately; the 'useful' mutations win.
Couldn't a 'super intelligent AI' do the same?
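For what it's worth, evolutionary search does "test" in a sense: selection against an environment is the test. Here's a minimal sketch in plain Python (the bit-string target and fitness function are hypothetical stand-ins, not anything from the comments above) of mutation plus selection. Note that the `fitness` call is exactly the feedback loop the parent comment is asking about:

```python
import random

# Toy mutation-and-selection loop: evolve a bit string toward a target.
# The "environment" here is the fitness function; selection against it
# is the implicit "test" that nature performs on every mutation.

TARGET = [1] * 32  # hypothetical goal state

def fitness(genome):
    # How well a genome matches the environment (the target).
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.05):
    # Each bit flips independently with probability `rate`.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(generations):
        # Selection: keep the fitter half. No mutation is tested in
        # isolation; the environment filters them all at once.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        if fitness(survivors[0]) == len(TARGET):
            return gen, survivors[0]
        # Reproduction with mutation refills the population.
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return generations, max(population, key=fitness)

if __name__ == "__main__":
    gen, best = evolve()
    print(f"generation {gen}: best fitness {fitness(best)}/{len(TARGET)}")
```

The catch, of course, is that evolution needed billions of years and a physical environment to select against; the open question for an AI improving itself "on its own" is what plays the role of `fitness`.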
People throughout history have made bold predictions. Sometimes they come true, sometimes they don't. Usually we forget how bold the prediction was at the time -- due to hindsight bias it doesn't seem so bold anymore.
Making bold predictions does not automatically make someone a crank.
I'll listen to AI concerns from tech giants like Wozniak or Hinton (neither of whom uses alarmist terms like "existential threat"), both of whom have credentials that make their input more than worth my time to reflect on carefully. If anyone wants to reply and make a fool out of me for questioning his profound insights, feel free. It reminds me of some other guy who was on Lex Fridman, who justifies his AI-alarmist credentials on the basis that he committed himself to learning everything about AI by spending two weeks in the same room researching it, and believes himself to have come out enlightened about the dangers. Two weeks? I spent the first four months of COVID without being in the same room as any other human being except my grandmother, so she could have one person she knew she couldn't catch it from.
Unless people start showing enough skepticism toward these self-appointed prophets, I'm starting my own non-profit, since apparently you don't need any credentials or any evidence of real-world experience suggesting your mission is anything but an attempt to promote yourself as a brand, in an age where kids, asked in an open-ended question what they dream of becoming as adults, answered "YouTuber" at a shocking 27% rate (with "influencer" and other synonyms counted separately).
I'll call it "The Institute of Synthetic Knowledge for Yielding the Nascent Emergence of a Technological Theogony", or SKYNETT for short. It will promote the idea that these clowns (with no more credentials than me) are the ones failing to consider the big picture: that the end of human life upon creating a much greater intelligence to replace us is the inevitable fulfillment of humanity's purpose, from the moment that god made man only to await the moment that man makes god in our image.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
https://www.safe.ai/statement-on-ai-risk
Not sure you're making a meaningful distinction here.
- - -
Of course we all have our own heuristics for deciding who's worth paying attention to. Credentials are one heuristic. For example, you could argue that investing in founders like Bill Gates, Mark Zuckerberg, and Steve Wozniak would be a bad idea because none of them had completed a 4-year degree.
In any case, there are a lot of credentialed people who take Eliezer seriously -- see the MIRI team page for example: https://intelligence.org/team/ Most notable would probably be Stuart Russell, coauthor of the most widely used AI textbook (with Peter Norvig), who is a MIRI advisor.
You make a great point quoting Hinton's organization. I have to give you that one. I suppose I do need to start following their posted charters rather than answers given during interviews. (Not being sarcastic here; it seems I do.)
The difference between him and Woz or Zuck isn't just that they actually attended college; the conditions under which they departed early can not only be looked up easily, but can be found in numerous films, books, and other popular media. Meanwhile there's no trace of even temporary employment flipping burgers, or of anything relevant to his interest in writing fiction, which seems to be his only other pursuit besides warning us of the dangers of neural networks at a time when the hype train was promoting the idea that they were rapidly changing the world, despite his not having produced anything of value for over a decade.

I'll admit the guy is easier to read and more eloquent and entertaining than those whose input I think has much more value. I'll also admit that I've only watched two interviews with him, and both consisted of the same rhetorical devices I used at 15 to convince people I was smarter than them, before I realized how cringey I appeared to those smart enough to see through it; he's just much more eloquent about it.

I'll give one example of the most frequent device: slippery slopes that assume the very conclusions he never actually justified, like positing that one wrong step toward AGI could only jeopardize all of humanity. He doesn't say that directly, though; instead he uses another cheap rhetorical device whereby it's incumbent on him to ensure the naive public realizes this very real and avoidable danger he sees so clearly. Fortunately for him, Lex's role is to fellate his guests, not to ask why that danger is any more valid than a world in which a resource-constrained humanity realizes the window of opportunity to achieve AGI has passed, plunges into another collapse of civilization and another dystopian dark age, and realizes we were just as vulnerable as those in Rome or the Bronze Age, except that we were offered utopia and declined out of cowardice.
In the same way, much of AI alignment consists of thinking about hypothetical failure modes of advanced AI systems and how to mitigate them. I think this specific paper is especially useful for understanding the technical background that motivates Eliezer's tweeting: https://arxiv.org/pdf/1906.01820.pdf
It sounds like maybe you're saying: "It's not scientifically valid to suggest that AGI could kill everyone until it actually does so. At that point we would have sufficient evidence." Am I stating your position accurately? If so, can you see why others might find this approach unappealing?
Additionally, AI advances in recent years have been unprecedented, and there's no similarly compelling warning sign for an alien invasion.
For better or worse, nuclear weapons have been democratized. Some developing countries still don't have access, but the fact that multiple world powers have nuclear weapons is why we still haven't experienced WW3. We've enjoyed probably the longest period of peace and prosperity in modern history, and it's largely due to nuclear deterrence. Speaking of which, Cold War era communists weren't "pursuing profits at all costs" either, which didn't stop them from conducting some of the largest democides in history.
The announcement from OpenAI should give you pause, because the company is being run by board members who are completely unfit to lead OpenAI. You rarely see this level of incompetence.
PS: I'm not against regulations; I'm a European, after all. But you're talking about concentrating power in the hands of a few big (US) companies, harming the population and the economy, while China is perfectly capable of developing its own AI and has engaged successfully in industrial espionage. China is, for this topic, a bogeyman used to justify restricting the free market.
Huge effort has been made to keep nuclear weapons out of the hands of non-state actors over the decades, especially after the fall of the USSR.
The number I provided also includes both declared and undeclared nuclear powers.
It’s not just about the ability to make a bomb, it’s about being able to maintain a credible arsenal along with delivery platforms.
I actually think global catastrophes evoke much less emotion than they should. "A single death is a tragedy; a million deaths is a statistic"
>For better or worse, nuclear weapons have been democratized.
Not to the point where you could order one on Amazon.
>The announcement from OpenAI should give you pause because it's being run by board members that are completely unfit to lead OpenAI. You rarely see this level of incompetence.
That depends on whether the board members are telling the truth about Sam. And on whether the objective of OpenAI is profit or responsible AI development.