https://www.lesswrong.com/posts/TxcRbCYHaeL59aY7E/meditation...
Yann LeCun is one of the loudest AI open source proponents right now (which of course jibes with Meta's very deliberate open-source jab at OpenAI). And when you listen to smart guys like him talk, you realize that even he doesn't really grasp the problem (or if he does, he pretends not to).
"""
But what we are seeing too often is a calorie-free media panic where prominent individuals — including scientists and experts we deeply admire — keep showing up in our push alerts because they vaguely liken AI to nuclear weapons or the future risk from misaligned AI to pandemics. Even if their concerns are accurate in the medium to long term, getting addicted to the news cycle in the service of prudent risk management gets counterproductive very quickly.
## AI and nuclear weapons are not the same
From ChatGPT to the proliferation of increasingly realistic AI-generated images, there’s little doubt that machine learning is progressing rapidly. Yet there’s often a striking lack of understanding about what exactly is happening. This curious blend of keen interest and vague comprehension has fueled a torrent of chattering-class clickbait, teeming with muddled analogies. Take, for instance, the pervasive comparison likening AI to nuclear weapons — a trope that continues to sweep through media outlets and congressional chambers alike.
While AI and nuclear weapons are both capable of ushering in consequential change, they remain fundamentally distinct. Nuclear weapons are a specific class of technology developed for destruction on a massive scale, and — despite some ill-fated and short-lived Cold War attempts to use nuclear weapons for peaceful construction — they have no utility other than causing (or threatening to cause) destruction. Moreover, any potential use of nuclear weapons lies entirely in the hands of nation-states. In contrast, AI covers a vast field ranging from social media algorithms to national security to advanced medical diagnostics. It can be employed by both governments and private citizens with relative ease.
"""
Let's stop contributing to this "calorie-free media panic" with such specious analogies.
I think the true fear is that in an AI age, humans are not "useful" and the market and economy will look very different. With AI growing our food, clothing us, building us houses, and entertaining us, humans don't really have anything to do all day.
"Hackers aren't a problem because we have cybersecurity engineers". And yet somehow entire enterprises and governments are occasionally taken down.
What prevents issues in red-team/blue-team exercises is having teams invested in the survivability of the people their organization serves. That breaks down a bit when all it takes is one biomedical researcher whose wife just left him asking an AI to help him craft a society-ending infectious agent. Force multipliers are somewhat tempered when put in the hands of governments, but not so much in the hands of individuals. That's why individuals are allowed to own firearms in some countries and not in others, but in no country may individuals legally possess or manufacture WMDs. If everyone has equal and easy access to WMDs, advanced civilization ends.
What on earth could this possibly mean in practice? Two elephants fighting is not at all good for the mice below.
Do we have reason to believe that giving the ingredients of AGI out to the general public accelerates safety research faster than capabilities research?
It’s no different than inviting an advanced alien species to visit. Will it go well? Sure hope so, because if they don't want it to go well, it won't be our planet anymore.
People who are saying AI is dangerous are saying people are dangerous when empowered with AI, and that's why only the right people should have access to it (who presumably are the ones lobbying for legislation right now).
What does that buy us? An extra decade?
I don't know where this leaves us. If you're in the MIRI camp, believing AI has to lead to a runaway intelligence explosion with unfathomable godlike abilities, I don't see a lot of hope. If you believe that outcome is inevitable, then as far as I'm concerned it truly is. First, I think formally provable alignment of an arbitrary software system with "human values," however nebulously you define them, is fundamentally impossible. But even if it were possible, it's also fundamentally impossible to guarantee in perpetuity that every implementation of a system will forever adhere to your formal proof methods. After 50 years, we still can't get developers to consistently use strnlen. As far as I can tell, if a sufficiently advanced AI can take over its light cone and extinguish all value from the universe, or whatever they're up to now on the worry scale, then it will.
I mean, once the discussion goes THIS far off the rails of reality, where do we go from here?
If we are talking about AI stoking human fears and weaknesses to make people do awful things, then OK, I can see that, and I'm afraid we've been there for some time already with recommendation algorithms and AI journalism.
At best, maybe it adds a new level of sophistication to phishing attacks. That's all I can think of. Terminators walking the streets murdering grandma? I just don't see it.
What I think is most likely is a handful of companies trying to sell enterprises on ML, which has been going on forever; YouTubers making even funnier "Presidents discuss anime" vids; and 4chan doing what 4chan does, but faster.
It doesn't take a whole lot of imagination to come up with scenarios like these.
More banally, state actors can already use open source models to efficiently create misinformation. It took what, 60,000 votes to swing the US election in 2016? Imagine what astroturfing can be done with 100x the labor thanks to LLMs.
So you're saying that:
1. the religious nut would not find the same information on Google or in books
2. if someone is motivated enough to commit such an act, the ease of use of AI vs. web search would make a difference
Has anyone checked how many biology students can prepare dangerous substances with just what they learned in school?
Have we removed the sites disseminating dangerous information off the internet first? What is to stop someone from training a model on such data anytime they want?
This is a conspiracy theory that gained popularity around the election, but in the first impeachment hearings, those alleging foreign interference failed to produce any evidence whatsoever of a real targeted campaign run in collusion with the Russian government.
It’s true we are fucked if bioweapons become easy to make, but that is not a question of "AI".