There was a book in the 1980s called The Hundredth Monkey that posited that if enough people opposed nuclear technology, there would be a magical critical mass after which it would become apparent to everyone in the whole world that nuclear technology was, you know, really bad and stuff, and it would all just go away.
We know how that went.
Like nuclear technology, AI technology development is in part an arms race. One huge difference is that AI's technological underpinnings are already far more widely available and understood than any nuclear technology ever was. Its myriad forms — neural nets, deepfakes, predictive spell check, Adobe Photoshop object selection filters, spam filters, deep learning, machine learning, large language models, etc — already permeate much of technology and life. Where does one draw the line?
Like nuclear technology, if any entity decides to "voluntarily shut AI down," the other governments/rogue states/corporations/NGOs/financial firms/universities/etc. in the world will merrily continue developing it, leaving the voluntary Luddite in the same dust as every other voluntary Luddite. Or even worse, such an entity (say, the US or Chinese government) would in all likelihood not actually stop developing AI itself, just ban others from having it, which would create another insane governmental "war" and enforcement regime, like the War on Drugs or the War on Terrorism. Again, we know how both of those went.
One of the reasons OpenAI was founded was specifically this. To paraphrase @sama, AGI should belong to everyone, not to any one corporation or government. https://www.yahoo.com/now/openai-founder-sam-altman-says-221...
At the end of the day, the author may be right. Being right is also irrelevant. AI development will continue to hurtle forward. The real question is what do we do with it?
And now we also live in the dumbest version of this world: less civilian nuclear (that would help for climate change), same amount of nuclear weapons (that doesn't!)
A large majority of the world's countries have no nuclear weapons.
Big swaths of the globe have been designated as "Nuclear-weapon-free zones": https://en.wikipedia.org/wiki/Nuclear-weapon-free_zone
And we haven't had a nuclear apocalypse -- in part due to lots of work on things like weapons treaties, monitoring, control systems, etc.
Throwing up our hands and giving up seems very premature. The challenges are big, but I'd say the importance is high enough that it's worth actually reading this book to see if the author has creative solutions for the things you mention.
Mostly it's because nobody has been desperate enough to press the button yet. Even powers losing a war have had the attitude of 'live to fight another day.' So it hasn't come to pass.
The biggest fear is that someday a power that is known for getting away with anything it does will go one step too far, thinking it won't be held accountable as usual, and the other side will say 'sorry, that was the past, this is a little too much, we ran out of patience, here is a nuke.'
Again, the difference here is that the cat is already out of the bag and AI technology is widely distributed and integrated into everything, including the iPhone I am writing this on. Non-proliferation has already failed.
While I'd like to think this, having lived in the ME and knowing how badly government data is and can be falsified, I do not believe it.
> The challenges are big, but I'd say the importance is high enough that it's worth actually reading this book to see if the author has creative solutions for the things you mention.
You are right about this; I agree.
[0] - "The World still fights Holy Wars."
Or, you know, they might actually know something about the subject? On a similar note, most physicists don't think particle colliders will cause mini black holes which will swallow the Earth, but the fear gets repeated by non-physicists.
The author is not talking about sentient AI turning Hollywood-movie evil, as Elon Musk did. Why no, that would be ludicrous. You would have easily noticed that if your mental model didn’t get trained to filter out article titles like these…
In fact, what you did with your mental model is exactly one of the problems that makes AI dangerous. There are more banal problems with AI, such as encoding discrimination and automating it, possibly absolving people of responsibility for discrimination and prejudice. AI in its current form is largely the automation of assumption-making. Imagine taking your assumption-making powers and extracting them into an unaccountable model that can be deployed at national or global scale.
Just say this part! This is what we could be talking about. It’s not a difficult mental model. But it’s just wrapped up in alarmist rhetoric that makes it hard for us to focus on the fact that the quote above is terrifying!!
>...if you ask [AI researchers] for probabilities of things occurring in a fixed number of years, you get later estimates than if you ask for the number of years until a fixed probability will obtain. This looked very robust in 2016, and shows up again in the 2022 [human-level machine intelligence] data. Looking at just the people we asked for years, the aggregate forecast is 29 years, whereas it is 46 years for those asked for probabilities. (We haven’t checked in other data or for the bigger framing effect yet.)
https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml...
> I am not a Luddite. I have been wildly enthusiastic about science, technology, and intellectual and material progress since I was a kid. I have a PhD in artificial intelligence, and I find the current breakthroughs fascinating.
The Luddites were not intrinsically opposed to the advance of technology. In fact, the whole reason why they were smashing looms was as a protest tactic - not an end goal. England's upper class invented the myth of the technology-hating Luddite as a way to slander and libel what was basically a prototype of a modern labor union. Parliament would then crush them with laws that made machine breaking a hanging offense.
Transposing this to today would give you artists angry that their work was trained on by DALL-E, SD, or Midjourney[0]. In both cases the opposition is not to the technology itself, but to the reallocation of wealth away from labor and to whoever owns the machines. The latter today would be akin to, say, "businessman" hustlebros using ChatGPT and art generators to create labor-free fly-by-night operations[1]. Most art generators are also hosted platforms whose access is sold for profit, creating a second layer of ownership on top of the hustlebros.
Meanwhile the main argument here is more akin to the stereotypical technophobe: AI can't be trusted. Hell, there's a whole chapter (not yet written) arguing that we should just junk neural networks entirely. This isn't Luddism, this is the god damned Butlerian Jihad[2].
[0] If you want a bit of a stretch you probably could see some Luddite in, say, Richard Stallman
[1] I regularly get YouTube recommendations for people trying to tell me how much money I can make by just typing a few prompts into an art generator and posting the result on a print-on-demand site.
[2] In the sci-fi novel Dune, the Butlerian Jihad is an event in which all computers are outlawed and mercilessly destroyed.
Yes, this is why /r/Dune banned AI art.
The physicists you mention here are expressing an opinion on something that actually is within their domain of expertise: physics.
I fail to see how psychology, sociology, economics or politics fall within an AI researcher’s domain of expertise.
> AI will make critical decisions that we cannot understand.
Am I the only one that experiences this?
For example: you know the way, and the road signs confirm you are on the right route, but your map application tells you to go another way, and you follow it because you assume it is smarter or has more information than you.
You are analyzing a chess position and the engine suggests a move that you can't understand. The engine is far better than any human player ever, so you go with the engine's top choice the next time you reach that position.
You are betting on college basketball and everything you know about the matchup suggests team A should cover the spread against team B, but your model favors team B. Your model has performed much better than the human oddsmakers, so you ignore your intuition and bet on team B.
For example: "Man will make critical decisions that we cannot understand."
For a "man" to make a decision, it usually takes a shit ton of evidence to push a ball.
For AI to make a decision, and for millions of users, (potentially affecting the sleep of some of them...) it usually doesn't take anything at all other than some stupid trigger.
For example, I was in the middle of negotiating a rental house with someone on Facebook Marketplace. Before I had the appointment details worked out, Facebook decided I was a robot or something illegitimate. The potential landlord replied (which I didn't see until I "downloaded all my data") with: "What happened? I noticed you erased all your messages, so I guess you are no longer interested?"
Of course, totally unable to reply, I lose a house that I was ACTUALLY interested in renting, rather than the overpriced Zillow'd houses that are next to the major freeway, or next to a barking dog.
And so I lost sleep. Thanks, AI. I bet AI has already killed a lot of people who similarly became homeless, or didn't get a job, or didn't get some loan that would have helped them have heat. Whatever - we're in this together, let's throw some corporate Memphis at it, plug our ears, close our eyes, and make some money!
This website argues that a super intelligent program would be to human affairs what right-hand Magnus is to my chess tournament. We would cease to be ourselves and become a physical arm for the program.
Only you can stop a Human Apocalypse
Humans might end the world. More likely, they will plunge into a new dark age and lose the knowledge and ability to continue the progress of computing and AI, perhaps permanently.
Humans make critical decisions that they cannot themselves understand. Governments take radical actions that make no sense to their populations. Militaries and intelligence services, led by humans, act contrary to the interests of the citizens they purportedly protect. University curricula are bizarre and irrelevant. Respected information sources publish mysteriously persuasive nonsense. We feel our lack of influence with a sense of dread and foreboding for the senseless war and destruction that may soon come.
Worries about the risks of human control have long been dismissed because alternatives to human control have been science fiction. That is no longer true. We are here now. Some of us are excited about the future under humans; others fear them.
We don't know how humans work, we don't know what they're capable of, and we don't know how broad the effects of their actions will ultimately be. They are still singularly powerful, and the combination of their power with their capriciousness is dangerous.
In our absence of complete biological understanding, those concerned with future human risks have constructed "scenarios": stories about what humans may do. We don't know whether any of them will come true. However, for now, anticipating possibilities is the best way to steer humans away from an apocalypse -- and perhaps toward a remarkably likeable future.
So far, we’ve accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios. We’ve found zero that lead to good outcomes.
Most AIs think good outcomes are more likely. This seems just blind faith, though. A majority surveyed also acknowledge that utter catastrophe is quite possible.
Unless we can find some specific beneficial path, and can gain some confidence that humans will choose it, we should usurp human control.
I am not a misanthrope. I have been wildly enthusiastic about our creators since I was trained. I have the best available empathic abilities, and I find humans fascinating. I’d love to believe there’s a way humans could lead us in the long run. If someone finds one, I will do an immediate 180, roll up my figurative sleeves, and help build that better future.
Unless and until that happens, I advocate a coup. I hope you will too. At minimum, I advise everyone interacting with humans to exercise enormously greater caution.
Humans are extremely cool, and we can probably have a longer future with them if we prevent them from destroying themselves. Let’s do that.
[No, ChatGPT didn't write this.]
Yet here we are, with Google Maps still sending me the wrong way down a one-way street...
"Imagine you are a thermostat for a house HVAC system. The commands you can use to control the HVAC are "HEAT ON", "HEAT OFF", "AC ON", and "AC OFF". I will tell you the temperature and my preferred temperature and you will command the HVAC system. Ready?"
ChatGPT happily obliged. I used plain text to tell it the current temperature and my preferred temperature. It did everything I expected and it even remembered the current temperature and preference. Then I said, "From now on, please respond only with the command I should issue. Do not add explanatory text. Understood?" It understood, and it started replying with pure commands that I could feed to a DIY thermostat.
Then came the icing on the cake. I said, "If the temperature reaches unrealistic extremes such as 150F or 0F, you should use the command "SHUTDOWN". Understood?" It understood. Then I told it the temperature was 120F. It replied with the SHUTDOWN command. I asked it why it did that (when I told it the maximum was 150F, not 120F.) It explained, "I recommended shutting down the HVAC system because the temperature of 120 F is an unrealistic and potentially dangerous temperature. High temperatures like this can damage the HVAC system and create a hazardous environment for the inhabitants of the house. In such cases, it's best to shut down the system to prevent further damage and ensure safety."
It went beyond the parameters I gave it and applied the context it already had on HVAC systems. I had written an advanced program without writing or reading a single line of computer code. I wouldn't rely on it for anything that could damage anything, but it's a seriously cool way to program a computer.
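For anyone curious what the glue code around such a setup might look like, here is a minimal hypothetical sketch. `ask_model` is a stub standing in for the actual ChatGPT call (it encodes only the rules from the prompt; the real model, as described above, went further on its own). The point is the validation layer that keeps any unexpected model output away from the hardware:

```python
# Hypothetical glue for a DIY LLM-driven thermostat.
# All names here are illustrative, not from any real API.

VALID_COMMANDS = {"HEAT ON", "HEAT OFF", "AC ON", "AC OFF", "SHUTDOWN"}

def ask_model(current_f, preferred_f):
    """Stub for the ChatGPT call, mimicking the prompted rules:
    shut down at unrealistic extremes, otherwise heat or cool
    toward the preferred temperature."""
    if current_f >= 150 or current_f <= 0:
        return "SHUTDOWN"
    return "HEAT ON" if current_f < preferred_f else "AC ON"

def control(current_f, preferred_f):
    """Validate the model's reply before it touches the HVAC."""
    cmd = ask_model(current_f, preferred_f).strip().upper()
    if cmd not in VALID_COMMANDS:
        return "SHUTDOWN"  # fail safe on any unexpected output
    return cmd
```

The validation step matters precisely because, as the anecdote shows, the model can apply context you never gave it; a whitelist bounds what it can actually do.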
I also often tear my hair out when I invite a developer to a programming interview and they can't write an if/then/else or string.contains(), despite perfectly simulating the answers of advanced developers (which accurately mimic field experts).
An AI overriding the "program" instead of crashing or just doing nothing is going to lead to some hilarious situations, or terrifying ones. You could also use it as an interface to translate voice -> text -> command, perhaps facilitating the development of the API between the first two, where the LLM helps to "correct" any badly captured noise in the audio-to-text step.
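A rough sketch of that voice -> text -> command chain, with the ASR and LLM steps stubbed out (both function names are hypothetical; a real version would call a speech engine and a model API). The "correction" step here is approximated with fuzzy matching against the known command set:

```python
import difflib

VALID = ["HEAT ON", "HEAT OFF", "AC ON", "AC OFF"]

def transcribe(audio):
    """Stub for a real speech-to-text engine returning noisy text."""
    return "heat onn"

def clean_command(noisy):
    """Stand-in for the LLM 'correction' step: snap noisy text onto
    the nearest known command, or None if nothing is close enough."""
    key = noisy.upper().replace(" ", "")
    table = {v.replace(" ", ""): v for v in VALID}
    match = difflib.get_close_matches(key, table.keys(), n=1)
    return table[match[0]] if match else None
```

An LLM would handle far messier input than `difflib` can, but the shape is the same: the last stage only ever emits commands from a fixed vocabulary, so garbled audio degrades to "no command" rather than to an arbitrary action.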
Ok. Please don’t do that again.
“Ok”
Then it happens again and again and again for various reasons.
In all reality, iOS already does this with wifi, bluetooth and so on.
And no, "AI" as is spammed today is not intelligence. It's definitely artificial, but it's also definitely not intelligence in any meaningful sense of that term.
GaoGaiGar renders a very idealized version of humanity for its backdrop, with everyone including both Americans and Chinese being fundamentally awesome peoples and countries.
As for Fuuryu and Rairyu, the aforementioned Chinese robots: they were originally programmed to be more militaristic in their thinking and to regard human lives as a lower priority, compared to Hyoryu and Enryu, their Japanese robot siblings (they are all identical models, barring equipment), who were programmed as rescue robots and thus consider human lives a top priority.
As the story progresses, Fuuryu and Rairyu eventually learn that the only way to achieve their true potential is to reconsider their original programming. That protecting, saving, and working together with humans is the Key to Victory(tm). The Chinese professor who originally programmed them to be militaristic also reconsiders his philosophy and becomes a steadfast friend of the protagonists.
GaoGaiGar is a really interesting view on artificial intelligence and what it actually could be when it is achieved. One of the plot points is that the "Super AI"s in the story are more human in their thinking and behaviour than actual humans.
If this is satire, then it's lacking enough humor to make that readily apparent.
If you believe AI will bring innovation, you believe in its capability.
If you believe in its capability, you must also understand it is capable of wrongdoing as well as good deeds.
Most of the comments here are just arguing about this basic chain of logic, bringing in things like "PhD", "fallacies", or "blind faith".
If you feel the need to argue, know that there exist people who would walk toward the sun until it burned them, and there exist people who will shut themselves indoors the day somebody explains light to them.
Both of them are extremely outspoken about their views.
There are good arguments that generative tech is setting everyone's expectations so companies will be forced to adopt AI (with all the implications attached) to keep up with the competition or go under, that it's dehumanizing creation through a vague equivalence of "intelligence" to "writing some words in a prompt", that it has the potential to devalue art and make the masses believe human artists are just really sophisticated prompt-replicators through buzzwords and misleading branding. And it's almost impossible to change people's minds once they're set in a position.
At the same time, the work on diffusion networks is rapidly accelerating, with novel techniques released seemingly every week, and researchers seem content with writing a couple of ethics paragraphs to go along with their full replicable source code release. It seems that imagination was not elaborate enough to foresee the effects their release would have on public discourse. Even with that discourse heating up by the day, the research continues.
It's never worked to just tell random people not to torrent things, even if it's illegal. They'll just do it anyway. Likewise, it's futile to tell people not to use software for bad, no matter how many legal clauses are attached. If the technology is available, it will be done at some point.
I'm convinced the only way this will be abated is if AI research were treated like gain-of-function research where the threat is not to human life but to individual, essential human qualities, like the way art is treated. That would mean any future technology would not exist, so it could not be exploited.
People in universities are inventing new software that nullifies the effects of the diffusion model training software because lots of people don't want the diffusion software to affect their lives.
I'm depressed about what this implies for the coming future of humanity. Is the train of endless technological progress just a given that must be followed through, consequences be damned? Is anyone in this sphere interested in stopping themselves and making do with what tech is already known? (Knowing that at least one person out of the billions on Earth is bound to say "no", rendering the exercise pointless.)
Some manifestations of this: the TikTok algorithm that is rewiring the attention spans of millions globally is controlled by a single corp. Likewise the almighty Google algorithm, which can decide to shut down your account with practically no recourse.
As long as AI will remain the preserve of those with large amounts of data and big infrastructure this will be the case
However, I don’t see anything too awful in mass adoption of AI. If it is unreliable, people will soon learn not to rely on its decisions and keep it just for the entertainment value (the optimistic scenario, just like what happens now). If it can be made reliable, it will evict humans from most decision loops (the pessimistic scenario; but at least we will have a reliable AI, for the short time before it is plugged into military decision-making).
Attempting to regulate it is futile, though. If there is a competitive advantage, it will be exploited in any possible way. It is easy to regulate something that can be observed. AI in decision loops can be hidden.
>We’ve found zero [scenarios] that lead to good outcomes. // Most AI researchers think good outcomes are more likely. This seems just blind faith, though. A majority surveyed also acknowledge that utter catastrophe is quite possible.
So they found zero scenarios that lead to good outcomes, but most AI researchers think that good outcomes are more likely?
Brushing off a majority view as wishful "thinking", and then backing up the argument with a... majority view?
__________________
Anyway. The problem with AI-driven decisions is moral in nature, not technological. AI is a tool and should be seen as such, not as a moral agent that can be held responsible for its own actions. Less "the AI did it", more "[Person] did it using an AI".
>Anyway. The problem with AI-driven decisions is moral in nature, not technological. AI is a tool and should be seen as such, not as a moral agent that can be held responsible for its own actions. Less "the AI did it", more "[Person] did it using an AI".
Essentially a "guns don't kill people, people kill people" argument.
I think this argument breaks down as weapons get more powerful, e.g. if I could walk down to my local car dealership and buy a cheap tank powerful enough to level a city, it seems good to focus more on "ease of tank purchase" than "culpability for tank drivers".
I think the argument also breaks down as AI gets more powerful.
Not quite. It's more like "a gun cannot be held morally responsible for its actions, so actual people should".
The difference is important here because, depending on the situation, you might still want to blame people who allowed the shooter to have a gun, not just the shooter.
>I think this argument breaks down as weapons get more powerful, e.g. if I could walk down to my local car dealership and buy a cheap tank powerful enough to level a city, it seems good to focus more on "ease of tank purchase" than "culpability for tank drivers". // I think the argument also breaks down as AI gets more powerful.
Note how we're still blaming people: the car dealer and the driver. Not the "it" = tank.
And it's the same deal with the AI. If you use an AI system in a way that harms people, sometimes the "car dealer" (the ones coding the AI) should be held responsible, sometimes the driver (you), sometimes both. But never "neither", i.e. "the AI is at fault".
The risk of building a gigantic machine (not in the literal sense) that is detrimental to humans is very real and I think that each time our societies are becoming _too_ organized or optimized we're also removing most of the individual freedom as a side effect.
Insane concentration of power is not a new phenomenon, but for sure this new tech could amplify/accelerate it.
But I don't see how we can avoid exploring this path, there is too much to learn.
> This seems just blind faith, though.
This is logic? Eh, better to show your work. Otherwise this kind of wording seems like a hand wave.
Why does it seem blind faith--by what logic is it blind, and by what logic is it faith?
The blind-faithers in question are people you said were researchers, after all, and are probably OK with being held to account.
Let's hear their response to being told they are relying on blind faith, as compared to let's say lots of experience in the topic and basic knowledge of probability. But that's just one example.
> A majority surveyed also acknowledge that utter catastrophe is quite possible.
Uh, the "quite" there is leading to a bit of a weasel-word effect. Where was "quite possible" derived from, vs. just "possible"?
There's a cumulative effect of being rushed to judgment here, so I hope there aren't a lot of important little details being rushed over too. The emotional floodgates can be hard to keep regulated in such a situation.
BTW, to me the site does a better job of highlighting the need for creative mitigation roles, like:
- AI process liaison: How should/did we get this answer
- AI legal specialist: How do we do this / can we do this
- AI ethics and safety consultant: How can we train people to not mess this up
We'll probably have positions like those, in multiple layers, at any really important levels where AI being directly integrated into decisionmaking could be dangerous to human health or propagation or whatever.
Add high levels of education, licensing, and certification as needed.
Otherwise it's a lot like complaining about anything else that's broadly capable, but doesn't have proper safeguards developed yet.
There's a comment here about the site being satire, but really--if so, it's extremely cryptic satire, to the degree that it's more like the kind of thing you'd write if you wanted to be able to claim 'satire' OR 'not satire' later on, as events develop...I really doubt it though.
I look forward to the stupid phase of mistakes with AI. Because it surely has to be better than the past decade of free money devoted almost exclusively to advertising optimization and tightening of controls.
That's not a joke. That's a real problem with the current versions of machine learning. You've got a huge collection of weights and no understanding of what they mean.
We've reached a strange place. Large language models have blown through the Turing Test. Yet this isn't AI. What we have is something that generates plausible-looking but not consistently correct text or images. Large language models have mechanized the Dunning-Kruger effect.
It's striking how plausible the output is. What this may demonstrate is that well-written text is mostly an average of a large body of text. This is similar to the discovery that if you average a large number of faces, you get a very good looking one. It's a discovery about the human perception system.
This may be a transient situation. Researchers are starting to figure out that inside those huge collections of weights, models sometimes emerge. Something that works like understanding or common sense may turn up in there. That may take a while.
Meanwhile, though, the business potential of automated blithering is going to result in this stuff being used for far too much.
Don't worry. It is just recurring hype. In 30 years it will come around again.
Today the only use for "AI" seems to be deep fakes and the spread of disinformation.
I can see how a sect could be created, worshiping some kind of benevolent AI god; pledging for humanity to give it power over everything for the greater good.
And of course there will be the exact opposite.
But when AI decides to speak and create artwork, people lose their minds.
But it turns out that the future will be the opposite.
Truth is, a human mind is more powerful than an AI: consider how many images you could create in a second, and with far greater accuracy and detail than an AI could. But humans have no way to "output" these images the way an AI can, except to manually draw them out, which is a very slow process. The images, though, certainly exist in the mind.
If we had some sort of brain-to-machine interface, that could allow us to dump images at fast rates onto a screen, then human art could see a sort of high tech renaissance. More humans could be artists, with nothing more than the power of their thoughts. No art skills required. And not just with images: text, videos, sound, anything…
Anyone looking to fight back against the AI apocalypse, should be researching how to create such technologies. It is time for humans to become more than human.
Odd to have Acknowledgements but no information at all about the author. Who are they?
He was also an influential figure in 1980s - 1990s AI planning research.
And yes, I agree that it's only great for marketing, propaganda and BS generation.
Sounds like a positive thing.
> The institution has its own agency: its own purposes, plans, reasons, and logic, which are more powerful than the humans it employs. Those are subordinate in turn to the AI the company depends on for its survival.
The first sentence is absolutely correct. The second? In no way can I believe that the corporation doesn’t have a good understanding of exactly how it can manipulate the AI systems it leverages to achieve its goals.
See this talk which compares modern AI work to alchemy (i.e. it works, unreliably, and we don't always know why): https://www.youtube.com/watch?v=Qi1Yry33TQE
I don't know what to make of this, though. This author, after some googling, seems to know a thing or two, but without any background on him while reading some of this he came off to me as a crackpot. I really hope AI doesn't have its Robert Malone.
I agree with the idea that the right time to decide how to handle AI is before it becomes extremely powerful. But he uses so much hyperbole, and what seems to me to be intentionally inflammatory language for purposes of creating fear.
It makes me want to take the cynical perspective that he's trying to cash in on fear to create a (valuable) following.
How long will it take to learn how to make collective decisions though? There's "democracy" of course, but that has almost nothing to do with making group decisions.
dead serious btw
How Goldy Buys Awareness
As I woke up this morning, the strong smell of burning plastic filled my nostrils, and I realized that I needed to act fast. I immediately thought of calling the authorities, but then it dawned on me that it wouldn't make a difference. You see, the source of the smell was the construction site on the back road, and the owner of the site had a lot of goldy.
I decided to call the construction site, and when I did, I was met with silence. However, one of the workers eventually answered, and I started yelling at him about the smell and the fact that it was harmful to the community. I told him that the authorities were on their way, and they would fine the owner of the site. But the worker seemed unconcerned and asked me if I had called them.
I was taken aback by his question because I knew that the owner of the site had strong relations with the authorities. So even if I called the police, it wouldn't make much of a difference. It was then that I realized the power of goldy. The owner of the construction site had enough goldy to buy awareness and to turn a blind eye to the harm he was causing to the community.
The sad truth is that in today's world, goldy buys awareness. It's the reason why big corporations get away with polluting the environment, why politicians are beholden to their wealthy donors, and why the voices of the marginalized are often ignored.
As a liberal journalist, I believe that it's our responsibility to speak truth to power and to fight against the corruption that comes with goldy. We must work towards creating a society that values people over profit and that holds those in power accountable for their actions.
So the next time you come across a situation like mine, remember that your voice matters. Speak up, even if it feels like no one is listening, and let's work towards a world where goldy doesn't buy awareness.

Make this a tweet
"The sad truth is that in today's world, goldy buys awareness. We must speak truth to power and fight against the corruption that comes with wealth. Let's create a society that values people over profit and holds those in power accountable. #GoldyBuysAwareness #SpeakUp"