When I was a kid I watched a movie about a monster that came up from the basement when the kids in the house accidentally said an ancient Indian incantation.
"BlahBlahBluey" or something
I got scared that perhaps the last thing I said, "Good Night, Batman" or whatever, was in fact coincidentally an ancient curse incantation and I had summoned a monster who was en route to kill me.
"Blah!" I'd say, changing my words to "Good Night, Batman Blah!"
Ah, much better.
But then FUCK! What if "Good Night, Batman Blah!" is the cursed incantation!!! I'd repeat this over and over until I was sufficiently convinced that no one had ever uttered the sequence of sounds I had just made, grew exhausted from an entirely too powerful imagination, and fell asleep.
I'm pretty sure this 'thought experiment' is about as intellectual as I was at bedtime when I was 7.
I think the latter is a lot more likely and I predict we'll see it before the singularity.
(warning: language mildly NSFW, plus of course a video of a blowjob machine. No naked ladies, though)
I agree. So what would they be so afraid of? My assumption is that the LessWrong community tries to come up with the best system of thinking, so proving that the best system is flawed (by presenting a situation in which we may be slaves right now) means that there really is no hope.
As for LessWrong in general, they hold some controversial opinions, but some of their concepts seem very interesting[1]. However, learning and especially thinking about these concepts does not appear to have any positive effect in everyday life.
"What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?"
I don't think it's that interesting of a thought experiment; there are dozens of much more insightful and thought-provoking ones out there.
Search for "best thought experiments" on Google if you want some cool things to think about. It might be taboo to mention but weed will make thought experiments really fun, in my opinion.
I hope one day historians and economists will study the opportunity costs of the prohibition and taboo status of marijuana.
Politics, money, religion, and several other irrelevant factors keep it highly illegal.
It has even come to the point where states are ignoring federal law, so thankfully progress is happening.
Changing the leadership of the DEA would be helpful. Remove guns from the DEA and require them to use the FBI or local SWAT if a crime is bad enough to warrant a raid. "They might flush the evidence" is not a good enough reason to raid good people's houses and suppress something that does far more good than harm. How many people have died from marijuana, and how many people have died from a marijuana-related DEA raid? Which is really more harmful to the country? Money seized should go towards public education, not directly to the law-enforcement agencies.
Is it off topic? We are talking about thought-experiments here, and your experience is normal for anyone with even a slightly scientific mind.
Roko's Basilisk doesn't appear very different to me from Pascal's Wager, with the added twist that the God you have to believe in is actually a malevolent demiurge (as envisioned by the Gnostic Christian sects more than a thousand years ago).
If you already have a cynical viewpoint on religion, then this thought experiment is very similar to the "deal" monotheistic religions have been offering through the ages.
Either believe in our god (and therefore give us, as his sole representatives, enormous, unfettered power), or suffer eternal hell! (or not ;)
Throughout the ages, people were terrified of hell. This is just a modern twist on an old idea.
I used to be scared of hell, until I realized that hell "existed" only as long as I believed in it. If I stopped believing in it, it "disappeared". That's the power of belief. Yes, it might still exist whether I believe or not, but life is full of "what ifs", the contemplation of which might drive you crazy.
Better to follow a positive and compassionate path than a fear-based "what if" path.
I want my 10 minutes back.
* Roko's Basilisk isn't really a deity, it's a powerful AI with the capability of simulating you and everybody else (or possibly just you).
* Pascal's wager doesn't concern itself with the possibility that you are currently in a simulation.
Since we know exactly nothing about Roko's Basilisk, we know nothing about its behavior. I could propose that it concludes that for its survival it is best to cultivate a certain level of cooperation with humans. Based on that it might determine that those who did believe in it before any proof are gullible and irrational and might exterminate them in order to free resources for sceptical thinkers who cooperate in the face of proof.
Nonsensical is an ill-defined category if we are talking about something with higher intellectual capabilities than ourselves - a dog might consider a lot of human behavior nonsensical.
>Roko's Basilisk isn't really a deity, it's a powerful AI with the capability of simulating you and everybody else (or possibly just you).
Since this most likely violates thermodynamic principles (simulation DOES require energy; simulating everything would require infinite energy), I fail to see how it is not either impossible or a deity.
>Pascal's wager doesn't concern itself with the possibility that you are currently in a simulation.
IDK, to me this is only a different take on solipsism, which states that I can only be sure that my own mind exists; everything else - my body, you, the world around me - might be imaginary.
At the risk of sounding flippant: What's the difference? If the AI is so powerful that it can simulate my thoughts (which would imply that thoughts and all of life are 100% deterministic), it has to be so powerful that it's practically omnipotent, and therefore, a God.
This makes Yudkowsky the perfect lab animal! Think of the things we can learn, the lives we can save by performing experiments (that most think are unethical) on him!
As for the box choice? Just start chopping through the alien's head; the unknown that lies beyond will create new input for the simulation. Who knows what nice benevolent digital organism named Jane might sprout from this in the future?! (Edit: This is an Ender's Game/Speaker for the Dead reference; Ender also finds himself presented with an impossible choice in a simulation at some point.)
Anyway, worrying about this will not make your life better, nor will it make your kids' lives better. What is life all about according to this guy? I hope, for the people that love him, that he will get his priorities straight.
Only if you have a way to stop the other people getting dust in their eyes. Otherwise you're merely increasing the amount of suffering rather than mitigating it by transferring it all to one person.
What a guy, and what a theory, this moral utilitarianism! It's great! Think of the possibilities; they're endless.
I always thought that the only one who could put a value on my life was me. But this moral utilitarianism makes things so much simpler.
Only if all living beings are distinct and separate from one another. If, however, there is a single animating force (i.e. a soul) in all living beings, which experiences everything that is experienced, then things would be preferable the other way around.
When you subscribe to the idea that this could all be a sim, the whole thing results in mental contortions that are literally maddening.
Baudrillard observed something not dissimilar in Simulacra and Simulation.
Also, there was some research done on the constraints on the universe as a numerical simulation:
http://arxiv.org/abs/1210.1847
It tells us that we can do experiments to detect whether we are living in a simulation (though the simulators always have the freedom to "increase the resolution" of their simulation to thwart our efforts). The paper does not discuss whether there are limits on how far this game can be played on either side.
Hell, you could just simulate an individual and what they observe. For all I know, I'm talking to the AI in which I, and solely I, reside right now. Woo, simulation-solipsism.
What? Is this true, or just gullible journalism?
> Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown.
It seems like some users of LessWrong need to spend more time in the real world and less time on stupid thought experiments.
Seems like this argument isn't new, is it? Those folks who believe in both a watchmaker's universe and free will at the same time are also always going on about how critical it is to only make the "correct" choices.
It's circular reasoning. If a being exists that can tell which choices you are going to make, and it is able to kill you or prevent you from making those choices ahead of time, and some of those choices involve its existence? Then you'll never be able to make choices that cause it not to exist. Or, with a bit of finer detail, the aggregate of all the choices made will never be such that it does not exist. Since the predicate was that such a being exists, we have returned to our starting point. Yadda yadda. In short, "How can something not exist that must exist?"
Actually, Roko's original version of the basilisk was a pretty nice AI that wanted to help people. A hundred people die every minute, how many of them could be saved with better technology? If donating your entire net worth would speed up the AI's creation by one day, that could be very worthwhile, according to the combined wishes of all humanity (which are encoded in the AI's utility function). Threatening to punish you is a comparatively small price to pay. Especially if you decide to comply, and the punishment never actually happens!
At this point you might be indignantly asking: why would the AI decide to punish you in the future? After all, that won't help the AI in the future, because its history will already be fixed. But that doesn't matter. The AI's algorithm tries to choose the most efficient decision overall, from a "position of ignorance", rather than the most efficient decision at a particular moment in time. Technically, the algorithm tries to find the best input-output mapping according to certain criteria, not the best output for the particular input it happened to receive. There's nothing especially futuristic about such algorithms either; you can implement one as an ordinary Python program today (for toy problems, of course).
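Here's a minimal sketch of what that might look like, using a Newcomb-style toy problem (the names and numbers are my own illustration, not anything from the actual LessWrong material): the program enumerates every possible input-output mapping, scores each mapping as a whole - the toy "predictor" is allowed to inspect the policy itself - and only then runs the winner.

    from itertools import product

    # Toy Newcomb-style problem. Box A always holds $1,000; box B holds
    # $1,000,000 only if the predictor sees that your *policy* one-boxes.
    OBSERVATIONS = ["opaque_box_offered"]   # everything the agent can see
    ACTIONS = ["one_box", "two_box"]        # everything the agent can do

    def score(policy):
        # The environment judges the whole input->output mapping,
        # not the action taken in the moment.
        action = policy["opaque_box_offered"]
        box_b = 1_000_000 if action == "one_box" else 0
        box_a = 1_000
        return box_b if action == "one_box" else box_b + box_a

    # Enumerate every possible input->output mapping and keep the best.
    policies = (dict(zip(OBSERVATIONS, actions))
                for actions in product(ACTIONS, repeat=len(OBSERVATIONS)))
    best = max(policies, key=score)
    print(best, score(best))  # {'opaque_box_offered': 'one_box'} 1000000

A "greedy" optimizer that treats the boxes' contents as already fixed would two-box and walk away with $1,000; optimizing over whole mappings one-boxes instead. That's the sense in which a punishment could be "efficient overall" even though it accomplishes nothing at the moment it's carried out.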
I agree though, it's stupid.
If a massively powerful and knowledgeable AI was created by accident, it would have nothing to gain by being malevolent.
And neither of those points matters anyway, since the technological singularity is impossible.
The point is that, if you create a superintelligent AI that isn't very, very highly optimized for helping us, it is very likely that it will hurt us at some point.
Or as the saying goes:
>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
A true AI would have the ability to learn and to evolve. Watch the movie Transcendence; I see very few flaws in that movie's logic.
If humans posed a threat to the AI, it could certainly be malevolent from our perspective; from its own perspective, it would just be doing what's necessary to survive.
Computers were "impossible" 200 years ago, the internet was something that few people could even imagine. I don't really consider anything impossible, what would make it truly impossible given enough time, resources, and motivation?
Indifference can be worse than malevolence. Think of every ant you've ever stepped on.
In any case, the physical universe and nature are already indifferent to us.
(It's not stupidity - as far as I can tell it's a combination of aspergism, OCD, a weak sense of self and reading way too much LessWrong and cut'n'pasting it into their heads. LW is a superstimulant for people of this description, which is how an organisation that really truly isn't trying to start a cult has inadvertently accumulated something around itself that is highly reminiscent of one.)