If I have $400, I can't afford to take 1-in-1,000,000 risks that cost $200 each. Whatever the payoff, I will go bankrupt with overwhelming likelihood. There is a minimum cutoff, involving cost and probability, below which it does not make sense to take up the opportunity.
There are links to similar theoretical ideas from the Pascal's Mugging wiki page - although from the casino's perspective rather than the gambler's - https://en.wikipedia.org/wiki/St._Petersburg_paradox#Finite_... and then https://en.wikipedia.org/wiki/Gambler%27s_ruin for example.
Most people will not take a 99% risk of going bankrupt in a game that will consume all their resource reserves; expected value as a statistic does not meaningfully capture the risk. Positive expectation, losing strategy.
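A quick Monte Carlo sketch of that, in Python (the jackpot probability and payoff are invented for illustration; what matters is that the stake is half the bankroll):

```python
import random

def ruin_probability(trials=100_000, bankroll=400, cost=200,
                     p_win=1e-6, payoff=1_000_000_000):
    # Each bet has positive expected value: 1e-6 * 1e9 - 200 = +800.
    # But a $400 bankroll only survives two losing bets.
    ruined = 0
    for _ in range(trials):
        money = bankroll
        while money >= cost:
            money -= cost
            if random.random() < p_win:
                money += payoff
                break  # hit the jackpot; walk away
        else:
            ruined += 1  # loop ended because we could no longer pay
    return ruined / trials

print(ruin_probability())  # ~0.999998: positive expectation, losing strategy
```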
This particular situation, of someone just pulling "it could be true!" out of their arse, can also be solved by framing things as "the more utility you claim, the less likely it seems and disproportionately so".
I.e., if the chance of getting X from the scoundrel is less than 1/(C*X^2), then the expected value of any single offer is below 1/(C*X), which shrinks as the claimed payout grows; penalise slightly more steeply (say 1/(C*X^(2+eps))) and even the sum of all the offers together winds up very small.
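A toy calculation of that penalty (C is an arbitrary, made-up scepticism constant; only the shape matters):

```python
C = 100.0  # arbitrary positive constant

for claimed in (10, 10**3, 10**6, 10**12):
    credence = 1 / (C * claimed**2)  # penalise big claims quadratically
    ev = claimed * credence          # works out to 1 / (C * claimed)
    print(f"claimed payout {claimed:>16,}: EV of paying up = {ev:.3e}")
```

The bigger the claimed utility, the less the offer is worth taking.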
The mistake is accepting uncritically that expected value is the best metric to optimise. Nobody ever proved that expected value is a strategically superior metric. In fact it would be quite hard to prove, since it is not true. It leaves people vulnerable to making very stupid decisions, as illustrated by Pascal's Mugging.
The optimum strategy involves, at a minimum, considering both your available opportunities and your available resources. Opportunity alone is not enough.
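One standard way to make "resources matter" concrete is the Kelly criterion, which maximises long-run bankroll growth instead of per-bet expected value. A minimal sketch, reusing the $400 scenario from upthread (the $1e9 payoff is invented to make the bet's expectation positive):

```python
def kelly_fraction(p_win, net_odds):
    """Kelly stake as a fraction of bankroll for a bet paying net_odds-to-1."""
    return (net_odds * p_win - (1 - p_win)) / net_odds

# A 1-in-a-million shot at $1e9 for a $200 stake is roughly 5,000,000-to-1.
f = kelly_fraction(1e-6, 5_000_000)
print(f)        # ~8e-7 of the bankroll
print(f * 400)  # ~$0.0003 -- nowhere near the $200 being asked
```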
Also, I don't find the Pascal's Mugger example convincing, as the probability that the mugger will return with the money is inversely proportional to the multiple they are promising (for very large multiples this is because they have finite resources, but even at lower multiples it intuitively feels true).
That can't be reasonably estimated, though. Putting aside the fact that we can't really assert the relation you posit, there is also some tiny-but-positive probability that the mugger is some sort of Illuminati member with the ability to create an arbitrary amount of money.
At that point, the expected return can be made large compared to the probability that the mugger is lying.
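A sketch of why that floor defeats the "inversely proportional" defence (p_floor is an invented stand-in for the Illuminati scenario):

```python
# However small the fixed credence that the mugger can conjure arbitrary
# money, he can name a payout X large enough that p_floor * X beats the fee.
p_floor = 1e-15  # made-up tiny-but-positive probability
fee = 5.0

X = 2 * fee / p_floor  # the mugger simply promises this much
print(p_floor * X)     # 10.0 > fee, no matter how small p_floor is
```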
His Facebook post was also discussed in detail on Hacker News here: https://news.ycombinator.com/item?id=21530860
Sometimes you wonder about various things but don't look them up or act on them in the moment, and then some time later your subconscious mind serves up an answer. Perhaps when you are more relaxed you simply think of it, or it happens to come up in a certain context and the subconscious lights it up there: a word you see randomly in a newspaper, or thinking of a person who was connected to it when you met them, etc.
And here the subconscious did the same wondrous thing, except it wasn't even strictly my personal subconscious; it was the group subconscious that found the information and presented it.
In that case, I have to wonder where this estimate comes from. I get that it's just an arbitrary number and that any number would do, as long as it isn't zero, but that's exactly the point: why can't Pascal place the probability of his mugger being an Operator from the Seventh Dimension at zero?
Is there any evidence at all to support the mugger's claim? Is there any evidence at all that there is such a thing as a "Seventh Dimension" for which the only thing we know is that its "Operators" have magickal, utility-maximising powers?
And does the whole thing only work if we assume that the probability that there is such a place and such people is more than 0?
1: https://www.lesswrong.com/posts/Ap4KfkHyxjYPDiqh2/pascal-s-m...
> A wind begins to blow about the alley, whipping the Mugger's loose clothes about him as they shift from ill-fitting shirt and jeans into robes of infinite blackness, within whose depths tiny galaxies and stranger things seem to twinkle. In the sky above, a gap edged by blue fire opens with a horrendous tearing sound - you can hear people on the nearby street yelling in sudden shock and terror, implying that they can see it too - and displays the image of the Mugger himself, wearing the same robes that now adorn his body, seated before a keyboard and a monitor.
> [...] "Unfortunately, you haven't offered me enough evidence," you explain.
* Helping a googolplex of people immediately vs over a period of time are two different complexities of action.
* Recall that hypotheses are selected from an ambient pool of possibilities. Then we might imagine that some hypotheses dominate others, so that regardless of how much evidence is offered, we always insist that the evidence supports a simpler alternative. To wit:
"Well, if I'm not a Matrix Lord, then how do you explain my amazing powers?" asks the Mugger.
"Street magic," you say. "Very impressive sleight of hand. Perhaps some smoke, mirrors, lasers, assistants."
* A Matrix Lord asking $5 of a person on the street in order to commit miracles is inherently irrational. If they just wanted $5, or wanted to deprive the person of $5, or wanted to humiliate and embarrass the person, or force them to accept certain philosophical truths, then those all could be achieved via Matrix Lordery. Therefore the Lord in this story is being a pointless dick, and it's silly to expect rational arguments to be part of the conversation. To wit:
"Just give yourself $5. Give yourself any reward you like, for helping people; it's not my place to set or fulfill the price of such powerful entities, is it?" you ask.
"But...but don't you want the feeling of doing good?" asks the Mugger.
"Not really, no," you reply. "I have investments and equity already, and those dollars already have ripples that affect people far beyond my direct control. I don't feel much of anything about those investments. And it would be irrational for me to value a $5 investment more than $5. Really, if you can do all of this good, then you should turn yourself into an exchange-traded fund, and let people buy your time to do good in the world," you muse.
"But...but this offer is for you, and you alone," the Mugger insists.
"Okay, but why me? Let's talk about the Self-Sampling Assumption!" you say. The Mugger groans.
The point is that, at the time when the Mugger declares himself to be an Operator from the Seventh Dimension who can offer large rewards etc., there is no evidence to suggest he's telling the truth. No evidence at all. Accordingly, the probability that he's telling the truth must be zero. Where does a non-zero probability value come from?
Are you then saying that the probability of any reward should never be set to zero, because that would not maximise rewards?
Maybe I am not well-versed in Bayesian thinking, but I am unable to understand assigning probabilities to events that have never occurred before and for which there is no related numerical data.
Treating the probability in Pascal's scenario as undefined renders any calculation of the risk involved null and solves the problem, while still allowing future evidence to assign a defined probability (say you were previously approached by 10 Pascal's muggers and 2 turned out to be telling the truth).
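A minimal sketch of what that later update could look like, using the hypothetical track record from the parenthetical:

```python
truthful, total = 2, 10  # hypothetical: 2 of 10 past muggers paid up

mle = truthful / total                  # plain frequency estimate: 0.20
laplace = (truthful + 1) / (total + 2)  # Laplace's rule of succession: 0.25
print(mle, laplace)  # defined probabilities, grounded in observed events
```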
For example, at this point in time I believe that the probability that I can fly if I flap my arms up and down is zero. I have no evidence that this is possible and I understand enough of the relevant physics to know that this is not just improbable, it is impossible.
However, if tomorrow I flapped my arms and found that I could fly, there would be nothing stopping me from re-evaluating my belief and assigning a higher probability to the chance that I can fly if I flap my arms.
But I think the problems begin with the misguided ambition to predict the future even when there is no evidence to support any prediction. You can't know what you can't know. You can assign any probability you like to what you can't know, but even if you end up assigning the right probability, that will be the result of blind chance, not of correct reasoning.
Anyway, this is why I prefer logical inference to probabilistic inference. I understand that I'm in a minority on this, but for me it makes a lot more sense to maintain a state of provisional belief with an absolute value (in {0,1}), provisional in the sense that new evidence can always change your belief, than to live in a perpetual state of uncertainty which never resolves itself no matter how much evidence you see, because there is always a chance that you're wrong. There always _is_ a chance that you're wrong, but it just seems cumbersome to maintain a ledger of competing probabilities for everything that has happened, and everything that hasn't happened yet, just on the off chance that anything can happen, including mutually exclusive events.
In principle, anything might happen. In practice, not everything will. There must be a sensible way to figure out what we need to prepare for and what we can safely ignore. And the whole Pascal's Mugger paradox, while it's meant to attack the logic of Pascal's Wager, ends up for me as an illustration of why probabilistic inference is deeply borked.
Given Bostrom's general love for mixing tiny probabilities with enormous outcomes, what's the point of this article? It seems delightfully self-critical. How can the conclusion be anything other than that we should not take the AI/singularity crowd too seriously, as doing so would be akin to voluntarily handing over a wallet to a mugger?
It is just a philosophical story created to exhibit a certain line of reasoning, a certain possible structure for an argument. People are free to apply this argument however they want; it doesn't prove anything by itself, it doesn't say anything about the world, and it doesn't draw any conclusions. It's just a story, and it's up to the user of it. Carmack used it to illustrate his own beliefs (which are therefore: AI is possible and the payout for AI is extremely high, even if the probability of it arriving within the next couple of years is low).
Carmack did not mean that you should believe or disbelieve in AI or anything else based on this argument. He just used it to illustrate what he himself chose to do. He did not base that choice on this argument alone; he did not just hear the argument and suddenly decide "now, because of that, I have to work on AI". He based it on his experience and knowledge of the actual field. The mugging argument is just a cute way of quickly explaining it.
> I think he's smarter than me, so what am I missing?
Wisdom and context.
Just taking this seriously pretty much resolves all these problems.
It makes sense to take high-cost, high-reward, low-probability events seriously if the expected utility works out. Examples include reducing existential risk substantially, even at a cost to short-term utility (say, in the form of well-being), to increase the probability that we can eventually figure out how to optimally arrange matter and energy and trigger a utilitronium shockwave.
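As a toy version of that trade-off (every number below is invented purely to show the shape of the calculation):

```python
far_future_utility = 1e50  # utility if the long-term outcome goes well
delta_p = 1e-10            # hypothetical bump in that outcome's probability
short_term_cost = 1e6      # well-being sacrificed now

ev_gain = delta_p * far_future_utility - short_term_cost
print(ev_gain)  # ~1e40: the far-future term swamps any near-term cost
```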
I see a similar problem here. Pascal should consider the possibility that the mugger will use his (Pascal's) silly action as the basis for punishment, in place of the promised reward. Slim probability, potentially very high cost. The fact that this risk has gone unstated doesn't mean it isn't there.
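In expected-value terms, an equally unfounded punishment claim cancels the promised reward exactly, leaving only the certain loss of the wallet (all numbers invented):

```python
p_reward, reward = 1e-15, 1e20      # made-up credence and payout for the promise
p_punish, punishment = 1e-15, 1e20  # the same for the unstated punishment
fee = 5.0

print(p_reward * reward - p_punish * punishment - fee)  # -5.0: just the fee, lost
```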