The premise is that the predictor is always right. So whether you take one or both boxes, the predictor will have predicted that choice. We know from the setup that if the predictor predicted you would take only the one box, it will contain a million dollars. Therefore, if you take the one box it will have a million dollars in it (because whatever you choose is what the predictor predicted).
As an aside, I think whatever this says about free will, or whether you're actually making a "choice", is irrelevant to whether the million dollars is in the box. The way I see both choices is this:
You "decide" to take both boxes -> the perfect predictor predicted this -> the opaque box has zero dollars -> you get a thousand dollars
You "decide" to take the opaque (one) box -> the perfect predictor predicted this -> the opaque box has a million dollars -> you get a million dollars
If you want to consider the version of this where the predictor is almost perfect instead of truly perfect, I don't think that changes anything. Say it's 99% accurate or even 90% accurate.
You take the opaque box -> the predictor has a 90% chance of predicting this -> it follows that there's a 90% chance that the box has a million dollars -> you have a 90% chance of getting a million dollars
Had you picked both boxes, you'd have a 90% chance of not getting the million.
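To put numbers on that, here's a quick expected-value sketch in Python (amounts are the standard hypothetical ones: $1,000,000 in the opaque box, $1,000 in the transparent one):

```python
# Expected payoff of each strategy given predictor accuracy p.
# Amounts are the standard hypothetical ones from the setup.
def expected_value(strategy, p, big=1_000_000, small=1_000):
    if strategy == "one-box":
        # With probability p the predictor foresaw one-boxing and filled the box.
        return p * big
    # With probability p the predictor foresaw two-boxing and left it empty.
    return p * small + (1 - p) * (big + small)

for p in (1.0, 0.99, 0.9, 0.8):
    print(p, expected_value("one-box", p), expected_value("two-box", p))
```

Even at 80% accuracy, one-boxing expects $800,000 against roughly $201,000 for two-boxing.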
As near as I can tell, it boils down to this: no matter what the predictor has chosen, once you walk into that room, there's more money in both boxes than there is in one box.
But it feels like half an analysis: it focuses solely on what you decide, while ignoring the fact that the other side is deciding based on what they think you'll decide.
Maybe that's me being unfair, because I'm a solid one boxer.
I also disagree with the linked article. I don't think it matters at all how the predictor makes their decision, because the outcome doesn't really change whether it's 100% accurate or 99% accurate. Or even, like, 80% accurate. There's no magic required for the experiment to work.
There's something vaguely similar to the fallacy in proposed (Cooperate, Cooperate) solutions to the Prisoner's Dilemma. The argument goes as follows: (1) if we're both rational agents and we have the same information and the same payoffs, we will make the same choice; (2) therefore, (Cooperate, Defect) and (Defect, Cooperate) are out of the question; (3) therefore, the only options are (Defect, Defect) and (Cooperate, Cooperate); (4) so I should Cooperate, since it gives the better payoff. It seems to follow logically, but (1) and (2) are problematic: you can't assume symmetrical solutions and thereby eliminate asymmetrical outcomes, because that is essentially the same as saying "what I choose causally affects what my opponent chooses".
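To see where step (2) goes wrong, here's the usual (hypothetical) payoff matrix; whatever the opponent does, Defect pays more, and that asymmetric comparison is exactly what the argument throws away:

```python
# Row player's payoffs in a conventional Prisoner's Dilemma
# (the usual textbook numbers; higher is better).
payoff = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

for their_move in ("C", "D"):
    mine_c = payoff[("C", their_move)]
    mine_d = payoff[("D", their_move)]
    print(f"opponent plays {their_move}: C -> {mine_c}, D -> {mine_d}")
# D beats C in both rows; discarding (C,D) and (D,C) in step (2)
# is what hides this dominance.
```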
In the same way, one-boxing is irrational (for this argument, anyway; I'm undecided myself) because the prediction has already been made, and so your choice to one-box or two-box cannot have any causal relevance to the contents of the boxes. Even a perfect predictor cannot invert the flow of causality.
So what you really want is to be in a state that will make you choose one box, and you want to already be in that state at the time the predictor makes its prediction, because the predictor will see this and place the million dollars into the second box. And as we have already said, you cannot choose to take two boxes afterwards, as that would contradict the existence of the predictor.
I do not think that allowing some prediction error fundamentally changes this; it only means that sometimes the choice depends on unpredictable true randomness, or the predictor did not measure the relevant state of the universe exactly enough, or the prediction algorithm is not flawless. But if the predictor still arrives at the correct prediction most of the time, then most of the time you do not have a choice, and most of the time the choice does not depend on true randomness.
Which also renders the entire paradox somewhat moot, because there is no choice for you to make. The existence of a good predictor and the ability to make a choice after the prediction are incompatible, barring wild time-travel scenarios and things like that.
Not quite. You did choose your decision-making methods at some point in your life, and you could have changed them multiple times before you arrived at the setup of Newcomb's paradox. If we treat your past life as a variable in the problem, then changing this variable changes the outcome: it changes the prediction made by the predictor.
> The existence of a flawless predictor means that you do not have a choice after the predictor made its prediction
I believe that if your definition of choice stops working once we assume a deterministic Universe, then you need a better definition of choice. In a deterministic Universe it becomes glaringly obvious that the whole framework of free will and choice is just an abstraction, one that abstracts away details that are not really needed to make a decision.
Moreover, I think I can hint at how to deal with it: relativity. Different observers cannot agree on whether an observed agent has free will or not. Accept that as fundamental, the way relativity accepts that universal time doesn't exist, and all the logical paradoxes go away.
Indeed, I think of concepts like "agency", "choice", "free will", etc. as aspects of a particular sort of scientific model. That sort of model can make good predictions about people, organisations, etc. which would be intractable to many other approaches. It can also be useful in situations that we have more sophisticated models for, e.g. treating a physical system as "wanting" to minimise its energy can give a reasonable prediction of its behaviour very quickly.
That sort of model has also been applied to systems where its predictive powers aren't very good; e.g. modelling weather, agriculture, etc. as being determined by some "will of the gods", and attempting to infer the desires of those gods based on their observed "choices".
It baffles me that some people think a model of this sort might have any relevance at a fundamental level.
For that reason I strongly disagree with the compatibilist view: language is defined by use, and most people act in ways that clearly signal a non-compatibilist view of free will.
But also you’re right that even a pretty good (but not perfect) predictor doesn’t change the scenario.
What I find interesting is to change the amounts. If the open box has $0.01 instead of $1,000, you're not thinking "at least I got something"; you just one-box.
But if both boxes contain equal amounts, or you swap the amounts in each box, two-boxing is always better.
All that to say: the idea that the right strategy here is to "be the kind of person who one-boxes" isn't a universal virtue. If the amounts change, the virtues change.
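A quick way to see it: with predictor accuracy p, one-boxing has expected value p * big, while two-boxing has small + (1 - p) * big, so one-boxing wins exactly when (2p - 1) * big > small. A sketch with hypothetical numbers:

```python
# One-boxing beats two-boxing exactly when (2p - 1) * big > small,
# where big is the opaque-box amount and small the open-box amount.
def one_boxing_wins(p, big, small):
    return (2 * p - 1) * big > small

print(one_boxing_wins(0.99, 1_000_000, 1_000))   # True: standard setup
print(one_boxing_wins(0.99, 1_000_000, 0.01))    # True: a penny in the open box
print(one_boxing_wins(0.99, 1_000, 1_000))       # False: equal amounts
print(one_boxing_wins(0.99, 1_000, 1_000_000))   # False: amounts swapped
```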
No, it does not. Replace the human with a computer entering the room, the predictor analyzes the computer and the software running on the computer when it enters. If the decision program does not query a hardware random source or some stray cosmic particle changes the choice, the predictor could perfectly predict the choice just by accurately enough emulating the computer. If the program makes any use of external inputs, say the image from an attached webcam, the predictor also needs to know those inputs well enough. The same could, at least in principle, work for humans.
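A minimal sketch of that idea (everything here is hypothetical): if the choice is a pure function of readable state and inputs, the predictor just runs the same function first.

```python
# Hypothetical sketch: a deterministic agent and a predictor that
# emulates it. The predictor is right as long as the decision rule
# consults nothing it can't read (no hardware RNG, no stray particles).
def agent_decide(state, inputs):
    # Any deterministic rule works; this one is arbitrary.
    return "one-box" if (len(state) + len(inputs)) % 2 == 0 else "two-box"

def predictor_predict(state, inputs):
    # Exact emulation of the agent on the same data.
    return agent_decide(state, inputs)

state, inputs = "memory-snapshot", "webcam-frame"
assert predictor_predict(state, inputs) == agent_decide(state, inputs)
```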
We don't know whether or how our actions and thought processes might affect the outcome, so any speculation over odds is meaningless and devolves into making assumptions we can't test, without even knowing whether that speculation itself might alter the outcome, or how.
But I don't need to speculate about the relative value of $1000 and $1000000 to me. Others might opt for the safe $1000 for the same reason.
No matter what you do after you enter the room, the predictor has already made their move; nothing you do now will change it. The only logical thing to do is to take both boxes, because whatever the value in the second box is, it gets added on top of the first box's $1,000. If you only take the second box, you are objectively always giving up $1,000 and getting no value in exchange for doing so (since not taking the first box doesn't change what's in the second).
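The dominance argument in table form (standard amounts assumed):

```python
# Holding the contents fixed (the predictor has already moved),
# two-boxing adds $1,000 in either state of the opaque box.
for opaque in (0, 1_000_000):
    print(f"opaque=${opaque:,}: one-box ${opaque:,}, two-box ${opaque + 1_000:,}")
```

Though, as the reply below points out, the two rows aren't equally likely given your choice, and that is the whole disagreement.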
If the predictor is indeed flawless, or almost flawless, if I were to be the type of person likely to pick both boxes, the opaque box would almost certainly be empty. So the winning strategy is not just picking the opaque box, but being the kind of person likely to pick the opaque box only.
You're right that what I do after I enter the room is irrelevant if it is somehow independent of what I have done before. But it can't be independent of what I did before if the predictor is flawless. If the predictor is flawless, then either my actions need to be deterministic, so that it can in fact know what I will do when in the room, or the predictor is supernatural and can know, or cause, my actions for that reason.
Either way, giving any indication that you'd pick both boxes would be a bad idea (so I guess my typo above might screw me over if ever presented with this choice).
Congratulations on your $1,000. I'll use some of my $1,000,000 I got by nonsensically picking one box to toast in your honor and dedication to logic.
https://arxiv.org/pdf/0904.2540
Abstract:
> ...We show that the conflicting recommendations in Newcomb’s scenario use different Bayes nets to relate your choice and the algorithm’s prediction. These two Bayes nets are incompatible. This resolves the paradox: the reason there appears to be two conflicting recommendations is that the specification of the underlying Bayes net is open to two, conflicting interpretations...
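A toy illustration of that point (not the paper's formalism, just two hypothetical ways to wire your choice to the box contents):

```python
# Two incompatible ways to relate your choice to the box contents,
# with predictor accuracy p and the standard hypothetical amounts.
p, big, small = 0.99, 1_000_000, 1_000

# Net 1: contents track the choice (prediction correlated with choice).
ev_one_net1 = p * big
ev_two_net1 = p * small + (1 - p) * (big + small)

# Net 2: contents were fixed before the choice, with some prior q.
q = 0.5  # hypothetical prior over the opaque box being full
ev_one_net2 = q * big
ev_two_net2 = q * big + small

print("net 1 recommends:", "one-box" if ev_one_net1 > ev_two_net1 else "two-box")
print("net 2 recommends:", "one-box" if ev_one_net2 > ev_two_net2 else "two-box")
```

Same scenario, two recommendations, depending purely on which net you write down.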
• You take one box and get $1000000
• You take two boxes and get $1000
The choice seems quite clear to me.
So the amount of money in the black box doesn't change whatever you REALLY pick. Either the predictor guessed you'd pick both and there is $0 in the black box, in which case it's in your interest to take both boxes and win $1,000, which is better than zero.
Or it predicted you would only take the black box and put $1,000,000 in it, and then again you win more by taking both boxes.
I'd take the $1000 box without the second box just to mess with the computer.
Free money scenarios are always suspect so why would you ever expect to get a million dollars out of one?