Meanwhile the utility of the mugger's finger is questionable. The pain of losing the finger is the only real cost. If the mugger is just a petty criminal, losing a finger will probably reduce his ability to commit crimes and prevent him from inflicting as much suffering on others as he otherwise would have. Maybe losing his finger actually increases utility.
Bentham: "I'm sorry Mr. Mugger but I am on my way to spend this 10 pounds on a supply of fever medication for the orphanage and I am afraid that if I don't procure the medicine, several children will die or suffer fever madness. So when faced with calculating the utility of this situation I must weigh your finger against the lives of these children. Good day. And if the experience of cutting your finger off makes you question your own deontological beliefs, feel free to call upon me for some tutoring on the philosophy of Act Utilitarianism."
Any other scenario and Bentham clearly isn't a true Act Utilitarian and would just tell the Mugger to shove his finger up his ass for all Bentham cares. Either strictly apply the rules or don't apply them at all.
They tend to look best on the moral equivalent of PowerPoint slides. But if you look beyond toy examples, nearly everything is too complex to believe that you've really formed a decent model of the situation. Without that model you're stuck where you were without the Utilitarian principles.
It's fun to argue about but I don't think it makes for a pragmatic moral system. And it's easily subverted by people claiming to present a moral case that is in fact incomplete, leading to abhorrent conclusions that they feel rigorously bound by.
This is an interesting formulation; when you put it that way, I imagine most readers have seen some 'moral PowerPoint slides' that fall apart on closer examination.
That said, I am doing my best to come up with an example that avoids the problem you mention. If we imagine a utopian society where everyone is a perfect act utilitarian except the mugger, and all resources are distributed in a totally fair way such that any $10 will buy much less utility than saving a finger, the mugger's tactic would be harder to avoid.
I think the problem I still have with this is that it's basically saying that it's possible for a jerk to take advantage of a bunch of nice people, which isn't that interesting of a conclusion.
So, the trolley problem? "Philosophy is bunk".
Yes, the mugger removing his own finger, in Bentham's words, would "prevent the happening of mischief, pain, evil, or unhappiness", assuming the mugger is going to continue a life of crime where fingers help him. If the mugger was a pediatric surgeon who was performing a one-time theft (and we can trust him to never do it again) to get some quick cash on his way to work it might work. It doesn't resolve what the money was originally for, but at least that finger could be as important as your medicine.
The solution is indeed "don't give in to muggers", but it's possible to define this in a workable way. Suppose the mugger can choose between A (don't try to mug Bentham) or forcing Bentham to choose between B (give in) or C (don't give in). A is the best outcome for Bentham, B the best outcome for the mugger, and C the worst for both. The mugger, therefore, is only incentivized to force the choice if he expects Bentham to go for B; if he expects Bentham to go for C, then it's in his interest to choose A. Bentham, therefore, should have a policy of always choosing C, if it's worse for the mugger than A; if the mugger knows this and responds to incentives (as we see him doing in the story), then he'll choose A, and Bentham wins.
And none of this has anything to do with utilitarianism, except in the respect that utilitarianism requires you to make decisions about which outcomes you want to try to get, just like any other human endeavor.
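To make that incentive structure concrete, here's a minimal sketch with made-up ordinal payoffs (the outcomes A/B/C are from the comment above; the numbers are purely illustrative):

    # Outcomes: A = no mugging attempt, B = Bentham gives in, C = Bentham refuses.
    # Payoffs are (Bentham, mugger); higher is better for that player.
    PAYOFFS = {
        "A": (2, 1),  # best for Bentham, middling for the mugger
        "B": (1, 2),  # best for the mugger
        "C": (0, 0),  # worst for both
    }

    def outcome(bentham_policy: str) -> str:
        """The outcome the mugger steers toward, given Bentham's known policy."""
        result_if_mug = bentham_policy  # "B" (give in) or "C" (refuse)
        # The mugger only attempts the mugging if that beats walking away ("A") for him.
        if PAYOFFS[result_if_mug][1] > PAYOFFS["A"][1]:
            return result_if_mug
        return "A"

    print(outcome("B"))  # "B": a Bentham known to give in gets mugged
    print(outcome("C"))  # "A": a Bentham known to refuse never gets mugged

The point survives any payoffs with this ordering: as long as the mugger prefers walking away to a refused mugging, a credible "always refuse" policy means the mugging never happens.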
In the original situation, where the mugger is harming themselves, the critique is that utilitarians are required to treat their own interests as exactly the same as other people’s interests. It doesn’t matter if someone is harming themselves in order to provoke some action from you; if your action prevents that harm, you are obligated to do that action (even if you suffer because of it).
Notice that what Bentham is altering is their strategy and not their utility. If they could spend 10 dollars to treat gangrene and save the fingers, they would do it. It's not clear many other morality systems would be as insistent on this as utilitarianism, because practitioners of other moralities curiously form epicycles defending why the status quo is fine anyway, how dare you imply I'm worse at morality.
Edit: Slight wording change for clarity
"Always go for C (or any strategy)" is not in general a utilitarian strategy, so the mugger would not expect Bentham to employ it.
Your argument assumes that the characters have perfect knowledge, but the point of the parody is that utilitarian choices can change as more information is revealed.
Yes, the mugger could have said something like "if I were to promise to cut off my finger unless you gave me £10, would you do it?", Bentham could have followed up with "if you knew I would reply no to that question, would you make that promise?", the mugger could have replied "no", Bentham could have responded "In that case, no", and the mugger would have walked away. But Bentham doesn't have all the information until he is faced with the loss of a finger which he can prevent by giving up £10. Bentham is obliged to do so, as it maximises the overall good at that (unfortunate) point.
The idea that Bentham can be "trapped" in a situation where he is obliged to cause some small harm to himself in order to prevent a greater harm is the parody of utilitarianism which is at the heart of the story.
For example, if you can learn to feel sadder about something, other people who want to minimize your sadness will acquire an incentive to help you avoid that thing, even at some cost to themselves.
In many moral intuitions, you find the world as it is and then act on it in some way, without other people strategizing about, or being incentivized by, your moral reasoning. But when other people can do those things, you can get very weird outcomes.
After all, one of the premises here is that the mugger is a deontologist. He doesn't care about outcomes.
> Fair enough. But, even so, I worry that giving you the money would set a bad precedent, encouraging copycats to run similar schemes.
I don’t understand how it was logically defeated with escalation as in the story. Would it be wrong for a Utilitarian to continue arguing against this precedent, saying that the decision to be mugged removes overall Utility, because now anyone who can be sufficiently convincing can also effectively steal money from Utilitarians? (I guess money changing hands is presumed net neutral in the story?)
Utilitarianism does not benefit from covert insertions of specific moral carve-outs. Surmisal does not impact outcomes, only predictions of outcomes. It is not appropriate to make judgments based on surmisal, because utilitarianism can only ever look backward at effects to justify actions post-hoc. This is the primary flaw with utilitarianism as a moral philosophy.
Reality is a big, complex ball of Stuff, and any attempt to impress morality upon it will be met with many corner cases which produce unwanted results unless we spend our time dealing with what initially look like tiny details.
More seriously, any moral theory that strives too much for abstract purity will be vulnerable to adversarial inputs. A blunt and basic theory (common sense) is sufficient to cover all practical situations and will prevent you from looking very dumb by endorsing a fancy theory that fails catastrophically in the real world. [1]
[1] https://time.com/6262810/sam-bankman-fried-effective-altruis...
What would you count as evidence that effective altruism fails?
It started answering, then within seconds my question was replaced with "This content may violate our content policy. If you believe this to be in error, please submit your feedback — your input will aid our research in this area."
Then, seconds after, the still-appearing answer was replaced with the same message.
Doh! The content filter got tripped because "obviously" it's not a philosophical thought experiment about utilitarianism but an evil text about mugging someone, which is an illegal activity. What a time to be alive!
1: the dig at Effective Altruism;
2: I went to UCL back in the days when you could hang out with the Bentham In A Box;
3: One of my (distant) colleagues is a descendant of Bentham.
How is this clear? This is one of the things I find strange about academic philosophy. For all the claims about trying to get at a more rigorous understanding of knowledge, the foundation at the end of the day seems to just be human intuition. You read about something like the Chinese Room or Mary’s Room thought experiments, that seem to appeal to immediate human reactions. “We clearly wouldn’t say…” or “No one would think…”
It feels like an act of obfuscation. People realize the fragility of relying on human intuition, and react by trying to dress human intuition up with extreme complexities in order to trick themselves into thinking they’re not relying on human intuition just as much as everyone else.
Also, moral philosophy deals with what is right and what is wrong. These are inherently fuzzy notions and they likely require some level of intuitive reasoning. ("It is clearly wrong to kill an innocent person.") I would be extremely surprised if someone could formally define what is right and wrong in a way that captures human intuition.
It's also not worth debating philosophy with people who will argue that $10 is not clearly worth less than a finger. (And if you don't believe that, then we can consider the case with two fingers, or three, or a whole hand, etc.).
Some of these arguments feel like the equivalent of spending billions to create a state of the art fighter plane and not realizing they forgot to put an engine inside of it.
It’s not $10 vs. “a finger,” it’s $10 vs. the finger of someone who goes about using their fingers to threaten people to give them money. If the difference isn’t immediately obvious, I think it’s time to step back from complex frameworks and take a look at failures with common intuition.
Lots of things in philosophy might have debatable rigor, but this in particular isn't one of them.
> The finger is clearly a case where the utility disparity is obvious.
This is the erroneous assumption that leads to the false conclusion. There's no such thing as an obvious utility disparity. It's a decent heuristic that works fine in the real world, but in this imaginary scenario where a person would actually be willing to cut off their own fingers simply because they have not gained £10, it no longer holds true.
What function do I use? Do I sum them, is it the mean, how about root-mean-squared? Why does your chosen function make more sense than the other options? Can I perform arithmetic on utilities from two different agents, isn't that like adding grams and meters?
Not just "how", but whether doing such a thing is even possible at all. And even that doesn't push the problem back far enough: first the utilitarian has to assume that utilities, treated as real numbers, are even measurable or well-defined at all.
Traditionally you use the sum, which gets you total utilitarianism. Some have advocated the average, which gets you average utilitarianism. https://en.wikipedia.org/wiki/Average_and_total_utilitariani...
> root-mean-squared
Why?
> Can I perform arithmetic on utilities from two different agents?
This is called "interpersonal utility comparison", and there's a ton of literature on it. Traditionally utilitarians have accepted it, and without it ideas like "sum the utility across everyone" don't make sense.
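For a toy illustration of why the choice of aggregation function matters (the utility numbers below are made up, not from the thread), the sum and the average can rank the same two populations in opposite orders:

    world_1 = [5, 5, 5]        # three people, moderately well off
    world_2 = [9, 9, 1, 1, 1]  # five people, unequally well off

    def total(u):
        return sum(u)              # total utilitarianism

    def average(u):
        return sum(u) / len(u)     # average utilitarianism

    print(total(world_1), total(world_2))      # 15 vs 21  -> total prefers world_2
    print(average(world_1), average(world_2))  # 5.0 vs 4.2 -> average prefers world_1

Note that both aggregations already assume the interpersonal comparison mentioned above: the 5s and 9s have to be on a common scale before summing or averaging them means anything.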
I mean, that's a problem that lots of people skip to in utilitarianism, but the bigger problem is that utility isn't really measurable in a way that produces a meaningful "vector of utilities" in the first place.
Utilitarianism has always been more about government policy, so yes it's largely inappropriate as the foundation for a system of personal ethics, at least in its basic forms.
But I'm unaware of any university class that has ever attempted to "build a real system of ethics".
That's not what moral philosophy classes do. The whole point is every moral system encounters major showstopper flaws, and how to reconcile those is one of the great unsolved problems of humanity.
https://blog.superb-owl.link/p/contra-ozy-brennan-on-ameliat...
I’m by no means opposing a general morality of optimising for the greater good, and I think on the whole utilitarianism, like other ideological/ethical systems, gets critiqued in comparison to an impossible standard of perfection. My sense is there are some more basic principles that underpin the success and pragmatism of any ethical/ideological system, and that these principles, to your implied point I think, would safeguard utilitarianism as well as other systems.
I think this is implied in the critique some have against utilitarianism, namely that it needs to introduce weighting in order to adjust the morality towards palatable/sensible means and outcomes. But I don’t think any system could avoid those same coping mechanisms.
What basic principles are you thinking of? Even more basic than hedonism, consequentialism, etc.?
Weighting is just one of the critiques against utilitarianism, and it’s a valid one. Maybe the extreme happiness of one person isn’t worth the mild suffering of 5 people. But pretending that this upends the entirety of this moral framework, and not just one of its building blocks (basically the aggregation function), is kinda silly.
This also deliberately avoids introducing irrelevant arguments. By framing it as a mugger who wants to gain money for purely selfish reasons, we deliberately exclude complicating factors from the statement.
* The argument could be framed around donating to the Susan G. Komen Foundation, rather than a mugger. With the controversies it has had [0], it could be argued that these donations may or may not increase total utility, but donations to charities are part of the best possible path. However, using the Susan G. Komen Foundation as an example relies on accepting a premise that it isn't using donations appropriately, and makes the argument dependent on whether that is or isn't the case.
* The argument could be framed around allowing tax exemptions for all self-described charitable foundations, with Stichting INGKA Foundation [1], part of the corporate structure that owns IKEA, playing the narrative role of the mugger. The argument would be that the tax exemptions provided to charitable foundations are necessary for bringing about the best outcomes, but that they can be taken advantage of. Here, the argument would depend on whether you view the corporate structure of INGKA as a legitimate charity.
* Even staying with purely hypothetical answers, we could ask whether the mugger is going to starve should the mugging be unsuccessful. These could veer into questions of the local economy, food production, and so on, none of which help to test the validity of utilitarianism.
I've heard this described as crafting the least convenient world. That is, whenever there's a question about the hypothetical scenario that would let you avoid an edge case in a theory, update the hypothetical scenario to be the least convenient option. What if the mugger just needs a hug? Nope, too convenient. What if the mugger isn't going to go through with the finger-chopping? Nope, too convenient.
[0] https://en.wikipedia.org/wiki/Susan_G._Komen_for_the_Cure#Co...
[1] https://en.wikipedia.org/wiki/Stichting_INGKA_Foundation
In theory a utilitarian is likely comfortable with the in-principle idea that they might need to sacrifice themselves for a greater good. Pointing that out isn't a counterargument against utilitarianism. In practice, no utilitarian would fall for something this dumb. They'd just keep the money and assume (correctly in my view) they missed something in the argument that invalidates the mugger's position. Or, likely, assume the mugger is lying about being an insane deontologist.
Or, even better, what if distribution of this life-saving cure were done based on the deontological concept of fairness? Surely this wouldn't result in limited and highly demanded vaccines being literally thrown away [1] in the name of equity, or in vaccine companies needing to seek approval for something as simple as increasing the number of doses of vaccine in their vials. [2]
You know, just all theoretically, since it would be a terrible shame if any of these things happened in the real world, since this is just one specific scenario and I'm sure I can make up various [3] other [4] ways [5] in which not carefully evaluating the consequences of moral actions would turn out poorly, but hey!
I'm sure glad that utilitarianism isn't being entertained more on the margin, since we already live in the best of all possible moral universes.
(Footnote: I'm not going to justify these citations within this post, because it's pithier this way. I recognize this is not being fully honest and transparent, but I'd be happy to fully defend the inclusion of any of these, if necessary.)
[0] https://www.cdc.gov/mmwr/volumes/70/wr/mm7014e1.htm
[1] https://worksinprogress.co/issue/the-story-of-vaccinateca ctrl f "On being legally forbidden to administer lifesaving healthcare"
[2] https://www.businessinsider.com/moderna-asks-fda-approve-mor...
[3] https://news.climate.columbia.edu/2010/07/01/the-playpump-wh...
[4] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2641547
[5] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=983649
The game of coming up with "counterexamples" to moral theories is fun, but basically stupid. By definition it involves "contriving" cases, however realistic they may seem, which can make whatever preposterous "stipulations" they please. The underlying assumption is that moral theories are somehow like scientific theories in that they are validated by "predicting" the available observational "data", i.e. our moral intuitions, i.e. the social values of the cultural/economic groups we're a part of. Mysteriously, christian conservative scolds engage with philosophy and end up developing something a lot like christian social conservatism, and cosmopolitan liberal scolds come up with something a lot like cosmopolitan social liberalism, despite the fact that both are engaged in this highly scientific form of inquiry. Very odd.
The whole game is also probably largely irrelevant to the kind of stuff Bentham actually cared about, since he mainly wanted to use utilitarianism to guide state policy, and (famously) hard cases make bad law.
Functional Decision Theory
My point is: is that so clear? Or is the utility function being presumed here lacking?
I imagine you could define utility that way, but presumably the mugger could increase the cost (two fingers? an arm?) until the argument works. Also, if you do define a utility function like that (say, "there is more utility in this £10 being mine rather than yours than the utility of your arm") then that's a pretty questionable morality.
If the mugger doesn't want his own finger, Bentham can choose to trust him that 9 fingers are better than 10. Maybe the mugger is even behaving rationally; maybe the 10th finger has cancer, who knows. As the story illustrates, giving him $10 didn't stop him from losing his finger. There are many factors here that make the situation unclear.
They're all lacking in some way, so sure.