[content warning: brief discussion of thought experiments involving infanticide, rape, incest, etc.; extensive discussion of thought experiments involving torture]
Scott Alexander wrote an essay a while back arguing that:
Moral dilemmas are extreme and disgusting precisely because those are the only cases in which we can make our intuitions strong enough to be clearly detectable. If the question was just “Which is worse, a thousand people stubbing their toe or one person breaking their leg?” neither side would have been obviously worse than the other and our true intuition wouldn’t have come into sharp relief. So a good moral philosopher will always be talking about things like murder, torture, organ-stealing, Hitler, incest, drowning children, the death of four billion humans, et cetera.
This is totally, completely, flat-out wrong.
Let’s take the Torture vs. Dust Specks thought experiment. Scott praises it as “beautiful in its simplicity; it just takes this assumption [that utility aggregates linearly] and creates the most extreme case imaginable.” He criticizes people who say that philosophers are crappy people for even considering it.
But the thing is that there is actually an alternate formulation of Torture vs. Dust Specks! It’s Alicorn’s essay, Sublimity vs. YouTube:
Suppose the impending existence of some person who is going to live to be fifty years old whatever you do. She is liable to live a life that zeroes out on a utility scale: mediocre ups and less than shattering downs, overall an unremarkable span. But if you choose “sublimity”, she’s instead going to live a life that is truly sublime. She will have a warm and happy childhood enriched by loving relationships, full of learning and wonder and growth; she will mature into a merrily successful adult, pursuing meaningful projects and having varied, challenging fun. (For the sake of argument, suppose that the ripple effects of her sublime life as it affects others still lead to the math tallying up as +(1 sublime life), instead of +(1 sublime life)+(various lovely consequences).)
Or you can choose “Youtube”, and 3^^^3 people who weren’t doing much with some one-second period of their lives instead get to spend that second watching a brief, grainy, yet droll recording of a cat jumping into a box, which they find mildly entertaining.
Sublimity or Youtube?
Sublimity vs. YouTube gets at the same utility aggregation problem as Torture vs. Dust Specks. It elicits the same emotional response of “how can a single moment outweigh an entire life?” It shows the same fact that people cannot emotionally understand very very large numbers. It does feature pleasure instead of pain, but torture vs. dust specks presupposes utilitarianism, which traditionally treats the two as interchangeable. Literally the only difference is that no one will think you are a terrible person who supports torture.
And yet I predict most of the people reading this have never heard of it.
Taking torture out of the thought experiment has two advantages. First, a certain percentage of the population’s brains shut down as soon as they see words like “torture” and “rape”, and you will not get any arguments out of them other than TORTURE IS BAD RAPE IS BAD HOW DARE YOU SUGGEST THAT TORTURE AND RAPE ARE GOOD THEY ARE BAD. Indeed, that’s what Scott’s post is complaining about. Now, you can argue that these people should not do that thing. I might even agree! But as long as they continue to exist, including torture in your thought experiment is basically saying “It is my belief that everyone in the category ‘people who are totally, irrationally averse to torture’– which, anecdotally, seems to be at least half of the population– has absolutely nothing interesting or important to say about utility aggregation, such that I am willing to entirely shut them out of the discussion.”
(Compare: Everyone should have basic statistical literacy, enough so they can read even misleading graphs. It is still wrong to make misleading graphs.)
Now, some thought experiments are deliberately intended to get at those people’s TORTURE IS BAD RAPE IS BAD sense. “Is there anything wrong with consensual, protected incest that is kept secret and that everyone involved thought was a wonderful experience that brought them closer?” is supposed to contrast our instinctive INCEST IS BAD with the observable fact that that incest had no negative consequences. Phil Robertson’s “atheists would think that someone raping their daughters in front of them is morally wrong” observation is supposed to contrast the idea that there’s no such thing as morality with people’s instinctive moral sense. Fine.
But, first, that is a relatively narrow category. It doesn’t even include things like Peter Singer’s advocacy for infanticide: the question of whether babies have a right to life could be just as easily discussed with the framing “was it okay for classical Athenians to leave unwanted babies to die?” as “is it okay to kill disabled babies?”, but the former is much less emotionally laden. Second, there’s no justification for making it more upsetting than necessary. “Do atheists think rape is wrong?” gets at the issue, you don’t need to include the brutal raping of daughters in front of people.
Furthermore: how confident are you that “TORTURE IS BAD TORTURE IS BAD” is actually an incorrect thing to feel about torture? The correct utilitarian rule about torture is “don’t torture people, even if it’s the right thing to do; it is more likely you are mistaken than that torture is morally right.” Being repelled by torture to the extent that you can’t even consider that it’s correct in a thought experiment seems to me like the way that your emotions and intuition internalize that rule. By developing your capacity to be okay with torture in thought experiments, you are practicing being okay with torture. Even if your rational mind still endorses the rule “don’t torture people, even if it’s the right thing to do”, your intuition has shifted to “don’t torture people, unless it’s right– and since we keep thinking about times when it’s right, there are a lot of those times accessible by our availability heuristic.”
(Remember that your intuition doesn’t understand big numbers. That’s part of the purpose of torture vs. dust specks to begin with.)
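(As a concrete aside: here is a minimal Python sketch of Knuth’s up-arrow notation, which “3^^^3” uses — the function name is my own, purely illustrative. Even 3^^3, with two arrows, is already about 7.6 trillion; 3^^^3 is a power tower of that many threes, far beyond anything a computer, let alone an intuition, can represent.)

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ↑^n b: n=1 is plain exponentiation,
    and each additional arrow iterates the previous operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^(3^3) = 7625597484987, i.e. "3^^3"
# up_arrow(3, 3, 3) is 3^^^3: a tower of ~7.6 trillion threes.
# Don't run it -- it could not terminate in the lifetime of the universe.
```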
I think a lot of torture vs. dust specks arguers aren’t really interested in the paradoxes of utility aggregation. They’re interested in signaling that they are hard-headed people who bite bullets and come to counterintuitive ethical conclusions. And, you know, if you want to optimize your thought experiments for signaling hard-headed contrarianism, that’s your business. But you really shouldn’t pretend that it’s just a product of the tragic constraints of moral philosophy and there’s nothing you can do about it.
David Chapman said:
Yay! Score: Scott 0, Ozy 1!
(I love you both, however.)
blacktrance said:
The fact that torture is horrible is a feature of these scenarios, not a bug. Alicorn’s scenario doesn’t portray the problem in as sharp a light as torture vs dust specks – it seems easy to just shrug and say “It’s a difficult question, who knows what the answer is?” and not really think about it more. But the horribleness of torture motivates actually solving the problem.
Also, if torture can be right, I desire to believe that torture can be right.
ozymandias said:
How confident are you that that effect outweighs the effect of making people extremely irrational about the thought experiment?
propater said:
“Also, if torture can be right, I desire to believe that torture can be right.”
I think you are missing the whole point of the experiment. Subjecting 3^^^3 people (we are talking about a huge number of entire space-faring civilizations, probably across multiple universes) to dust specks is also torture. If you think utility scales linearly, it does not make a difference if “torture” is distributed or not… If you disagree with the linear part, you are not choosing the torture horn anyway…
So I think you kind of prove Ozy’s point with your comment, bringing torture into the experiment distracts you from the point and prevents you from thinking clearly.
Also: ” it seems easy to just shrug and say “It’s a difficult question, who knows what the answer is?” and not really think about it more. But the horribleness of torture motivates actually solving the problem.”
People who shrug off the Sublimity vs Youtube dilemma probably have not thought very much about utilitarianism anyway and are going to choose specks without a thought, and loudly so. If they engage the argument, they are probably not going to do it in a productive way. Why you would want to pull in people who would make the conversation actively worse escapes me.
Baby Beluga said:
I think you’re presupposing that you should prefer torture, and implying that the opposite opinion is born of irrationality. And I mean, maybe it is, but the point of the thought experiment was to reach people’s intuitions under extreme conditions.
What I’m trying to say is that maybe the “TORTURE IS BAD TORTURE IS BAD” reaction to the thought experiment is what was trying to be reached in the first place. (Obviously, if someone is seriously triggered by torture, don’t tell them about the thought experiment–I’m referring more to the people with an “irrational aversion” to torture.)
Like, Scott’s post was all about finding out if an apparent rule would be broken under extreme conditions. From everyday life, you might think that the right moral principle was “value aggregates linearly”–but maybe the real moral principle is “value aggregates linearly, except when torture is involved.” It would be a shame not to find out the real moral principle because an extreme enough thought experiment hadn’t been considered.
Basically, what Blacktrance said–people’s irrational aversion to torture is a feature, not a bug. We’re finding out what their moral intuitions really are.
I hope this makes sense.
ozymandias said:
Well, what’s the purpose of the thought experiment?
If the purpose is “utility aggregation produces counterintuitive results because our brains don’t understand really big numbers,” then sublimity vs. youtube works just as well. If the purpose is “people have moral intuitions that they can’t really justify”, then the inclusion of the incomprehensibly big numbers confuses the point and you should use the incest thought experiment instead. If it’s both, then it is as if Thomson decided to modify the violinist thought experiment to include “…and the violinist requests to die, because he finds his life not worth living!” so that her thought experiment would address both abortion and assisted suicide. Thought experiments should try to get at one thing; more than one confuses the situation for everyone.
wfenza said:
I like torture vs. dust specks because it’s an example of why utilitarianism is wrong. If your answer there is “I prefer torture!” then you’re wrong. A right-thinking person won’t prefer torture.
Of course, I have uncommon ideas about ethics.
ozymandias said:
…weren’t you arguing simplistic hedonic utilitarianism like two days ago
wfenza said:
No! I was arguing a complicated hedonic egoism. And in my ethical system, the torture is wrong because it causes me more mental distress to think about a person being tortured than it does to think about 3^^^3 people with dust specks in their eyes. But I don’t expect very many people to agree with that outlook, so I have to find other ways to point out that utilitarianism doesn’t make sense.
Baby Beluga said:
Ozy, I think of the purpose of the thought experiment as being to answer the question, “Does utility aggregate linearly?” From this point of view, people’s intuition that you should prefer dust specks to torture is important evidence against utility aggregating linearly, and it’s evidence that would be lost otherwise. Indeed, imagine if the question had been posed without using torture as an example, and had instead been posed using YouTube, or with no accompanying example! Then I think almost everyone would agree that utility aggregates linearly, which would give us an incomplete view of morality in the same way that only considering objects moving much slower than the speed of light would give us an incomplete view of physics (to borrow Scott’s example).
(This comment makes me sound like I’m in favor of choosing dust specks. In fact, I’m not! But I think that the framing of the problem of utility aggregation as torture vs. dust specks is important in order to make the ramifications of assuming utility aggregates linearly apparent.)
heelbearcub said:
I agree that the dust speck side is the “right” side, but I think the very large number argument hides what is really going on.
We are creatures that are built to deal with dust specks. This is one dust speck second among hundreds that occur in a lifetime of 2 billion seconds. It is statistical noise in the lifetime. Real harms might come from the large numbers, but it would be because the dust speck was the freak event that caused some large tragedy, like a car accident. I think the framing of the argument specifically rules that out.
So you are comparing one life that is harmed irreparably vs what amounts to zero times infinity (i.e. zero), because the person’s life has not changed. Much like a train going down a track won’t be affected by another grain of sand on the track. Dust specks are within the normal operating parameters of human life.
I made some similar arguments over on a thread at SSC.
pedromvilar said:
@heelbearcub: I think in the original thought experiment you’re supposed to disregard all knock-on effects. This means that you’re comparing *strictly* the dust specks to the torture, and pretending that they’re causally disconnected from the rest of our lives, i.e. that after the torture no one’s going to be insane or harmed irreparably and they will lead a normal life, and after the dust specks no one will be involved in freak accidents due to dust specks.
rageofthedogstar said:
I think it’s the first one, but I disagree that sublimity vs youtube works just as well. For something to be counterintuitive it has to produce an intuition in the first place, and, for me at least, the modified experiment just doesn’t produce any sort of intuition – sublimity just doesn’t feel viscerally right to me in the way that torture feels viscerally wrong.
heelbearcub said:
@pedromvilar: Yes, I agree that the original framing is to ignore knock-on effects. I was trying to reiterate that my analysis depended on that framing.
And given that analysis, this is why the “harms” don’t aggregate. There is no harm at all. To illustrate this even further, think of getting a 1-second glimpse of a room painted in a color the person finds unattractive. Now aggregate that over 2^^^3 or 2^^^^3 or … add as many arrows as you wish. The harms don’t aggregate because each individual person isn’t suffering any harm (statistically speaking). You are not only adding “harms” but overwhelming those harms with the rest of those individuals’ lives.
TomA said:
Thought experiments can have many forms and features. Scott describes one type of thought experiment that incorporates a clear benefit; that of exploring a dilemma to the degree that it reveals an expanded understanding of one’s innate bias. What is objectionable about that?
caryatis said:
Hmm, I don’t completely disagree. I think it can be valuable to set up certain mental taboos, which are irrational I guess but useful. For instance, forbidding yourself from being attracted to someone when there’s a good reason not to be attracted to them. Maybe this is what people who are really good at monogamy do.
roe said:
No, I hadn’t heard of Sublimity vs. Youtube before, but when I read it, I just thought “Well, it’s an arguable point, but if I believe that feelings aggregate, I should prefer dust specks & youtube. If I’m being de-ontological (if I have that right?) I can say “everyone has a right to life/not be tortured!”” What’s being revealed by the polarity reversal isn’t exactly clear to me…
Re: Practising to be alright with torture
OK, but how often is a person likely to be confronted with such a real-life choice, such that it would matter? The closest thing I can think of is a “ticking time-bomb” scenario (Sam Harris, for example, endorses torture on that one on Utilitarian grounds) – but it seems to me plenty of people take a de-ontological approach. If we’re talking about gov’t policy, we’re essentially arguing over whether or not we can ask and assess what the gov’t (and by extension, the populace) gets out of allowing torture. But if we can’t ask that question, we have to allow for the extreme case of being OK with a nuclear bomb going off in a major city to avoid torturing someone. This is not an easy question to deal with.
“The correct utilitarian rule about torture is “don’t torture people, even if it’s the right thing to do; it is more likely you are mistaken than that torture is morally right.””
On what grounds? Isn’t this something that needs to be taken on a case-by-case basis if we’re taking Utilitarianism seriously?
ozymandias said:
Conveniently, torture doesn’t work. Even in a ticking time bomb scenario (which is essentially a myth), there’s a high chance of getting false information, either because the terrorist has the presence of mind to lie or because the terrorist doesn’t know where the bomb is located and still wants the torture to stop. When you account for the fact that most claimed cases of ticking time bombs aren’t, I think the rule I laid out is correct.
I’m a rule utilitarian, not an act utilitarian, so I don’t have to take it on a case-by-case basis.
roe said:
Oh! I didn’t know about rule utilitarianism – that helps me understand where you’re coming from. Thanks.
(Maybe “torture never works – or doesn’t work enough to warrant a rule” is an arguable point – but I’m not anywhere near qualified to argue it, so I’ll concede)
osberend said:
Minor nitpick/potentially useful information: The word you are looking for is “deontological,” with no hyphen, from the Greek word δέον/deon; the similarity to the word “ontology” is a partial coincidence.
roe said:
Let’s take a historical example:
Winston Churchill declined to act on information obtained from code-broken Enigma transmissions that could’ve saved boatloads of young men from dying. These young men presumably would’ve gone home to get married, have children, live good lives, &etc.
He did this because he didn’t want the Germans to know that the Enigma code had been broken, and there were more valuable tactical opportunities to use obtained information later.
Is there a moral problem with this?
ozymandias said:
Yes, I am a utilitarian, but I am very confused what your question has to do with the overall post, which is not about specific thought experiments but about techniques of creating thought experiments.
wfenza said:
I think it’s implying that if you refuse to consider [really bad thing] in thought experiments, you won’t be able to consider [really bad thing] when faced with it in reality. Your decision-making skills will be underdeveloped.
MugaSofer said:
WRT torture, it seems trivially obvious to me that it’s the correct action in a “ticking time-bomb” scenario.
It shouldn’t be legal, perhaps, because there’s a risk of the state abusing that power (which of course they would never do if it was illegal.) But laws are not ethics.
So yes, I would say that someone who goes “torture is never right, even if it looks right” is not being a virtuously humble person, they’re just oversimplifying.
That said, I think you’re sooorta right about Sublime vs. Youtube, although in practice I don’t think it’s quite extreme enough. “Youtube” sounds too small, and “sublime” sounds too vague. That’s just a quibble, though; the principle is sound.
I don’t think this applies to that guy who asked if atheists (I think he actually meant nihilists specifically, but didn’t draw a distinction because outgroup stereotyping) opposed having their families killed in front of them. Merely saying “do they oppose rape” is too vague; it doesn’t have nearly the same emotional weight.
You could remove the specific example of rape-murder, but even if you replaced it with something like “would you want your kids to have great lives or unfulfilled ones” à la Alicorn you’d get the same idiots accusing you of “fantasizing about ruining atheists’ lives”. You can’t discuss the existence of evil without giving examples of things that are, y’know, evil. Nobody was endorsing raping and murdering children in this case.
I’m not even sure about Singer, since his attempts to be prescriptive rather than descriptive WRT ethics seem staggeringly terrible to me. He’s literally using the ridiculous slippery slopes pro-lifers point to and going “see, don’t you just want to jump off those?”
osberend said:
This may well set the record for length of a post here that I agree with unreservedly. A few additional thoughts:
It shouldn’t be legal, perhaps, because there’s a risk of the state abusing that power (which of course they would never do if it was illegal.)
What I suspect amounts to the same thing, but which seems to me to get more directly to the heart of the matter, is this:
Some individual (usually more than one, but not always) has to make the decision that the situation is sufficiently dire, and other options sufficiently lacking, that torture is justified. If that individual would torture under a particular set of circumstances if doing so were legal, but not if it were criminal (and therefore put them at risk of punishment), there are three possibilities:
1. They don’t really think that it’s serious enough to justify torture, but are prepared to torture anyway.
2. They have an excessively low bar for when a situation is serious enough to justify torture.
3. They are unwilling to break the law (as such) and/or risk punishment even in a dire situation in which all the other options are far worse.
Conveniently all three of these lead to the same conclusion: This is not a person whom I want to have the power to decide when to use torture!
He’s literally using the ridiculous slippery slopes pro-lifers point to and going “see, don’t you just want to jump off those?”
Singer is, like Kant, a beautiful example of just how bizarre one’s sense of morality can get if one attempts to start from a single moral axiom and derive a perfectly consistent system of morality, without any reference to what it means to be a good human.
osberend said:
The part from “what I suspect” to “use torture” wasn’t intended to be italicized. Ah, well.
Glen Raphael said:
Nope. It just seems like the correct action in fictional scenarios of that type due in large part to a few quirks regarding what makes for a good story. For instance in 24 torture “works” because the plot has been carefully constructed so that there is only ONE good lead at any given time and only one person in a position to follow that lead and time is of the essence. Jack can’t trust or communicate with other people and his prior leads magically become unavailable once they’ve given up their one piece of intel that leads him to the next clue.
In the real world, after you interrogate somebody you can usually go back and ask them more followup questions later, so leads tend to accumulate. In 24, somebody gives you their one clue and then they get shot by the bad guys or they escape or they commit suicide or they get taken into custody by somebody you don’t trust or can’t safely talk to – so you might as well get that one bit of data they MUST have by any means since you’ll never see them again.
In real-world scenarios, you’re rarely SURE the bad guy knows something specific worth torturing for. Under torture you’re certain to get lots of false leads; following up all those leads uses up valuable resources and makes you less likely to solve whatever problem you’re trying to solve. Being *nice* to the bad guy in practice turns out to be at least AS good at getting the precise true info you know you want right now, is much BETTER at getting more useful true info in the future, and is incredibly better at not causing you to start following wild goose chases.
MugaSofer said:
Ah, right, sorry.
I meant a scenario where we’ve already got the guy bang to rights, and we just need them to tell us where the bomb is before it goes off. (Or where the kidnapped children are, or whatever.)
Is that not the standard ticking-time-bomb scenario?
Because you’re right, using torture to get confessions or leads is just going to lead to people making things up, which is only “useful” if you’re a terrible person who wants to meet their quota.
Lambert said:
Consider the converses of these thought experiments. One reduces a perfect life to mundanity, or deprives 3^^^3 people of youtube videos. The other saves someone from torture (or, equivalently modify to saving from some awful disease?) or casts specks of dust into 3^^^3 eyes.
I think I’m typical-minding pretty hard right now. Do most people consider reducing ‘sublime’ to mundane to be equivalent to torture?
heelbearcub said:
Adding a negative stimulus and removing a positive stimulus are not equivalent in psychological impact. I believe this is fairly well established.
Nor are removing a negative stimulus and adding a positive one.
Perhaps they “should” be in a perfectly logical world, but it’s not how our minds actually work.
In addition, I think there is probably an argument to be made that, outside a relatively narrow band of outcomes, there is diminished ability to actually discriminate between outcomes. There are diminishing returns past a certain point of “awesome” or “awful”, and, even more, almost all utility gains for the typical person come between “unlivably awful” and “normal”.
Caio Camargo said:
This struck me as a particular blind spot of the rationalist approach, to focus on communication as exchange of propositions, minimizing the importance of the emotional impact. A lot of the time, the propositional content is secondary to the emotional impact. The Phil Robertson example Scott discussed in his essay was one that at the very least raised an eyebrow.
Lambert said:
We should employ an average Joe to look for blind spots such as these. 😉
PLC said:
Yes, I think Scott misread the context of Robertson’s comment, or at least certainly stripped it out of context to go off on a thought-experiment tangent. In the moment, Robertson wasn’t doing philosophy.
Emotional impact is not an objective in itself, but he was using it in the service of some other goal. I think he was attempting to call out what he sees as atheist rationalization and self-delusion. It’s an extended form of “really!?! You believe that!?!”
Caio Camargo said:
“Emotional impact is not an objective in itself”
Disagree completely. I can’t speak with certainty about Phil Robertson in that moment, and there is such a thing as mixed motives, but it’s very clear to me that many utterances (e.g. most fiction), and many propositional utterances specifically, exist principally for their emotional impact. Some are blatant, like, say, any given headline on Salon or Town Hall. A sly communicator can be very good at dressing up something that’s meant to be hurtful (an image of his opponent’s family being raped and murdered, say) as an argument. That it’s not verifiable is precisely the point – if you’re smart enough, you create plausible deniability. But it’s quite consonant with what we know about human psychology.
Commentor 24601 said:
This is an extremely good argument.
The one thing I wonder about both Torture v Dust Specks and Sublimity v YouTube, though, is that, while it’s sort of the point, aggregating the qualia of pain or pleasure is just really not very intuitive. And more to the point, 3^^^3 is just showing off, because “a million” is just as suitable for the purpose of being a distractingly big number and requires much less explanation.
Why not use economic utility as an example instead? The utility of a million people finding a penny on the sidewalk versus one person getting a $10K check in the mail. $0.01 is 1/25th of the value of a gumball and would insult a panhandler, whereas you can get a decent car or live comfortably for a few months with $10,000. Is our naive arithmetic approach valid or is the shape of the curve more complex?
Two nice bonuses being that it is, in fact, testable and shouldn’t enrage or confuse anyone.
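[For what it’s worth, this comparison really can be computed. A minimal Python sketch, where the logarithmic curve and the $20,000 wealth baseline are purely illustrative assumptions, not anything from the thread:]

```python
import math

def log_utility(dollars, baseline=20_000):
    # Assumed concave utility-of-money curve: the log-gain over an
    # illustrative $20,000 of existing wealth. Not a claim about real people.
    return math.log(baseline + dollars) - math.log(baseline)

# A million people each find a penny vs. one person gets a $10,000 check.
pennies_linear = 1_000_000 * 0.01   # naive linear aggregation: $10,000
check_linear = 10_000.0             # exactly ties with the pennies

pennies_log = 1_000_000 * log_utility(0.01)
check_log = log_utility(10_000)
# Under the concave curve the options no longer tie: each penny sits on the
# near-linear part of the log, so the distributed pennies come out ahead.
print(pennies_linear == check_linear)  # True
print(pennies_log > check_log)         # True
```

[Interestingly, simple concavity in money favors the pennies here, since a single penny barely bends the curve at all; getting the opposite, torture-side intuition out of the math would take something stronger than ordinary diminishing returns.]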
heelbearcub said:
Well, one reason is that all of those pennies actually increase the monetary supply. They are addictive in the way the life experiences of different individuals are not.
heelbearcub said:
“Additive” not “Addictive”
Baby Beluga said:
Would a million really be enough? I think for torture vs. dust specks, it’s too small.
Commentor 24601 said:
Torture v Bee Stings then. It’s hard to argue that being stung ten hundred thousand times isn’t torturous, but a single bee sting is something we expect even small children to walk off after a minute. So one guy stung to death versus more people than you’ll ever meet in your life stung once (…assuming none of those million people has an allergy to bee venom, naturally).
The dust specks have their own problems btw, because while it’s supposed to be the epsilon of disutility they are in fact so minor that a human is really not capable of noticing them. I remember that there was a lot of nitpicking about the dust specks on LW for exactly that reason, they don’t represent even a minor harm.
Lambert said:
Now we’re getting to the point where 1 bee sting*1000000 events is possible to experience. That’s a sting or two per hour.
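[Lambert’s figure roughly checks out; a quick back-of-the-envelope in Python, where the 70-year lifespan is an assumption of mine:]

```python
# Rough check of "a sting or two per hour": a million stings spread
# evenly over one human lifetime (assuming ~70 years, all hours counted).
hours_in_lifetime = 70 * 365 * 24            # ~613,200 hours
stings_per_hour = 1_000_000 / hours_in_lifetime
print(round(stings_per_hour, 1))             # ~1.6
```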
wfenza said:
Disagree. Torture vs. Dust Specks and Sublime Life vs. Youtube are different questions. Most people’s ethical intuitions don’t consider losses and gains as equal. Handwaving this away by saying that the question “presupposes utilitarianism, which traditionally treats the two as interchangeable” misses the fact that our ethical intuitions don’t treat the two as interchangeable, and the thought experiment is meant to invoke our ethical intuitions. Both situations are worth considering, but they are not functionally equivalent.
YmcY said:
1) Is there a law of conservation of thought experiments? It seems like it might make most sense to ask some people to consider Dust/Torture, and direct others to Alicorn’s alternative?
2) Since we’re both rule utilitarians, I’m curious about:
Surely in nearly any conceivable real situation in which you are actually assessing torture – who to vote for, whether to join movements dedicated to stopping it, etc – you will be employing far-mode, hard-headed, utilitarian calculus, and won’t be making basic mistakes about torture’s moral status due to a simple cognitive bias?
Or by night are you secretly Jack Bauer? 🙂
3) Further to the last point – so impossible, boundary condition thought experiments create an appreciable availability heuristic danger to our moral calculus, but our sexual fantasies apparently do not. …….really?
ozymandias said:
People who sexually fantasize about rape are more likely to commit rape. However, changing your thought-experiment habits is much easier than eliminating sexual preferences for most people.
YmcY said:
Fair.
heelbearcub said:
“People who sexually fantasize about rape are more likely to commit rape”
Isn’t the implied arrow of causality in this sentence backward? “People who commit rape are far more likely to fantasize about it” seems likely to be true and causal. I’m not actually sure the inverse is true, and I think I’ve seen references to research (somewhere?) that points at the idea that indulging the fantasy (by viewing rape porn, for example) lowers the risk of actually committing rape. But I’m really not sure about that.
osberend said:
@heelbearcub: Wouldn’t it actually (mostly) be common causation, i.e. “people who think rape is nifty are more likely both to fantasize about it and to do it?”
heelbearcub said:
@osberend: “nifty” doesn’t seem to be specific enough.
First off, lots of people have fantasies of being raped. I’m assuming that’s not what you meant, but …
There are many people who fantasize about things that would be rape if they really happened. Does that qualify as thinking “rape is nifty”? If so, then, I don’t think those people are substantially more likely to commit rape, as I think almost everyone has had some fantasy like that.
But if that isn’t what you meant, then “nifty” just boils down to something like “people who want to actually commit rape”, which seems tautological.
heelbearcub said:
Up thread, I put in some thoughts on the dust-speck thought experiment and a link to similar arguments I made at SSC (basically, I think that dust specks vs. torture has a fundamental flaw in it), but I want to also address the substantive point of this post.
I agree that it is a property of the thought experiment that it contains non-condemnatory mention of torture. I also agree that some humans, when viewing this property, will cease to engage in rational conversation/thinking on the matter.
When attempting to have a conversation with people who react in this manner, this, then, is a bad line of thought experiment to utilize. I’d argue this doesn’t make the thought experiment intrinsically bad, just situationally bad.
As was already pointed out: negative outcomes, loss of positive outcomes and addition of positive outcomes all have different implications in terms of which intuitive biases they trip. So moral thought experiments should not seek to treat each of these possibilities as essentially equivalent.
Questions (perhaps rhetorical): should those who can be made irrational by the introduction of a subject be able to dictate the terms of debate simply because of their own failure modes? How would one avoid this being used as a club to beat people with, if one applies it universally?
Conversely, if one is attempting to persuade a wide audience, should you take into account how likely your argument is to trigger an unwanted failure mode in members of your target audience?
code16 said:
*Finds these really interesting points, especially the getting-used-to-it one*
Meanwhile, I’m confused about Sublimity vs. YouTube. Are lw utilitarians supposed to conclude YouTube?
Because I conclude Sublimity (is the one you should choose) and that Dust Specks is too, and this works out perfectly consistently – if I think that Sublimity is more ‘severe’ than YouTube and Torture is more ‘severe’ than Dust Specks (which I do), then I pick the less ‘severe’ bad thing and the more ‘severe’ good thing.
So for lw utilitarians, does it work the other way, or am I mixing something up?
Tuesday said:
I don’t think you’re inconsistent, but I think you are picking opposite to the standard lw utilitarian (if I am predicting that accurately – I’m neither lw nor utilitarian). I intuitively pick YouTube and Dust Specks, and it took me a few minutes to realize that those are inconsistent conclusions by utilitarianism. But, I’m not a utilitarian so that doesn’t necessarily bug me.
I think it’s actually a lot more useful to look at BOTH questions than just to look at one or the other — which makes me disagree with half of Ozy’s point. Putting these experiments next to each other is actually giving me useful thoughts about my own moral principles in a way that Torture vs. Dust Specks alone never did, and in a way that Sublimity vs. YouTube alone presumably wouldn’t have either.
Ghatanathoah said:
The idea is that under utilitarianism, Dust Specks for all those people adds up to something more severely awful than torture for just one person, and that watching a Youtube video for all those people adds up to something that is more severely good than a single Sublime life. So you should choose to Torture people to prevent Dust Specks, and you should choose to prevent a Sublime life from happening in order to get all those people Youtube videos.
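Ghatanathoah’s summary is just linear aggregation. A minimal sketch (the disutility numbers are placeholders I made up, and n is vastly smaller than 3^^^3 but already sufficient):

```python
def total_disutility(per_person, n_people):
    # Linear aggregation: the assumption the thought experiment isolates.
    return per_person * n_people

SPECK = 1e-9      # hypothetical disutility of one dust speck
TORTURE = 1e6     # hypothetical disutility of 50 years of torture

n = 10 ** 20      # far fewer people than 3^^^3, yet already enough
print(total_disutility(SPECK, n) > TORTURE)  # True: under linearity, choose torture
```

Any positive epsilon per person, multiplied by a large enough n, eventually dominates any fixed harm; that is the whole trick.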
Fisher said:
In the real world, this is the kind of thinking that the EPA uses to set permissible exposure limits for certain chemicals. They don’t believe in the concept of a threshold either.
osberend said:
@Fisher: Except that (unless I misunderstand EPA policy) the EPA is doing this with probabilities of the same event, which do sum linearly (if events are independent) to generate an expectation. Disagreeing with this is disagreeing with basic probability theory. Disagreeing with linear aggregation of utility is far more justifiable.
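osberend’s distinction rests on linearity of expectation: for n exposures each carrying a per-exposure harm probability p, the expected number of harms is exactly n·p, with no threshold. A sketch (the numbers are illustrative, not actual EPA figures):

```python
def expected_harms(p, n):
    # Linearity of expectation: expected counts add even without
    # independence; independence only matters for variance claims.
    return p * n

# e.g. a one-in-a-million per-exposure risk across a million exposures
print(expected_harms(1e-6, 1_000_000))  # ~1 expected adverse event
```

This is why disagreeing with the EPA’s summation really would mean disagreeing with probability theory, whereas linear aggregation of *utility* is a separate, contestable assumption.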
leave me alone i don't believe in blogging said:
I am confused for the same reason. My moral foundations remain unshaken.
Autolykos said:
I think that causing strong emotional reactions is at least sometimes a feature, not a bug of the “torture vs dust specks” experiment. It makes System 1 scream “TORTURE IS EVIL EVIL EVIL NEVER EVER ACCEPTABLE!” at you while System 2 goes “Wait. I may find a technicality to get me out of this one, but if utility is additive* in any way there must be some number of repetitions of a minor, harmless annoyance that equates to torture. And it would be highly unlikely that this exceeds Graham’s Number.”
And that was really effective at making me notice my confusion, in a way “sublimity vs youtube” would have never done. It may not be the right thought experiment for all uses, but it definitely has its place when your mind is getting stuck and you need something to drag it out again (kinda like a good Zen koan does, but with more kicking and screaming).
*and what happens to people I can’t know of does not affect my utility function; let’s call this property “utility is local”. Otherwise, you could weasel your way out using geometric series.
Jack V said:
I think both are right. I think there are reasons to go to extreme examples to make it clear that HOWEVER BAD something is, you can still make moral reasoning about it. And there are reasons people’s intuitions are different for negative experiences vs positive experiences. So I think there are genuine reasons TORTURE VS DUST SPECKS is a useful argument in SOME situations.
But I think “I think a lot of torture vs. dust specks arguers aren’t really interested in the paradoxes of utility aggregation. They’re interested in signaling that they are hard-headed people who bite bullets and come to counterintuitive ethical conclusions” is really really right, and I think it’s an ongoing problem that people seeking to talk about rationalism go too far in fighting against ingrained moral assumptions, by choosing extremely provocative and often hurtful examples ALL THE TIME even when a less provocative example would get a point across more clearly.
There’s often a good individual reason — I know Scott, Eliezer, other people (probably me, maybe you) have often said something that seemed gratuitous and then when they explained, I saw they had a good reason for thinking it was useful to use that example. And I personally don’t care — in rationalist spaces I usually slip into a “ignore the content, argue logically” mode and ignore that the examples would be personal and hurtful for many people. But it just seems to happen way too often, and I think of many rationalist spaces as “somewhere people talk about rape/torture/infanticide all the time and expect people not to find that emotional” which is unfortunately hurtful and offputting to lots of people.
fubarobfusco said:
Thank you for posting this.
There do seem to be a bunch of elements of the LW memesphere that seem to amplify bias rather than working to diminish it.
The useful thing about “sublimity vs. YouTube” or “torture vs. dust specks” is the aggregation of utility across a hyper-astronomically large number of experiencers. It isn’t to get into political arguments about possible justifications for real-world torture, any more than it is to get into social arguments about whether YouTube is good for the world.
As such, any elements of the thought-experiment which distract readers from the aggregation question by sidetracking them into sociopolitical questions are bad for the abstract discussion.
Road to Servitude said:
Sometimes ethics thought experiments are odd; like the one “would you save one child, your own, from a fire; or two of your neighbour’s children.” The fact that anyone would even momentarily contemplate anything other than option one suggests to me that people are applying some sort of perverse over-calculative logic.
How could anyone be so heartless as to save even ten people less close to oneself, when one could save a family member instead?
I don’t know if this is a sign of a current moral malaise in recent historical or cultural contexts, or if it is just human nature that some people would apply such a crude, callous Gradgrinding ethic to life.
It reminds me of when people ask nonsense questions about “which mass atrocity was worse? Well, this one killed 10 million, this one killed 50 million…”
Urgh… I won’t say “I weep for humanity,” as so-called “Humanity” (as we all can see) doesn’t exist. But I weep for the Homo Sapiens individuals who don’t seem to grasp that it’s not about numbers…
And indeed, for any individuals who end up falling prey to this vulgar-Benthamite calculus.
ozymandias said:
But… those two children are someone else’s children! You can imagine the grief you’d feel at your child burning to death; how does that grief not lead you to go “gosh, my neighbor would also be very sad at their children burning to death”? Is your morality literally just “I care about me and mine and fuck other people”? That’s awful (and not really even morality at all).
Jeffrey Austen Gandee said:
Are you saying that morality can’t possibly involve weighting self-interests above the interests of others?
I don’t have children, but I do have a wife. If I had to choose between saving her, and saving an entire family that lives next door, I wouldn’t even hesitate to save my wife. I agree with you that in this case, I do care about “me and mine” more than I care about other people, but my morality cannot be summed up by “fuck other people” either. If I could save the other family without losing my wife, I would do so.
I think almost everyone weighs their own self interest above the greater interests of others almost every day.
ozymandias said:
Yes, and everyone behaves in an immoral fashion almost every day.
Jeffrey Austen Gandee said:
Fair enough.
You are more confident in your morality than I am in mine, I guess. It must be difficult to hold onto a morality that is so demanding.
For me, many of the comments here, as well as extremist thought experiments, add to my doubts and make me question what it really means to be a moral human, with extra emphasis on “human.” I’d rather hold onto my humanity, including what may seem like irrational moral intuitions, than surrender it to an unyielding morality.
You’re one of my favorite bloggers to read, BTW. I’ve been meaning to thank you. Since discovering this blog and a couple of others, I’ve been working to live up to your standard of charity and civility during discussions and disagreements. I also work alone every day, sometimes doing tedious physical labor. I often need something to ponder while I work, and your blog never fails. Thanks so much.
Fisher said:
Are you denying that you have a particular duty to those people you have made a commitment towards?
stillnotking said:
A credible commitment to prioritize your own family’s welfare over someone else’s is a big part of why they’re your family. I know my reaction to someone who claimed they’d let their own child die in order to save two strangers would be to mark them off the list of potential spouses. (Actually, my reaction would be “Bullshit”, but on the off chance they actually did mean it…)
The Smoke said:
One could just as well accuse utilitarians of having no morality, just principles.
osberend said:
Utilitarianism is fundamentally inhuman. As such, some variant of it may well be the optimal approach to moral reasoning for an all-powerful, hyper-intelligent, hyper-rational AI (which I suspect accounts for a large portion of its popularity among LW types), though I am not totally convinced even of that. But it is appallingly wrong as a moral code for human beings.
Ginkgo said:
“Yes, and everyone behaves in an immoral fashion almost every day.”
Putting a family member on the level of a stranger, which is what you seem to be advocating, is pretty immoral. It’s a dereliction of your duty to that family member.
blacktrance said:
I have no difficulty imagining that my neighbor would be sad, and I don’t want that, but that’s less bad than what I’d feel from losing someone close to me. It’s like a modified version of the trolley problem in which you can sacrifice yourself to save five people – five people dying is bad, but that’s nowhere near as bad as dying yourself.
Psmith said:
Road has a point, IMO. Bernard Williams seems to argue for something very like this, and I think he’s right. For example, from the Stanford Encyclopedia of Philosophy page on Williams:
“Another inappropriate commitment arising from the obligation-out, obligation-in principle, famously spelled out at 1981: 18, is the agent’s commitment to a “thought too many”. If an agent is in a situation where he has to choose which of two people to rescue from some catastrophe, and chooses the one of the two people who is his wife, then “it might have been hoped by some people (for instance, by his wife) that his motivating thought, fully spelled out, would be the thought that it was his wife, not that it was his wife and that in situations of this kind it is permissible to save one’s wife.” The morality system, Williams is suggesting, makes nonsense of the agent’s action in rescuing his wife: its insistence on generality obscures the particular way in which this action is really justified for the agent. Its real justification has nothing to do with the impersonal and impartial standards of morality, and everything to do with the place in the agent’s life of the person he chooses to rescue. For Williams, the standard of “what makes life meaningful” is always deeper and more genuinely explanatory than the canon of moral obligation; the point is central, and we shall come back to it below in sections 3 and 4.”
Held In Escrow said:
See, I agree with Scott’s side of the argument, because the entire purpose here is to induce strong emotions! You push an ethical system to its extremes and see where it runs aground against our innate moral compasses; what makes us angry and frothy so much that we either have to revisit the basics of the ethical system or reexamine our moral intuitions!
The Repugnant Conclusion (in this case shown by Torture/Dust Specks) vs Youtube/Sublimity serve vastly different purposes. You really want to do them in the order of YT/S first and then T/DS. Asking YT/S is more of a logical conclusion to the idea of utilitarianism, the point is to just see what happens when you apply the idea on a larger scale. T/DS on the other hand, is there to ask us if we can live with that conclusion.
It’s similar to asking someone the basic trolley problem and then expanding it to the organ harvesting variant; you’re basically going “are you sure about your answer?” after showing how the easy answer leads to what we see as terrible conclusions. YT/S is really just asking us if we understand the basis of utilitarianism, so that we can draw those out to see the horrors underneath.
Fisher said:
So what is the limit to your recommendation of “don’t unnecessarily mindkill/make irrational your audience?” This would seem to imply that pablum is the ne plus ultra of persuasion. Which seems incorrect on a personal intuitive level, but I have no research to back that up. Then again, there are the advertising and reality television industries, which seem to partially agree with you.
pocketjacks said:
The incorrect intuition here I think is that relationships are linear. Humans have a linear bias because we’re used to dealing with small quantities and at a small enough scale, any function looks linear. But few relationships in nature truly are.
The needle-pricks vs. torture question seems to implicitly ask, “what’s worse, one person being outright tortured, which is like taking a million pricks of a needle, or a million people taking a prick of a needle?” with the assumption that the correct answer “should” be that they’re equal, and if that’s not our answer, then we’ve disproved… something somehow.
What if the relationship is not that linear? “Fine, a prick of a needle that’s a thousandth of a torture session, not a millionth”, you may say. What if that still doesn’t come close? What if the correct answer is that a million people have to experience a pain one-tenth that of a torture session for it to be actually equal? Or one-fifth?
We can still apply utilitarian logic to this nonlinear framework – take the “integral” of the function, minimize the total “area” of pain. And I don’t think that the shape of this nonlinearity is necessarily just a rhetorical question. In the future, we might have advanced enough in brain-scanning technology and social science study design that we could actually come close to approximating the answer.
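pocketjacks’ nonlinear alternative can be sketched by making disutility a convex function of pain intensity (the exponent k here is a free parameter I chose for illustration, not something anyone has measured):

```python
def disutility(intensity, k=2.0):
    # Convex map from pain intensity to moral disutility;
    # k = 1 recovers the linear case the thought experiment assumes.
    return intensity ** k

torture = disutility(1.0)               # one torture-grade pain
pricks = 1_000_000 * disutility(1e-6)   # a million needle pricks

# Linear (k = 1): the two sides tie exactly.
# Convex (k = 2): the pricks total 1e-6, nowhere near one torture.
print(pricks < torture)  # True
```

Utilitarian bookkeeping still works in this framework (sum the transformed disutilities), but the “million pricks equals one torture” equivalence only holds at k = 1.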
Totient said:
I have very mixed feelings about this.
For one, my instinctive reaction to the two thought experiments is different – with sublime vs. Youtube, I’m fine with linearly aggregating utility. With torture vs. dustspecks my emotions are screaming “WHOA, WHOA! Something has gone horribly wrong with utilitarianism, and you need to re-evaluate your ethics.” I’m just not that likely to really think about the question unless there’s some kind of internal emotional battle.
I agree that practicing being okay with torture is a somewhat alarming skill to try to develop. And I’m willing to bet that a lot of people are, in fact, just interested in signaling that they are hard-headed people who bite bullets.
But for me, the overall effect of thinking about torture vs. dustspecks was to come to a new-found appreciation for just how important things like Schelling fences are. Maybe that wasn’t the intent of the thought experiment. But I don’t think I would have really considered it if the thought experiment wasn’t extreme.
Illuminati Initiate said:
I suspect some people would give different answers on torture vs [some tiny annoyance] and on sublimity vs youtube videos. The super-badness of the situation might be necessary to get people to see if they actually have a problem with the idea.
Speaking of this, I’m surprised I’ve never seen anyone try to analyze these thought experiments by applying them within an individual (probably someone has, but I’ve never seen it). Would you rather stub your toe [many, many] times throughout your life or be tortured once for two days, when the total amount of pain from toe stubbing would actually be higher? I think most people would choose the toe stubbing. This doesn’t mean anything relevant if they are preference utilitarians, but it does if they are hedonic utilitarians (or claim to be).
blacktrance said:
Stubbing one’s toe so many times that it would outweigh being tortured for two days may be difficult to imagine. Maybe it’d be easier to present a choice between definitely stubbing your toe and a 1 in 3^^^3 chance of being tortured – but a probability of 1 in 3^^^3 is hard to understand too, so maybe there’s no way to conceptualize the problem that gets around scope insensitivity.
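For a sense of why 1 in 3^^^3 resists conceptualization: the carets are Knuth’s up-arrow notation, and even the much smaller 3^^3 is already astronomical. A sketch (only the smallest cases are actually computable):

```python
def up_arrow(a, n, b):
    # Knuth's up-arrow notation: a ↑^n b.
    # n = 1 is exponentiation; each extra arrow iterates the previous level.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
```

3^^^3 is then 3^^(3^^3): a power tower of 3s roughly 7.6 trillion levels tall, far beyond anything a finite machine (or intuition) can evaluate, which is blacktrance’s point about the probability.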
MugaSofer said:
How about being tortured for five minutes vs. a lifetime of toe-stubbings? That seems comparable to really minor surgery without anesthetic.
Road to Servitude said:
Sorry for late reply to Ozymandias. You said: “But… those two children are someone else’s children! You can imagine the grief you’d feel at your child burning to death; how does that grief not lead you to go “gosh, my neighbor would also be very sad at their children burning to death”? Is your morality literally just “I care about me and mine and fuck other people”? That’s awful (and not really even morality at all).”
It is difficult to know how to respond to this comment, as (unless I am seriously misreading you, as is definitely possible), you seem to be assuming that I am advocating an “I’ve got mine” or “I’m alright, Jack” kind of mentality. I am not sure why you would think that, although this may be my misreading of what you said.
I’m going to focus first on the point I made (I am not sure how clear it is to everyone that it is my point) which is that people have special loyalties to those in their in-group. These in-group loyalties are not sacrosanct. For example, the mafia take in-group loyalties to an extreme; the mafia is synonymous with an exaggerated and counterfeit loyalty; the chauvinism which is precisely the “I’ve got mine” mentality.
However, this does not mean that everyone has an equal claim on everyone else, like in Huxley’s Brave New World. Most people love their family more than they love people they have never met. No-one buys birthday presents for strangers; no one goes to the funerals of strangers, except in exceptional cases, such as to support others, or when well beloved public figures die.
As for the question you raised, you can turn it the other way around. Would your neighbour, the parents of those children, expect you to sacrifice your own child in the name of saving theirs? If you did do that, how do you think they would treat you? Would they thank you?
The point here is that it is not about one person being more “valuable” than another. Everyone is born with the same basic intrinsic value. It is a question of who has a greater claim to protection, loyalty, love and all the other aspects of human bonds. I am not sure why selfishness would have to come in here; there is nothing more selfish than an abstract humanist ethic. The greatest “Lovers of Humanity” in history have been the greatest torturers and butchers, because they loved Humanity with an everlasting and invincible love, but they despised human beings.
ozymandias said:
According to Steven Pinker’s The Better Angels of Our Nature, the five atrocities with the largest death tolls in history, adjusted for population, are the An Lushan revolt, the Mongol conquests, the Mid-East slave trade, the fall of the Ming Dynasty, and the fall of Rome. I will grant you the fall of the Ming Dynasty (because Mandate of Heaven), but the other four seem highly unlikely to be motivated by a love for humanity, as opposed to a desire for wealth and power.
I have a comparative advantage in helping my family members and friends (for instance, by buying them birthday presents). However, if I am equally capable of saving a family member and two strangers, I clearly don’t have a comparative advantage in helping the family member.
Everyone’s same basic intrinsic value seems really fucking useless if it can be easily outweighed by “but this human shares some of my DNA!”
Road to Servitude said:
I don’t think it is easy for us both to find agreement on this topic, so I think it is more respectful for me to leave it here, with one last reflection (focusing on one aspect of what you said).
All mass atrocities are essentially motivated by a desire to promote “The Greater Good,” i.e. some encompassing ideal of a better “humanity” or “human nature” whose supposed “interest” transcends the welfare of real people/individuals.
And I would think that all of the events you mentioned were, in this regard, strongly analogous to the arguably more explicitly humanistic atrocities of the British Empire, French Empire, Soviet Union, etc, right down to the “Global Village/IntCom” of neocons/liberal interventionists.
Even Nazism was a form of humanism (i.e. an Aryan human instead of white middle class Victorian gentleman, etc). All humanisms are based on the idea that “humanity” (sic) is important, and individual welfare, the welfare of real and existent people, is (superficially) merely subordinate to this but (ultimately) entirely inconsequential.
I suspect you will not agree with me on this, but it would be cheap of me to not affirm this last point. I’m sorry for not elaborating on the other matter you raised, but I think I am risking making a tl;dr here. The historical perspective is important though, as you noted.
Road to Servitude said:
And briefly, to echo Jeffrey Austen Gandee (as I understand him), this is not intended as a criticism of your Things of Things blog as a whole. Raising troubling questions, as you do, is an absolutely fundamental part of the “examined life,” and splashing about on the comfortable surface of the water can be a dangerous temptation. I find it good (if painful) to think about less cosy things, and your blog is a part of that. (I’ve put that kind of clumsily, but I guess the basic sense is intuitive).
Pingback: Link Archive 5/7/15 – 6/16/15 » Death Is Bad