[content warning: torture used as a hypothetical without details]
Ethical injunctions are basically any rule one adopts of the form “don’t do X, even when it’s a really good idea to do X.” Don’t deceive yourself, even when it’s a really good idea to deceive yourself. Don’t torture people, even when it’s a really good idea to torture people. Don’t kill children, even when it’s a really good idea to kill children. Don’t break confidentiality, even when it’s a really good idea to break confidentiality.
A perfectly rational being, of course, would have no need for ethical injunctions. But we’re monkeys with pretensions. We’re self-interested and prone to rationalization. If we say “it’s okay to torture people in very extreme cases that are never going to happen”, then you can talk yourself into thinking that this is a very extreme case, even though the actual reason you want to torture the guy is that he’s a horrible person and you want to see him suffer, and next thing you know you’re the U.S. government.
An ethical injunction is a good thing to have in any case where it’s more likely that you’ve made a mistake about whether this thing is a good idea than that it’s an actually good idea. For instance, torture basically doesn’t work, so there’s really no practical reason to torture anyone; therefore, it’s off-limits.
The human brain implements a lot of strategies for thinking, a lot of cognitive algorithms. There are two ways these algorithms can be implemented. Sometimes, you know the algorithm and you are deliberately choosing to execute it: for instance, you might look at the problem 99 + 16 = ? and think “take one from the 16 and add it to the 99… that’s 100 and 15… the answer’s 115”, in which case you’re using an algorithm for how to do mental math.
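As a side note, that carrying trick can even be written out as code. Here is a minimal Python sketch, purely as an illustration (the function name `mental_add` and its structure are my own, not anything from the post):

```python
def mental_add(a, b):
    """Add a and b the way the mental-math trick describes:
    move just enough from b to round a up to the next hundred."""
    complement = 100 - (a % 100)  # e.g. 99 needs 1 more to reach 100
    return (a + complement) + (b - complement)

# 99 + 16: take 1 from the 16, add it to the 99 -> 100 + 15 = 115
print(mental_add(99, 16))  # prints 115
```

The point, of course, is that this is an algorithm you knowingly and deliberately execute, step by step.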
But not every algorithm the brain uses works that way. For instance, most people’s brains have an algorithm for choosing a partner; they probably evolved because in the past that particular algorithm maximized your inclusive genetic fitness. However, you don’t consciously think “hmmm, that person seems like a good person to maximize my inclusive genetic fitness with, following these rules I’ve figured out about how to maximize my inclusive genetic fitness.” You think “sexy!”
Thus we say: “this is what the algorithm for maximizing inclusive genetic fitness feels like from the inside.”
Morality, like choosing a partner, is often intuitive: for most people, the conscious reasoning process is subordinate to the instinctive feeling of “that’s wrong!” or “that’s hot!” So its algorithms are probably similarly unconscious.
Most people have something called sacred values: things they refuse to compromise on, no matter what. For instance, one person might hold life as a sacred value, refusing to take life even to prevent tremendous suffering. Another person might hold autonomy as a sacred value, refusing to violate another person’s bodily autonomy even to save their life or the lives of others. This is tremendously vexing to consequentialists. We are like “okay, but you have to admit that in theory it is possible that a million billion people could be saved by that person having the tiniest pinprick on their finger, and in that case would we be justified in violating their bodily autonomy?” And then that person is like “no” and in many cases accuses us of being in favor of violating people’s autonomy.
But a sacred value is how an ethical injunction feels from the inside. It doesn’t feel like a calm, level-headed endorsement of the statement “it is more likely that you made a mistake about torture being right than it is that torture is right.” It feels like torture being unthinkable. Unimaginable. Like getting morally outraged at the thought that you might torture someone.
If you think about the benefit you might get from torturing someone, then you might be tempted to torture them. So you feel repulsed at the idea of contemplating a situation in which it is beneficial. You might get angry because someone even brought it up. How dare they? Don’t they know torture is wrong?
I don’t have anything to contribute re the content, but I just wanted to say, Ozy, you’ve been very wise recently. My respect for you has grown tremendously over the past few months. Whatever you’re doing, personally or intellectually, please keep doing it.
Yeah, that sounds persuasive to me.
It’s like HPMOR talking about dark knowledge which experienced wizards hide because even the knowledge of it can be dangerous, except the knowledge is “in theory, there might be some situation where torture is a sensible strategy”. And then some hawkish numpty opened the box.
“If we say ‘it’s okay to torture people in very extreme cases that are never going to happen’, then you can talk yourself into thinking that this is a very extreme case, even though the actual reason you want to torture the guy is that he’s a horrible person and you want to see him suffer, and next thing you know you’re the U.S. government.”
This reminds me of a favorite quote from Terry Pratchett’s “Thud!” (2005): “Once you had a good excuse, you opened the door to bad excuses.”
I don’t think this is necessarily true. Specifically, I strongly suspect that there is a categorical difference between the thought processes of people who develop what you’re terming ethical injunctions for rule-utilitarian reasons (if X seems consequentially good, then it is more likely that I am wrong than that X is actually consequentially good, therefore I must never X) and people who sacralize values (even if X really is consequentially better than not-X, I must not do X).
As someone who finds rule utilitarianism very appealing, I still find the sacralized-value mindset pretty repulsive. I suppose you could argue that I don’t truly have ethical injunctions in the sense that you mean if I haven’t made the leap from the rule-utilitarian position to the knee-jerk emotional one, but I see little to suggest that people who use the knee-jerk emotional one ever started at the rule-utilitarian one.
Plus, if you look at the arguments and efforts people make to inculcate ethical injunctions in others, they virtually never begin with the rule-utilitarian position. Whether it’s religious indoctrination or the violinist argument, efforts at inculcating ethical injunctions in others always seem to rely on shady emotional manipulation and blatant use of emotionally compelling fallacies, rather than arguments about reasoning under uncertainty.
And people’s relationship choices virtually never begin with a position of maximizing genetic fitness.
Since people don’t actually value increasing genetic fitness, I’m not sure this is a point in *favour*.
I think this is a great topic, and while I agree there’s a lot of overlap between sacred values and ethical injunctions, I think there are some non-overlapping areas as well.
1) Dispassionate ethical injunctions. I try my best never to say hurtful things to my loved ones in anger, because I have learned that my judgment is very suspect in those cases. But it’s pretty dispassionate – even while I feel outraged at something or other, there’s a part of my brain saying “don’t say anything until you think about it – you can’t trust yourself.”
That’s pretty dispassionate. On the other hand, I would never hit a nun, because it’s unthinkable. They’re NUNS! All those years of Catholic school! They’re authority figures, and women, who have sacrificed for the community, who are smaller than me (so far), and they’re the most obvious face of Jesus on Earth. So many buttons pressed – I could probably hit a nun to save a human life in some extraordinary hypothetical, but it would take a lot of conscious effort, plus a prayer before I did it.
The upshot, FWIW, is that it’s easier to violate the dispassionate injunctions than the sacred ones. There are obvious advantages and disadvantages to the superior lock-in ability of the sacred injunctions, and I’m currently working to “sacred-ize” a few.
2) On the other side, there are some sacred values where the connection to any rational utilitarianism or self-interest is pretty remote. No major thoughts on those yet, except that Chesterton’s fence suggests that some of them have an unknown function, and some of them might have had a function in other environments.
One can follow a rule like “Even when torture seems like a good idea, I shouldn’t do it because I might be biased” while still acknowledging that it’s at least theoretically possible for there to be a situation in which torture would be good. So ethical injunctions don’t necessarily feel like sacred values – they can feel like not taking deceptive bets.
On the one hand, empirically, groups that seem exceptionally good at sticking to ethical injunctions – the ACLU, various strains of Christianity, Jainism – all seem characterized by a very strong insistence on sacred values, sometimes to the point of incoherence. That seems important and worth paying attention to, and it’s valuable empirical evidence that makes me update towards this theory.
On the other hand, this also seems true of ideologies that aren’t particularly good at maintaining their ethical injunctions. In fact, this may be a general rule of all ideologies, period.
On the gripping hand, statements like “torture basically doesn’t work, so there’s really no practical reason to torture anyone” make me want to scream “look at all these edge cases! Look at them! How can you possibly endorse such a sweeping statement?”
1) It’s a separate topic, but I’m pretty suspicious whenever anyone tries to solve a problem by assuring me it doesn’t exist. (“No innocent people have received the death penalty,” “No one gets late-term abortions for convenience,” “Torture never works.”) On the other hand, sometimes they’re right – I was surprised by the amount of expert support for the torture point, so the fact that I’m suspicious doesn’t mean I’m right.
2) On the gripping hand, I love Niven references.
From the outside view*, it seems plausible in this case – by which I mean it seems like torture not being very effective is the main cause of the lack of support for it, rather than a pretext offered by people who would object to torture even in the edge cases.
I’m pretty sure opposition to torture is far more widespread than opposition to wars, long prison sentences, the death penalty, or, for that matter, much concern over collateral damage (the history of attitudes, going back to Vietnam or WWII, would make an especially good case here). That means there’s a large chunk of people who think it’s sometimes acceptable to kill people for a good cause but are against torture, as well as people who accept executing criminals but don’t want to use torture to increase the accuracy of the justice system, and so on.
I can’t imagine this would be the case if torture actually, reliably got people to tell you what they knew, rather than lying – say, telling you whatever they think will stop the torture.
*disclaimer: this agrees with my inside view, and I was raised in a Blue Tribe culture, so it’s obviously not a perfect outside view.
On the other hand, I suppose an objective outside view might demand I question why torture was practiced for so long if it’s not effective. I could come up with justifications easily – torture *did* get confessions, and whichever factions were able to decide which confessions to extract had a strong incentive to claim they were accurate – but since those match my inside view, I should be suspicious of them.
An alternative just-so story that fits the facts is that torture is *occasionally* effective – enough that it’s worth it to the person extracting the confession to at least give it a try, but the overall value is so small that as soon as meta-level moral rules became popular (multiple militaries agreeing not to torture each other’s POWs, or a citizenry setting up rules for their justice system knowing they could be on the other side of it), it became clearly not worthwhile.
I suppose I’d have to look at the actual object level to have any idea whether the above is accurate, and that specific interpretation would require a relatively precise quantification of *how* effective torture is – I doubt there have been any controlled experiments on the subject?
I would be stunned if torture were ineffective in situations where compliance could be easily and swiftly measured, and the torture victim did not have a target deadline to try to outlast.
“We have your cellphone. Give us the password.” seems like it would be trivially easy to answer via torture, for the same reasons that “we know you’re a traitor, sign this confession” seems to be.
When people argue that torture never works, they generally restrict themselves to scenarios involving open ended quests for extremely difficult to verify information, where a “lie and/or say what your captors want to hear” strategy is difficult to distinguish from truth.
For what it’s worth, since I know there are people who will hate-read this and conclude that I must be saying that I love torture so much I want to marry it – no, I don’t. I just think that the rule-utilitarian perspective of “torture is totally illegal, and if you think some wild ticking-time-bomb situation has arisen, you can argue the common-law defense of necessity at your trial” gets the job done just fine. No need to convince ourselves of things that aren’t true. There are costs to cultivating that skill, too.
@Patrick, It is certainly possible to come up with situations where it is hard to believe that torture wouldn’t work. However, the empirical evidence seems to show a pretty poor track record for torture (apart from producing false confessions, which may in some circumstances be the intended outcome). That seems to suggest that there is little overlap between the hypotheticals where torture is effective and the actual situations in which the opportunity to employ it arises. In particular, your “get the password” scenarios require situations where a particular password is guarding something especially valuable; I see no evidence that such situations arise very often.
Torture never working is not actually required for “it is more likely you have made a mistake about whether torture is right than it is that torture actually is right” to be true. The latter seems perfectly accurate (e.g. the pretty depressing track record of waterboarding).
[Torture discussion hereafter]
People will do stuff to make the pain stop. Anyone who has ever had kids, or seen ’em operate, has seen the old “twist their arm till they drop/spit out whatever.”
Torture works if the only stuff they can do to make the pain stop is what you are after. “Sign this confession. Enter your PIN. Unlock your phone. Spit on this holy image. etc.”
It rarely works if there is an alternative thing they can do to make the pain stop. They’ll pick that alternative (out of spite or conviction or whatever). To the guy punching them, a lie about who their contacts are sounds just like a confession. Etc.
I take “Torture Never Works” to be a combo of three things.
1: An argument that lets the lefties sound tough: “We aren’t in favor of saving those terrorists, and thus evil by extension – no, no, we’re anti-waste, and you guys respect that mindset.”
2: A statement about the nature of reality: mostly the info you want from people is the second kind, which they can lie about without you finding out fast enough to do torture right.
3: A statement about the likely actions of the enemy: they change things up when we get our hands on one of their folks, rendering even such information as you can obtain worthless.
It might not be true, but it’s a useful flag. “Born this way”, etc.
Off topic, but
??? I find this horrifying? This is a thing?
Among kids? Most certainly
Patrick has a decent argument – “give me the password” seems intuitively like a case where torture would probably work, and the password is probably something you would often like to know. That doesn’t make torture worth it, but now I’m suspicious again of the “torture doesn’t work” argument. Hmmm, and thanks.
Oh, among kids, okay. The “spit out” made me think this was a parenting tactic (like if a kid has something non-food in their mouth). Why does a kid want another kid to spit something out?
Utilitarianism is like the assembly language of morals. And it’s running with kernel privileges. Sure, everything is ultimately built on that, but you ought to be extremely careful when tinkering with a live system at that level.
The advantage of emotional “sacred values” over more rationally considered “rule utilitarianism” is that they’re less vulnerable to rationalization. It’s harder to convince yourself that “just this once” you can break the rules.
The disadvantage is that they’re more likely to be misapplied to a situation where the rule-utilitarian justification doesn’t hold water. I think the classic example would be people opposing voluntary suicide because they hold life as a sacred value.
I can kind of see a compromise where you override a dysfunctional sacred value with another sacred value (e.g. override “life” with “freedom”) rather than replacing them with rule utilitarianism. But even that seems vulnerable: a person could just make up new sacred values on the fly to let them override any value they wanted.
It’s certainly a tough problem; I’ve seen a lot of people act stupid because of sacred values. But I’m also aware of the “valley of bad rationality”, and I think one of the major reasons for its existence is that people unlearn sacred values but fail to replace them with functional rule utilitarianism.
I can’t think of a meta-level solution to this problem. All I can say is try to be careful.
I’ve been thinking a Chesterton’s Fence approach to rules seems like it might be optimal.
The ideal for someone just starting to transition from sacred values to rule utilitarianism might be something like this: note down any sacred values you have before you even start questioning them, to make sure you don’t unintentionally discard any (I’ve definitely done this at some point, so I think it may be far too late for me to do it myself; fortunately I’m most of the way through the valley by now, although probably still missing specific ones). Follow them for the time being, and check very carefully for possible justifications or reasons for each one. As you figure out why they exist, either convert them to ethical injunctions or, in very rare cases, discard them if they’re completely outdated. If you can’t find a justification for one, it might be violable in severe edge cases where it conflicts strongly with another value, but make note of having done so and think carefully each time it comes up.
I doubt Chesterton’s Fence can *fully* cover everything – the thing that comes to mind is Yvain’s recent thoughts on how tribalism can have benefits, or at least how breaking up tribes of one specific type (e.g. religious, national) might not be a good way to stop tribal conflicts. “Tribalism exists because tribes that promoted it out-competed the rest, but if we all abolish tribalism (and defend ourselves against those that don’t) there’d be no problem” seems to me good enough that I can’t imagine consistently doing better, at least not in a rapidly advancing world where a lot of old things genuinely are becoming outdated.
The main thing is obviously “don’t look for excuses, actually be very worried you might cause horrible unintended consequences”, as in every rationalist situation, but by itself that’s not perfect.
I suppose the other thing I can think of is gradualism, and ideally some kind of experimentalism (try out legalizing homosexuality in *one* country and see if God smites it or anything), to minimize the harm of mistakes – the latter is hard to pull off with precision, although it’ll happen to some degree anyway.
@ The overriding (more a rant about normal use of sacred values than advice for LW types): That seems a lot worse than “vulnerable” to me, and it doesn’t even require inventing new ones to go horribly wrong. I don’t think inventing new ones outright is really a problem (I can’t even imagine what *inventing* a new one would be like; I’m pretty sure they’re almost always ossified beliefs handed down from previous generations); the problem is that you have no idea which one to prioritize.
The clearest example of this being a problem is “free speech” or, for that matter, literally any meta-level political principle. Basically every challenge to free speech is based on the idea of some other sacred value overriding it, even though this defeats the *entire point* of free speech. And that’s far from the only case where it’s a problem; almost every political dispute includes shouting matches over which sacred value overrides the other, with no way to resolve them.
>Why does a kid want another kid to spit something out?
Because that’s MY marble he put in his mouth, the little fucker.
Jesus Christ. I very much do not know how to manage comments in this blog.
This is exactly why deontology is so appealing to me. Utilitarianism is good in theory, but it’s not hard to justify some terrible action well enough to convince ourselves, while rules like “no, seriously, torture is right out, no, I don’t care what argument you’ve come up with, NO” are harder to subvert.
Pingback: Defending Sacred Values | Thing of Things
Pingback: 12 – Listener Feedback | The Bayesian Conspiracy
If you want a utilitarian (in form) argument for torture, then here’s one.
The only benefit of torture is that some people enjoy watching people they perceive as bad suffer.
Therefore we should ban private torture. Torture should only be conducted in public, for the gratification of those who enjoy torture. The people being tortured should be selected via phone vote by those who enjoy torture – actual guilt or innocence is irrelevant in this situation.
Because there are millions of people who enjoy torture, we require only a modest amount of utility gain for those people in exchange for the suffering of the tortured person.