[content warning: torture used as a hypothetical without details]

An ethical injunction is basically any rule one adopts of the form “don’t do X, even when it’s a really good idea to do X.” Don’t deceive yourself, even when it’s a really good idea to deceive yourself. Don’t torture people, even when it’s a really good idea to torture people. Don’t kill children, even when it’s a really good idea to kill children. Don’t break confidentiality, even when it’s a really good idea to break confidentiality.

A perfectly rational being, of course, would have no need for ethical injunctions. But we’re monkeys with pretensions. We’re self-interested and prone to rationalization. If we say “it’s okay to torture people in very extreme cases that are never going to happen,” we can talk ourselves into thinking that this is a very extreme case, even though the actual reason we want to torture the guy is that he’s a horrible person and we want to see him suffer, and the next thing we know we’re the U.S. government.

An ethical injunction is a good thing to have in any case where it’s more likely that you’ve made a mistake about whether the thing is a good idea than that it actually is a good idea. For instance, torture basically doesn’t work, so there’s really no practical reason to torture anyone; therefore, it’s off-limits.

The human brain implements a lot of strategies for thinking, a lot of cognitive algorithms. There are two ways these algorithms can be implemented. Sometimes, you know the algorithm and you are deliberately choosing to execute it: for instance, you might look at the problem 99 + 16 = ? and think “take one from the 16 and add it to the 99… that’s 100 and 15… the answer’s 115”, in which case you’re using an algorithm for how to do mental math.
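(Just to make the contrast concrete: here is that mental-math trick written out explicitly, as a tiny Python sketch. The function name and the code are made up for this illustration; the point is only to show what a deliberately executed algorithm looks like, as opposed to one that runs without your noticing.)

```python
def add_by_rounding(a, b):
    """Add two numbers the way the mental trick works:
    move just enough from b to round a up to the next
    multiple of ten, then add what's left of b."""
    shift = (10 - a % 10) % 10   # e.g. 99 needs 1 to reach 100
    return (a + shift) + (b - shift)

print(add_by_rounding(99, 16))   # 115, same as 99 + 16
```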

But not every algorithm the brain uses works that way. For instance, most people’s brains have an algorithm for choosing a partner; it probably evolved because, in the past, that particular algorithm maximized your inclusive genetic fitness. However, you don’t consciously think “hmmm, that person seems like a good person to maximize my inclusive genetic fitness with, following these rules I’ve figured out about how to maximize my inclusive genetic fitness.” You think “sexy!”

Thus we say: “this is what the algorithm for maximizing inclusive genetic fitness feels like from the inside.”

Morality, like choosing a partner, is often intuitive: for most people, the conscious reasoning process is subordinate to the instinctive feeling of “that’s wrong!” or “that’s hot!” So its algorithms are probably similarly unconscious.

Most people have something called sacred values: things they refuse to compromise on, no matter what. For instance, one person might hold life as a sacred value, refusing to take life even to prevent tremendous suffering. Another person might hold autonomy as a sacred value, refusing to violate another person’s bodily autonomy even to save their life or the lives of others. This is tremendously vexing to consequentialists. We are like “okay, but you have to admit that in theory it is possible that a million billion people could be saved by that person having the tiniest pinprick on their finger, and in that case would we be justified in violating their bodily autonomy?” And then that person is like “no” and in many cases accuses us of being in favor of violating people’s autonomy.

But a sacred value is how an ethical injunction feels from the inside. It doesn’t feel like a calm, level-headed endorsement of the statement “it is more likely that you made a mistake about torture being right than it is that torture is right.” It feels like torture being unthinkable. Unimaginable. Like getting morally outraged at the thought that you might torture someone.

If you think about the benefit you might get from torturing someone, then you might be tempted to torture them. So you feel repulsed at the idea of even contemplating a situation in which it would be beneficial. You might get angry that someone even brought it up. How dare they? Don’t they know torture is wrong?