In the comments of my last post on sacred values, I noticed a lot of people thinking that they should transition from using sacred values to consciously-held ethical injunctions, or that sacred values are utterly unnecessary as long as you can think “torture is wrong even when I hear a good argument that torture is right”, etc. I think that this is ridiculous. Sacred values are great! I am personally working on cultivating several myself.
Of course, sacred values get a bad rap, because a lot of people have stupid sacred values. The problem with “it is unthinkable not to make every effort to extend a person’s life, even for a few days!” is not that it’s a sacred value; it’s that “spend all available resources to extend a person’s life, even when you have a good argument that you shouldn’t” is a terrible ethical injunction. Such considerations do not apply to sensible ethical injunctions like “don’t torture people” or “don’t deceive yourself” or “if you are Ozy, a known agoraphobe, go outside when you have planned to go outside.”
(Notably, “how important is the thing” is actually unrelated to how good a sacred value is. A good sacred value is one where, if you ever conclude that violating it is a good idea, it’s more likely that you’ve made a mistake in your reasoning than that violating it is actually a good idea. Some ethical injunctions– such as me going outside even when I’m rationalizing why I shouldn’t have to– are about pretty minor things.)
Imagine the famous case of the dragon in one’s garage. In most cases, your system 1 and system 2 are aligned: you believe the propositional statement “there is no dragon in my garage” (system 2), and when you visualize your garage there isn’t any dragon in it (system 1). Sometimes this situation might get out of whack: you might believe the statement “there is a dragon in my garage”, while imagining your garage to be empty; on the other hand, you might believe the statement “there is no dragon in my garage” while on a certain level expecting that there’s a fire-breathing lizard inside it.
In the latter case, I think, you can be said to “not really believe” that there isn’t a dragon in your garage. You “really believe” when your system 1 and system 2 match up; if they’re mixed up– if you think there’s no dragon in your garage while your gut is like “RUN AWAY RUN AWAY SCARY BIG LIZARD”– you don’t really believe there’s no dragon in your garage. (The former case– system 2 says there’s a dragon, system 1 says no– is typically called “belief in belief”; the idea is that you believe that you believe there’s a dragon, but you don’t actually believe there’s a dragon.)
So, let’s say that you accept the ethical injunction ‘you should never torture anyone, even if torture seems like a good idea, because it’s more likely you’ve made a mistake than that torture is a good idea’, but you don’t treat it as a sacred value. I think you can be said to believe that you believe that ‘torture is bad’ is an ethical injunction, but you don’t really believe it– any more than the person who expects their garage to be empty really believes there’s a dragon there. If you actually believed it, your system 1 would get with the program. Now, there are certain advantages to believing you believe ethical injunctions without actually believing them: most notably, you get to signal that you’re a tough-minded consequentialist who pushes fat men in front of trolleys and tortures terrorists for the greater good.
But there are also advantages to, you know, actually believing ethical injunctions. For one thing, when you find yourself in a stressful situation, your system 1 often takes precedence over your system 2. When it’s time to open that garage door, if your system 1 is screaming “DRAGON! DRAGON! THERE’S A FIREBREATHING DRAGON AND IT’S GOING TO EAT ME!”, it is significantly more difficult to open the damn garage door. Even if you have no problem thinking “it is desirable to open the garage door, so I can get to my car, because dragons don’t exist” when you are in your living room safely away from any potential dragons, when you’re actually standing at the door you will quite often find yourself turning around and running instead. Similarly, when your system 2 is going “no torture! ever!” and your system 1 is going “surely if it were JUSTIFIED it would be okay, and I can come up with a dozen justifications, I am so good at coming up with justifications for things I wanted to do anyway”, then it’s a lot harder not to torture people.
You may feel that you would have no difficulty resisting the temptation to torture someone. I would propose that you are potentially falling victim to restraint bias, and ask you to reflect on your no doubt spotless history of drinking exactly as much as you intended to, saying ‘no’ to all high-pressure sales tactics, never cursing out drivers on the freeway, responding in a calm and mature manner to your romantic partner at all times, and turning in all your work a week before deadline. Of course, if you are such a paragon of self-control, you may certainly choose not to have sacred values, but the rest of us mere mortals need all the help we can get.
I think this line of reasoning is generally sound, but the problem I have is that you don’t specify any mechanism for changing your sacred values if it turns out that one you currently have is one of the stupid ones.
It seems to me that, the way sacred values are set up, any attempt to change them should generate the same result as an attempt to violate them. If someone has a violent emotional reaction to the suggestion that it’s acceptable not to make every effort to extend a person’s life, they will probably also have a violent emotional reaction to the suggestion that their sacred value is stupid.
I have trouble seeing a mechanism for changing your sacred values that isn’t vulnerable to the same problems ethical injunctions are.
My true rejection of this may be that ethical injunctions don’t seem that useful to me – you rarely need them, and when you do, you can get by well enough by remembering that you’re biased. Any value in injunctions is certainly outweighed by the anti-epistemological nature of sacred values.
What if it’s not a coincidence that lots of people hold stupid sacred values? Maybe the very fact that such values are extremely resistant to change is what allows so many dumb ones to persist. If so, I’d think that would be a strong point against them.
If you think this argument is a persuasive case for sacred values, you have to think that we’re more likely to make mistakes through rationalization than through failing to critically examine our beliefs – that we save more by cordoning values off than by dispassionately asking “does this make sense?”. This seems highly implausible. Also, if it’s true, it throws the whole rationalist project into doubt – indeed, it makes much of moral philosophy a questionable enterprise.
I see and understand the argument that it’s convenient to have gut-level moral instincts that match your reasoned moral views. I can understand wanting to bring your gut-level moral instincts into better accord with your reasoned moral views. But there seems to be another aspect to this that I can’t pin down. At times it seems like you’re suggesting that we should endeavor to switch our rationally held moral views to non-rational moral views, because our rationality might lead us to rationalize things in the heat of the moment that we might not choose from a greater distance. I’m not sure that’s desirable– in fact, I’m not sure that’s psychologically possible.
How do you reconcile this with your position on cheating?
“Cheating bad” seems like a sacred value with obvious utility to large segments of the population, but you seem to be arguing that we should ignore this intuition and take a case-by-case utilitarian position.
Yeah, I really like this write-up.
Exactly. And I’m not convinced by people saying “what if you’re wrong”. What you’re basically saying is: in addition to training system 1 NOT to throw up SPURIOUS objections, you should train it so it DOES throw up LEGITIMATE objections, even ones that are hard to frame in the heat of the moment.
I often have in the back of my mind, “OK, under what circumstances should we relax this rule?” But I don’t dwell on it. I try to dwell on what’s _usually_ ethical.
Like, I admit intellectually that there might be theoretical situations where torture is better than the alternative. Fiction is rife with them. But as you put it, I think in real life “just never do that, it’s wrong” is right a lot more often than “I will work out on a case-by-case basis whether it’s justified”, because the latter has almost always gone wrong.
How do you go about deliberately cultivating a new sacred value? I’m not being sarcastic, Ozy, I am genuinely curious about what approach you’re taking.
https://en.m.wikipedia.org/wiki/Alief_(mental_state)