[Attention conservation notice: Navel-gaze-y.]
[Content warning: brief discussion of eating disorders.]
I.
This essay is very personal. On scrupulosity, even more so than on other issues, I am acutely aware of the reality that different things work for different people, and that the advice that saves and soothes me is poison to someone else. We do not have best practices for dealing with scrupulosity yet. And for scrupulous people even more than for other groups, advice may be easily taken as orders; if the things that work for me don’t work for you, you can conclude that you are evil because they don’t work for you. This is not my intention. If my strategies work for you, excellent; if they do not, try something else. Write about it. We need more people who deal with scrupulosity talking about the techniques that work for them.
II.
I recently read an essay by Peter Singer, Ethics Beyond Species and Beyond Instincts, in which he defined the moral as that which is universalizable, in this sense: “We can distinguish the moral from the nonmoral by appeal to the idea that when we think, judge, or act within the realm of the moral, we do so in a manner that we are prepared to apply to all others who are similarly placed.”
I read that, sat back, and said to myself: “I cannot do morality.”
I cannot do it in the same sense that an alcoholic cannot drink, and a person with an eating disorder cannot go on a diet. I am incapable of engaging with universalizable morality in a way that does not cause me severe mental harm. While I can reject universalizable moral claims on an intellectual level, I am incapable of rejecting them on an emotional level, no matter how absurd they are or how much they contradict other things I accept. If I fail to live up to such a claim, I will hate myself and curl in a ball and be utterly nonfunctional for a few hours, causing harm to both myself and those who have to put up with me.
So (with much backsliding) I have started to make an effort to weed out the universalizable morality from my brain. I do things I want to do, and I don’t do things I don’t want to do. I do not mean a simplistic sense of ‘want’ here. If a person is trying to kick the caffeine habit, they may deeply crave a cup of coffee, but they can still be said to “really want” to not drink caffeine. Notably, this does not require that we create a universalizable moral rule that no one ought to drink caffeinated beverages.
This resolution may prompt the question of why I’m an effective altruist. Well, I want to. It is nowhere written that I am not allowed to have preferences about states of the world, and as it happens I prefer worlds in which fewer people die horrible painful deaths to worlds in which more people do. I do not care about it as the most important thing in my life, but as one of perhaps half a dozen equally important goals. I would, of course, prefer that more people become effective altruists, and I will act in such a way that more people become effective altruists, but that does not require any justification other than my own preferences.
This is the reason, I think, that I am triggered by so much discourse around scrupulosity. It is not engaged in the project of stepping away from universalizable morality and learning to live without it; instead, it is engaged in the project of coming up with a form of universalizable morality that most people can achieve, and (often) of criticizing other systems as unrealistic and inhuman. (See that fucking Moral Saints essay.) For me, this is sort of like going up to an anorexic and saying “look, that diet you’re on is very unrealistic! You need to get on Weight Watchers instead.” If the anorexic could get on Weight Watchers and not have this predictably result in eating-disordered behavior, they wouldn’t fucking have anorexia anymore.
(People actually do give that advice, because people are the worst at putting themselves in other people’s shoes.)
Also, like, what if I want to be more morally saintly than Susan Wolf prefers? At least utilitarianism has the advantage that when it’s telling me to do shit I don’t want to do, it’s telling me to prevent children dying horrible deaths, rather than telling me to be someone Susan Wolf wants to hang out with. Why do I care whom you want to hang out with, Susan Wolf? I have literally never met you!
III.
One of the most useful techniques for me in coping with my scrupulosity is training myself to respond to universalizable moral claims by adopting the attitude in this picture:
BDSM is disrespectful to the dignity of the human person? Literally no one asked for your opinion!
Effective altruism is wrong because we should help people in our own neighborhoods first? Remind me again why I care?
Promiscuous women are being unfair to men because they only have sex with attractive men and not with the unattractive ones? Uh, who asked you?
Being fat is morally wrong because of the burden on our healthcare system and because people should be physically fit? I don’t recall asking for your input!
I am working on trying to parse universalizable moral claims as people having opinions about their own preferences which they then choose to extend to me. “I want there not to be any promiscuous women, therefore you have to want there not to be any promiscuous women!” No, I don’t. In fact, I am generally in favor of the existence of promiscuous women. It is as absurd an argument as saying that because you like dark chocolate, I must like dark chocolate.
(I know this isn’t actually true – universalizable moral claims are actually different from statements of preferences – but it’s sure as hell useful.)
A notable exception to this technique is moral philosophy, where I simply extended the eyes-glazed-over “why does anyone care about this bullshit?” attitude I already had toward metaphysics to normative ethics as well.
IV.
Part of my problem, I think, is that I don’t feel guilty enough.
This is an odd problem for a scrupulous person to have, but I think it’s true. An observation that comes up a lot in dialectical behavior therapy is that for people with severely dysregulated emotions, an emotion that you have too much of is often a result of using that emotion to cope with another emotion you’re afraid to feel. For instance, a person who feels angry all the time might be shutting down their natural feelings of sadness at a loss, because they feel like if they start crying they might never stop.
Whenever I’m in a situation that should logically prompt guilt, instead I feel shame. I do not recognize that I have done something that goes against my long-term desires and acknowledge that I want to do better in the future. Instead, I think that I am a bad person, that others will hate me, that I am inherently evil and nothing I can do will wash the impurities away, that I will never be approved of or praised…
Of course, this is not right. The usefulness of guilt is in pointing out when I have violated my own standards; the approval or disapproval of other people does not particularly matter, except that they have an outside view and might be able to tell when I am being too hard or easy on myself. But, like I said, I care about my own preferences; while I do care about other people’s happiness, I do not give a flying fuck about their thoughts on whether parents should sacrifice everything for their kids or whatthefuckever.
But my brain slides, so subtly that I don’t even notice it, from the question of “did I do something that I don’t really want to do?” to the question of “does everyone hate me? am I inherently evil?” That first question is scary. It involves things like ‘taking responsibility’ and ‘making amends’ and ‘self-improvement’. All of that sounds like work. On the other hand, if you’re inherently evil, you don’t have to try to get better; you just have to try to stop existing, which is much easier. And if most people don’t care about me failing my own standards (which they don’t, because they don’t even know me, and also my standards are higher than most people care about), then I can determine that they don’t hate me, and never address the question of whether I’m failing to reach my goals. Because, you know, that would be hard. Self-flagellation is easy.
V.
Recently, I was having a conversation with an acquaintance who’s a negative utilitarian. He asked what my particular brand of morality was, and I began my usual “well, it’s kind of handwavey, but…” spiel, bracing myself for an argument about why I cared about things other than suffering and didn’t I realize that suffering was the most important issue and blah blah. Part of the way through, he interrupted me, smiled, and said “oh! You have complex values!” and the conversation moved on.
This made me feel really nice. Part of the reason, I think, is that I didn’t have to defend my position. He was a negative utilitarian. I was not. There were ways in which we could benefit each other: after all, I don’t particularly like suffering either, and so we could help each other on the common project of making there be less suffering in the world. Agreement on normative ethics or on ultimate goals was not necessary.
It felt freeing. He had his own morals, but (at least in that conversation) they didn’t have to be universalizable; he was comfortable with me believing differently from him. I didn’t have to be ashamed. It was great.
Inty said:
One way of living with sub-maximal saintliness without deifying it which I’ve found helpful is to view it as a negotiation between system 1 and system 2. System 2 says to system 1 ‘Okay system 1. I know you want me to eat tasty things and spend money on myself. I want to eat ethically and spend it on altruistic things. Here are some compromises I’ll give you today to keep you quiet.’ And each day, I’ll try to compromise a little less. I can see how other people might find this idea troubling, though, so take it or leave it.
As for universalising things to other people, this sort of reminds me of how Kantian ethics was explained to me at A-level. If a principle is not universalisable, you can often go one step back and make one which is universalisable. To use the coffee analogy: it might not be universalisable to say nobody should drink caffeine, but you could make a rule like ‘people should try and kick habits which they think are bad for them’ and universalise *that*, which for some people would mean giving up caffeine, and for others it wouldn’t.
I should add that this is not my own position. I don’t see this system as working overall, because the meta-rules that people will come up with for how to determine which rules to apply universally will depend on their own values, and persuading people to change their values is… tricky.
Lambert said:
Why just go one step back?
Go right back to the start:
Become Good.
Inty said:
That’s what I was trying to say: you can always go a step further back until you reach terminal values, and then it just becomes an impasse.
davidmikesimon said:
That seems kind of underspecified. Unlike the “try to kick bad habits” rule, which has a few obvious and pragmatically resolvable points of ambiguity (what counts as a bad habit, how hard should one try), the “be good” rule has effectively limitless degrees of freedom.
blacktrance said:
We may disagree about a lot of things, but at least we can agree that “Moral Saints” is bad
nostalgebraist said:
My own approach is similar to yours, but I respond to moral philosophy very differently. I think the reason is that I see moral philosophies as (among other things) gadgets that can make it easier, in practice, to answer the question “is this satisfying my preferences?”
For instance, if we’re talking about being EAs: GiveWell analyses depend on particular assumptions about how to evaluate outcomes, assumptions which are within the realm of “moral philosophy.” (See here for an example where a GiveWell evaluation depends crucially on an assumption about population ethics.)
The nice thing about having an opinion on these philosophical questions is that if I do, I can then trust such analyses from anyone who agrees with me — trust in the sense of “when these people say something is good, that’s equivalent to saying it maximizes my preferences.” If I look at the implications of a certain sort of population ethics and say “yeah, that looks good to me,” then if GiveWell uses that kind of population ethics, I can use the rule “GiveWell recommends this ≈ this satisfies my preferences,” without having to go in and re-check every detail of the recommended choice to see whether all the fine details indeed satisfy my preferences. It’s an immense simplifying device.
But by the same token, a seemingly “abstract” problem for population ethics can make me worry about whether my simplifying device will break in the real world. When I look at something like the Repugnant Conclusion, I think “man, I really don’t want that!” Which means that — until I do further thinking — I can’t trust practical analyses based on the premises that lead to the Repugnant Conclusion, because I know that sometimes that simplifying device will spit out “this is good” when I would say “I really don’t want this!” if I looked at the fine details.
Another way of putting this is that you can’t just go around checking everything that happens, or could possibly happen, against your (complex) preferences. There just isn’t time. So you need to have approximations that can be mechanically computed — proxies for your preferences that don’t need to query your actual brain, which has other things to do. But then it’s natural to wonder whether these proxies might stop working at some point, at which point you are suddenly doing moral philosophy. None of this demands that you care about universalizability, or about anything beyond your own preferences.
ozymandias said:
Wow, that post is *incredibly* confusing and I do not understand its argument at all; I would very much appreciate someone walking me slowly through the argument in the second paragraph after ‘A Paradox’, because I do not understand where A = B *or* B being better than or equal to C comes from.
Anyway, I was thinking more narrowly of arguments about e.g. whether utilitarianism or deontology is correct, or whether it’s inherently wrong to interfere with nature, or whatever. I agree that it can be useful to reference other people’s arguments about the implications of your values; I’m *skeptical* of philosophy as a project, but not *against* it.
Lambert said:
AFAICT, the crux is that the AMF is causing people not to be born (IDK how, maybe fewer ‘replacement children’ or something) and thus, according to purely additive utilitarianism, is overestimating QALYs saved.
ozymandias said:
But as the author points out, GiveWell’s researchers aren’t additive utilitarians: their implicit population ethics is something along the lines of “people dying is bad, but failure to create new people is not bad.” The author claims this is contradictory, but I don’t understand *how*.
Patrick said:
His argument starts with this:
You are choosing between causing one of the following.
A: Afiyah is born, lives a little, then dies of malaria, but her life is a net positive in QALYs.
B: Afiyah is never born.
C: Afiyah is born, lives a long time because she doesn’t die of malaria, and her life has more QALYs than it would have under A.
He explains the next bit a little strangely. Here’s my paraphrase.
He interprets what you summarize as “failure to create new people is not bad” as meaning that failing to create a specific person is morally equivalent to creating them, regardless of how their life turns out, so long as their life isn’t a net negative.
Under this view, B is morally equivalent to any given hypothetical world in which Afiyah comes into existence, meaning that it is morally equivalent to both A and C.
So he reasons that A and C must be morally equivalent under this assumption, since they’re both equivalent to B. But the whole point of GiveWell is that they view C as morally superior to A. So, contradiction.
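Schematically, with ∼ for “morally equivalent” and ≻ for “morally better than” (my shorthand, not notation from the original post):

\[
B \sim A \quad\text{and}\quad B \sim C \;\Longrightarrow\; A \sim C,
\]

but GiveWell’s whole premise is \( C \succ A \), so both can’t hold at once.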
My response is as it always has been. If utilitarian value is undefined with respect to hypothetical persons, this is a non-issue. It seems to me like arguing about what’s north of the North Pole, or what happened five seconds before time began to exist. Or like insisting that X + Y must always be defined, even if X is 3 and Y is a turtle.
Weirdly, effective altruists usually disagree with me on this (on, for example, this blog and slatestarcodex), though per this guy’s summary, GiveWell probably does not.
He then argues that GiveWell defines the benefits of their charitable works purely in terms of QALYs that counterfactually exist if they are supported.
1. Adam, Bob, and Carl come into existence, live 30 years each, then die of malaria. This counterfactual universe has 90 QALYs.
2. Adam comes into existence, Bob and Carl do not. Adam lives 60 years because he doesn’t die of malaria. This counterfactual universe has 60 QALYs.
He argues that if your metric is purely QALYs, the first should be preferable. He acknowledges that GiveWell seems to view QALYs that are not lost to death as more valuable than QALYs that occur only because a person comes into existence, but argues that this is unreasonable if your metric is purely QALYs. He argues that there must be some hidden assumption that is being used other than mere adding up of counterfactual QALYs, and attacks what he presumes to be those assumptions.
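To make the arithmetic concrete, here is a quick sketch in Python of how the two counting rules diverge on that pair of counterfactuals. This is purely my own illustration; the names and numbers are the hypothetical ones above, and none of this is code GiveWell actually uses:

# Toy numbers from the example above; my illustration, not GiveWell's method.
# Universe 1: Adam, Bob, and Carl each live 30 years, then die of malaria.
# Universe 2: only Adam is ever born; with malaria averted, he lives 60 years.
universe_1 = [30, 30, 30]
universe_2 = [60]

# Purely additive counting: every year of every life counts.
print(sum(universe_1), sum(universe_2))  # 90 vs. 60 -- universe 1 "wins"

# Person-affecting-style counting: only the person who exists in both
# universes (Adam) is comparable; Bob's and Carl's years are treated as
# undefined rather than added in.
adam_1, adam_2 = 30, 60
print(adam_1, adam_2)  # 30 vs. 60 -- now universe 2 "wins"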
Again, I find this uncompelling because I view the value of a hypothetical person’s existence versus non existence as undefined under utilitarianism.
He repeatedly argues that even if you do not accept his reasoning, you should still object to GiveWell’s alleged (I don’t read their press releases) use of the “ten times better” claim. His argument is that even if YOU accept the assumptions that go into that claim, many of GiveWell’s donors might not, and this might cause them to donate to a charity that is less efficient, per their moral viewpoints, than they might otherwise. Example: if you believe in a form of utilitarianism that endorses the repugnant conclusion, and if his assertions of fact are correct, you ought to conclude that GiveWell is less efficient than the 10x figure claims.
nostalgebraist said:
“The value of a hypothetical person’s existence versus non existence as undefined under utilitarianism” is definitely the way my own intuitions want to go. It’s what came to mind immediately after I read that post (or rather, while I was reading and trying to make sense of it) — the post feels like it is sneaking non-person-affecting stuff in “through the back door,” obeying the letter of (some) person-affecting view but not the spirit. Like, those people actually don’t exist. Once we are weighing possible futures involving their possible lives we’ve already missed the person-affecting boat.
I’m having trouble squaring this with the obvious objections, though. If someone asks me whether they should bring [largenumber] people into existence who will inevitably live in misery beyond even my ability to imagine, I want to tell them “no,” not “sorry, my moral theory is incapable of answering that question.” Do you know of anyone who’s written on this in depth?
blacktrance said:
I think a large part of the motivation for the attempt to put morality beyond “what I want” is the expectation that if people did what they really wanted, most of them wouldn’t take on anything as onerous as the effective altruist project. More generally, people have some substantive idea of how to act morally (e.g. donate 10% of one’s income, be vegetarian/vegan, never lie, etc) that’s at odds with how people live their everyday lives, and they know that most people wouldn’t want to change their behavior to become moral in this sense, so they have to appeal to something else.
Lambert said:
This post is not about ‘most people’.
sovietKaleEatYou said:
While I’m not BP myself (as far as I know), I know and am friends with a few people who are, and I just wanted to point out that in my experience, there is an important distinction that should be made when talking about morality. I am assuming Ozy understands this and wrote the post with this in mind, but just wanted to point this out. Namely, there is a popular common-sense “belief”: that it’s better to be relaxed about moral dogma. Nobody likes a stickler, and if somebody anally follows a dogma (whether religious or legalistic), there must be something wrong with them. While I agree that this is true for most people, I think it doesn’t apply to people with BPD (or more generally people who have trouble with being naturally empathetic). In my experience, borderlines actually do better with self-imposed strict and somewhat restrictive (within reason) moral absolutes, for example religious ones. This is similar to how most people do better with flexible time management, but extremely disorganized people like myself have it easier when we follow a restrictive schedule.
Fossegrimen said:
“Part of the way through, he interrupted me, smiled, and said ‘oh! You have complex values!’ and the conversation moved on.”
This is the default behaviour of the US red tribe and in my opinion the strongest point in their favour.
Rya said:
Reading this post was wonderfully and surprisingly cathartic for me. I think I’m going to try your method this week whenever I think about issues of morality (so, every second of every day) and see how it goes. It’s really validating to see someone talk about scrupulosity in this way.
One thing that has helped me a bit lately has been brief, localized acts of defiance—a sort of spontaneous exposure therapy. When my scrupulous OCD is stabbing me in the brain about moving too slowly to turn off the faucet after washing my hands, instead of guiltily rushing to turn it off, I might deliberately let it run for 5 more seconds and internally gloat about my villainous deed. I might even throw in a Bowseresque “BWAHAHAHAHA”. This only works with things I am readily certain, on a rational level, aren’t going to do much harm. If I have to spend even a couple of seconds convincing myself the exposure is ethically okay, I’m done for. But with minuscule trespasses like a cup of tap water wasted, it can be a great, if temporary, relief. Another example might be Liking a cute video of pet puppies on Facebook even though my brain is trying valiantly to convince me that doing so might offend my more radical vegan friends. It hurts at first, but then the dopamine flows in.
Thanks for the lovely article. :3