Effective altruism is a question.
The question is something along the lines of “how can I do the most good with the resources available to me?” Of course, that’s not precisely accurate, because that phrasing elides certain assumptions that effective altruism makes about how you define ‘the most good’. Effective altruism does not permit religious arguments about what the Good is; effective altruism judges the goodness of an action by whether it reduces bad things or increases good things; effective altruism does not care more about people in one’s home country, or people one is related to, than about the global poor.
And, of course, defining effective altruism as a question does not mean that all effective altruists approach effective altruism with a spirit of curiosity and non-attachment, ready to go where the winds of evidence blow them. Most humans are quite ideological. Effective altruism being a question is something that can only be approached as an ideal, not something that we can assume we’ve embodied.
But nevertheless effective altruism is, at its core, a question.
I see no reason that only utilitarians should be interested in the answer to this question.
I expect most effective altruists actually agree with me here. After all, according to the latest EA survey, 56% of EAs are utilitarians, which implies that 44% of EAs are not utilitarians. (I could probably just post that and this post would be done, but eh. I like hearing myself talk.) In my personal experience, it’s hard to spend much time as an effective altruist without noticing the many valuable contributions from people who aren’t utilitarians, some of whom may wish to out themselves in the comments. This post is primarily directed at people who are interested in effective altruism but feel reluctant to join in because they don’t agree with utilitarianism, as well as tiresome people who think that the demandingness objection to utilitarianism somehow means effective altruism is bad and terrible.
Of course, there’s a very obvious reason that effective altruists and utilitarians are conflated: the two groups are closely related. After all, most people who could be considered ‘founders’ of effective altruism are utilitarians, and the earliest person to articulate proto-effective-altruist ideas was Peter Singer, a utilitarian philosopher who wrote a famous paper arguing that one is morally required to devote all of one’s resources to helping the poor.
However, there is actually no requirement that effective altruists agree with Peter Singer about everything. Effective altruists may disagree with Peter Singer about many questions, such as “is it morally permitted to murder babies?”, “how many severely disabled people have lives worth living?”, “should we care about animals?” and “is AI a serious concern that may wipe out humanity within the next few hundred years?” I see no reason that we can’t include normative ethics on the list of things that effective altruists may be permitted to disagree with Peter Singer about.
It’s true that a lot of effective altruists argue for effective altruism from a utilitarian viewpoint. This is quite natural. A lot of effective altruists are utilitarians. And an intellectually honest utilitarian in the modern world pretty much has to be an effective altruist. But there is a distinction between “utilitarianism is commonly used to argue in favor of effective altruism” and “all effective altruists are utilitarians and effective altruism is an inherently utilitarian endeavor.” Christianity is commonly used to argue in favor of giving to charity, but that doesn’t mean that everyone who donates to charity is a Christian.
I have a personal interest in this topic. I myself can’t do universalizing morality. I don’t like it when beings suffer and I want them to suffer less, in much the same sense that I don’t like it when I wind up waiting in long lines at the Social Security Administration and I want to do that less. While I am close enough to being a utilitarian that I tend to round myself off to one, I tend to part from utilitarians when they start going on about moral obligations and drowning children and so on; I consider “I want to spend this much of my resources on altruism and no more” to be a perfectly good reason to spend that amount of resources on altruism. So I have a natural interest in the subject of being an effective altruist without fully buying into utilitarianism.
And I don’t think I’m the only one. Off the top of my head, here are some people who are not utilitarians and who might be interested in the question of effective altruism: A virtue ethicist cultivating the virtue of compassion. A deontologist doing supererogatory good deeds. An ethical egoist who knows that the warmfuzzies of truly helping someone is the best way to improve her own personal happiness. A Christian who knows that what we do unto the least of these we do unto him. A Jewish person who is performing tikkun olam. A Buddhist practicing loving-kindness. Someone who cares about fairness and doesn’t think it’s fair that they have so much when others have so little. A basically normal person who feels sad about how much suffering there is in the world and wants to help.
Even more people might be interested in bits and pieces of the effective altruist project, even if they aren’t interested in the whole thing. A purely self-interested person has an obvious reason to be concerned about existential risk; someone who cares primarily about freedom might be interested in the best ways to help animals in factory farms.
Now, some of the people I named might choke on some of the effective altruist assumptions I listed: a religious person might object to the secularism, while a virtue ethicist might feel she has a particular duty to those closest to her. Certainly, the particular assumptions that effective altruism has are probably related to it being founded by a bunch of utilitarians. It would have different assumptions if it were founded by a bunch of deontologists.
I agree that we should be wary of effective altruism changing its assumptions. If deontologists wish to have a movement about being the best deontologist you can be, they must start their own movement and not piggyback on ours. I would be concerned if more than, say, ten percent of the effective altruist movement was self-interested people who are concerned about existential risk, freedom-lovers who are worried about factory farms, people who feel they have a special duty to those close to them but who don’t care literally zero percent about Africans, and other people who are not on board with the effective altruist project as a whole. It’s important to balance the contributions from talented allies with the risk of values drift.
But I don’t think everyone who isn’t a utilitarian poses a risk of values drift. Lots of religious people are, in fact, fully capable of compartmentalizing and using secular reasoning in secular contexts. Most non-consequentialists agree that good things are better than bad things, and their non-consequentialism mostly comes up in contexts unrelated to effective altruism: after all, GiveDirectly very rarely involves either lying to Nazis or deciding whether a trolley should run over one person or five.
To be clear: an effective altruist must be on board with the effective altruist project. I do not suggest outreach to people who think proximity is a morally important trait, the consequences of one’s actions are completely irrelevant, or we can find the optimal charity through clever use of the Bible Code. I just suggest that people who aren’t utilitarians can also be on board with the effective altruist project.
Plenty of religious folks are secular, and some atheists are not (the prime example being Soviet and Maoist communists). Secular people do not let their religious beliefs, including the belief that there is no god, determine how they interact with other people.
“While I am close enough to being a utilitarian that I tend to round myself off to one, I tend to part from utilitarians when they start going on about moral obligations and drowning children and so on; I consider “I want to spend this much of my resources on altruism and no more” to be a perfectly good reason to spend that amount of resources on altruism.”
(Not sure how to do the blockquote-thing.)
I think you might be interested in the philosophical stance of Alisdair Norcross, which he labels “scalar utilitarianism”.
From his paper “Reasons Without Demands”: “My concern in this paper is to argue that consequentialist theories such as utilitarianism are best understood purely as theories of the comparative value of alternative actions, not as theories of right and wrong that demand, forbid, or permit the performance of certain actions.”
Oops, “Alastair Norcross”.
The way to do blockquotes is to put the word “blockquote” in HTML tags. If you are not familiar with HTML tags, I recommend googling this because I’m not sure how to explain it without the page taking my mentions of symbols as uses 😛
You can always do substitutions. For example, the text below would come out as a blockquote if you replaced each square bracket with the corresponding angle bracket.
[blockquote]
The way to do blockquotes is to put the word “blockquote” in HTML tags. If you are not familiar with HTML tags, I recommend googling this because I’m not sure how to explain it without the page taking my mentions of symbols as uses
[/blockquote]
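For reference, this is what the raw markup looks like with real angle brackets (assuming the comment form passes HTML through unescaped, which not every form does):

<blockquote>The quoted text goes here.</blockquote>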
If you are a mainstream Christian, the question of “how do I do the most good?” seems completely trivial. The answer is to cause as many people as possible to go to heaven. In comparison to the fate of eternal souls, worldly concerns simply do not matter. I think any rational person would prefer to endure 1000 years of torture if it meant they went to heaven instead of hell.
In my opinion, “logical Christianity” is simply incompatible with EA. Admittedly, most Christians do not take the logical conclusions of their faith very seriously. But the combination of “taking ideas seriously” from EA and “eternal salvation/damnation is at stake” from Christianity just cannot mix, imo.
That’s actually interesting, because while it’s true that that’s the logical, utilitarian conclusion to pull out of the Bible* (and I think that’s why a lot of fundamentalists do, actually, seem concerned with saving souls above all to the point where they’re fine with hurting people in the now to keep them from going to hell), that’s not what the Bible instructs people to do.
It does instruct people to go forth and preach and save souls. But it also commands them to help the poor and love thy neighbour, etc. etc.
*Assuming you believe the Bible actually talks about hell and that isn’t just a particularly pernicious mistranslation (which I think there’s good evidence it is). But now we’re really getting off topic.
Things are probably not so simple, as is suggested by the existence of careful thinkers who were both Christians and utilitarians (e.g. William Paley).
What we do know is that there are many Christians who are also EAs. Given this, I’m not sure there’s much practical value in saying that Christianity and EA are incompatible, regardless of whether that is true. In fact, publicly stating ‘Christianity is incompatible with EA’ may itself be incompatible with EA.
I don’t think this is actually true.
– You don’t have to be perfect at taking every idea maximally seriously in order to be a useful EA
– I think your conclusion that maximizing the number of souls saved is obviously the correct choice only holds up if you’re assuming utilitarianism in the first place, whereas this post has been entirely about how you don’t have to do that?
– Even if so, note that saving existing souls is hard, and trying hard at it and failing can cause damage; this is a high-risk/high-reward endeavour, and in current EA there is a lot of controversy about whether high-risk/high-reward endeavours are useful
– I just don’t think this describes reality well? In fact, very religious people can and do reason well in other areas; you may say that this means they’re not reasoning well in general, and that may be true, but I don’t think this observation has any predictive power for how good their reasoning will be in most areas.
I consider myself broadly consequentialist, but agnostic about the details. I’m cautious about biting any bullets in moral dilemmas, because I imagine there’s another way of thinking about the dilemma that justifies the more intuitive choice. This is different from rationalist-brand utilitarianism, which, in my impression, actively seeks out all the bullets in order to bite them.
I don’t consider myself an EA, but I’m sympathetic. I would say the only area where my differences on moral philosophy become relevant is x-risk, which depends on population ethics.
+1 for your first paragraph
I’m not sure what you intended, but if you’re implying that Singer thinks AI is *not* a serious concern, then I’m not sure you’re right: https://www.project-syndicate.org/commentary/can-artificial-intelligence-be-ethical-by-peter-singer-2016-04
I consider myself a consequentialist subjectivist. That is, I think that the correct strategy is the strategy maximizing *some* utility function (see VNM theorem), but this utility function is just what *I* happen to value / prefer.
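(For anyone who hasn’t run into it, a rough statement of the VNM theorem, paraphrasing from memory: if an agent’s preferences \succeq over lotteries satisfy completeness, transitivity, continuity, and independence, then there exists a utility function u, unique up to positive affine transformation, such that

L \succeq M \iff \mathbb{E}_L[u] \ge \mathbb{E}_M[u]

i.e. lottery L is weakly preferred to lottery M exactly when L’s expected utility is at least M’s.)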
Practically though, I think that most people would agree on most ethical questions after sufficient reflection (e.g. religious beliefs are not ethical axioms, they are just mistaken beliefs about the physical world), the main exception being the prioritization of your personal friends and loved ones, which I’m pretty sure is inherent in human values (here I truly diverge from EA canon).
On the other hand, since I think the far future outweighs everything else, this prioritization has (or at least should have) little effect on my judgement of what is the correct strategy. Also, even without the far future there are acausal cooperation considerations which also tend to “smooth out” the distribution of moral weights over people.
So I’m on board with you when you say: “I consider ‘I want to spend this much of my resources on altruism and no more’ to be a perfectly good reason to spend that amount of resources on altruism.”
So given that you’re a person who seems to care about EA being more welcoming to religious people, I’m going to note that this post is, um, not.
That’s very much not the kind of thing that tends to make people feel actually welcome. (I have an elaboration on that in mind but am having trouble putting it into words.)
It also in context doesn’t make particular sense. The thing that actually seems to be the point here is the claim that effective altruism “judges the goodness of an action by whether it reduces bad things or increases good things”, and that effective altruism already has its set idea about what the bad things and the good things in question are. Effective altruism isn’t interested in diverging opinions on this, but that disinterest applies across the board – if someone’s idea of the Good is ‘the voluntary human extinction movement’ or (as you were recently discussing) ‘make sure people have as many babies as possible’, or ‘bring about a new British Empire’, etc., then EA is not interested in that, regardless of how they arrived at it.
Conversely, ‘I want the most good for the most people because everyone is a child of god’ is a variation on the same variable as ‘I want the most good for the most people because that’s everyone’s moral obligation’ vs ‘I want the most good for the most people because I don’t like it when beings suffer and I want them to suffer less’.
Yes, this is true. There’s a tradeoff between reaching out to Christians and convincing other effective altruists that they should reach out to Christians (by, for instance, acknowledging their Christians-are-irrational concerns). In general, I tend to assume the latter is my comparative advantage, as any of my Christian readers are probably used to off-hand statements that belief in God is irrational. 🙂
In fact, EA is bad for utilitarians. It doesn’t cause less suffering or more pleasure in the world, compared to a counterfactual world without EA.
For example, reducing x-risk is about saving the species, or civilization, but not about utility. Our species and civilization have an abysmal track record at producing utilitarian outcomes: they systematically cause more suffering than positive experience value, by forcing certain categories of negative experience on large classes of beings. There is no equivalent to factory farming on the pleasure side. There might be in the future, but there probably won’t be. The EA movement, as a whole, is a net negative for utilitarianism, even though it’s paradoxically associated with it.