The main cause areas of effective altruism seem to me to be weirdly historically contingent.
Global poverty seems to be the least historically contingent cause area: if Peter Singer’s Famine, Affluence, and Morality hadn’t gotten people to help the developing world, Peter Unger or one of the other philosophers working with similar ideas would have. The popularity of animal rights, however, is clearly connected to the fact that Peter Singer, a prominent early effective altruist, wrote Animal Liberation, one of the foundational books of the animal rights movement. He carried over a significant portion of his fanbase from animal rights into effective altruism.
As a cause area, existential risk reduction seems to be solely a product of Eliezer Yudkowsky, a tireless promoter of effective altruism who researches the risks of artificial general intelligence. In my experience, effective altruists’ interest in other kinds of existential risk comes primarily from people who accept Eliezer’s arguments about the importance of the far future and existential risk, but who are skeptical of the specific issue of artificial general intelligence.
Existential risk reduction, global poverty, and animal rights all seem to me to be important issues. But “global poverty, plus the pet issues of people who got a lot of people into EA” doesn’t seem to me to be a cause-area-finding mechanism that eliminates blind spots. I ask myself: what are we missing?
The most obvious thing we’re missing, of course, is politics. I can hear all of my readers groaning now, because “effective altruism doesn’t pay enough attention to politics” is the single most tired criticism of effective altruism in the entire world. I do think, however, that it is trotted out ad nauseam because it has a point. There is a significant gap in effective altruism for structural change, somewhere between “buy bed nets” and “literally build God”. And while development in Africa is a fiendishly difficult topic, so are wild-animal suffering and preventing existential risk, and effective altruists have mostly approached the latter with an attitude of “challenge accepted”.
(It’s possible, however, that development is not sufficiently neglected for effective altruists to improve the situation much? I don’t know enough to have an opinion on the issue.)
However, the most interesting question about blind spots, to me, is not politics. The three primary cause areas of effective altruism all advocate for particular groups of beings who are commonly overlooked: the global poor, animals, and people who don’t yet exist. The question arises: what beings are effective altruists themselves overlooking?
The impulse, of course, is to answer that question with one of the standard Groups People Know That We Overlook Their Suffering. “Black people!” or “Women!” or “LGBT people!” But the problem is that if people already know that everyone tends to overlook their suffering, then they’re likely to be in a shitty position, but it’s not a neglected shitty position. They probably already have an advocacy group. Whose pain is so overlooked that it won’t occur to us to answer that question with their names?
I have a set of people in mind that may fit the bill. I’ll try to avoid the object-level politics by not naming the group for now, but consider whether it qualifies as a Group People Overlook That We Overlook Their Suffering:
– This is a group into which many people are classified. Membership in the group is involuntary, but not permanent.
– Members of the group face discrimination, enshrined in law, to a degree that hasn’t been the case for women, LGBT people, or black people for decades. In the US, discrimination against this group is specifically exempted from anti-discrimination law.
– They have few material resources and little political power, because laws explicitly prevent them from obtaining it.
– Although there are one or two advocacy groups operating on their behalf, those groups are dysfunctional: the fact that membership is temporary, combined with members’ lack of resources and power, means that the people with the most incentive to advocate for change are also the ones least capable of doing so.
– Most people are aware of all of the above, but this group rarely comes to mind as an example of people who are suffering. When it’s brought up in that context, people tend to respond by justifying the situation, often either by invoking pseudoscience or by pointing out that the suffering is temporary.
I’m not sure whether the number of people in this group, times the length of their suffering, is comparable in scale to the suffering of animals, but the fact that they’re totally absent from such discussions still seems like an oversight.
Convicts?
I don’t think the suffering of convicts is often justified by “pointing out that the suffering is temporary” and I have never heard it justified by invoking pseudoscience. Usually it’s just justified as “they deserve it”.
Children?
Won’t someone think of the convicts!
I don’t think we really do overlook their suffering – assuming I’m interpreting you correctly, that group has reduced rights, but material harms against that group are prosecuted more viciously than usual, and laws assign a variety of protections to them not available to others. “Reduced rights” aren’t quite the scale of suffering we’re currently trying to address. (We don’t put much effort into reducing misogyny in Saudi Arabia, either – not that it’s not important; it’s just not near the top of the list.)
At least in the developed world, I’d say it’s pretty obvious that this is not an effective cause area for EA. I don’t think its absence from EA is an oversight, especially given that many EAs are unusually sympathetic to the cause.
There are some exceptions, but material harms against the group I’m thinking of are not, in general, prosecuted more viciously than usual. Indeed, many acts that would be considered felonies when committed against another person — even some that would be criminal when committed against an animal — are completely legal and commonplace when committed against a member of this group.
@taradinoc, if we’re thinking of the same group, both you and Tim are overgeneralizing. Harms to members of that group from certain people are largely overlooked; harms from other people are prosecuted more vigorously.
I’m open to possible solutions, but most of the solutions I’ve seen would actually, AFAIK, make matters worse.
@Evan
That’s fair, but when you phrase it that way, I think it’s obvious that the latter doesn’t really matter from the perspective of reducing suffering. The suffering is the same no matter who’s inflicting it.
As for solutions, I think one bit of low-hanging fruit would be to stop making an exception for acts that would otherwise be considered assault and battery when committed against this group.
This mostly applies to both children and convicts, but only children are involuntary (in the strictest sense) members of their set.
I think there should be a lot more work on helping both of these groups. OpenPhil is already funding anti-mass-incarceration work, but functional organisations to help minors seem to be lacking.
In my view, “politics” isn’t a cause area at all, it’s a means of producing impact in cause areas. Furthermore, there’s no shortage of interest in politics in the EA movement; I see animal welfare and X-risk focused EAs pushing for policy interventions all the time, and that’s without even counting OpenPhil’s U.S. policy work or the Global Priorities Project. (Admittedly, many EAs who work on AI risk fear that government involvement in their cause area will have more negative than positive consequences at this stage, which comes off as looking like an aversion to politics.)
I think your real complaint here is that the global poverty side of EA is too focused on what GiveWell calls “giving as consumption”, at the expense of creating systemic change through policy. Personally, I too would like to see more of this, but also feel like it’s a really hard problem and, in many respects, already crowded, so I can’t blame the movement too much for not having pivoted hard in that direction. There seems to be a healthy ongoing debate over this in the EA community, which I’m going to keep watching with interest.
Also, obligatory link to Topher’s post on a subject similar to this one, in which he initially expressed a similar skepticism that EA’s choice of focus areas is optimal rather than just historically contingent, but ultimately found the idea plausible. I agree with him in this respect.
Minds that don’t exist but would like to, and animals harmed by other animals.
Both of these are pretty common topics among my local set of EAs. They’re just difficult to address, both philosophically and practically.
How could a mind have preferences in the present if it doesn’t currently exist?
If it would be happier existing than not existing, it has a preference for existing.
If it would be sadder existing than not existing, then does that mean that it has a preference for not existing?
I know that Thomas Ligotti would have preferred to not exist.
Moreover, why should we care about the preferences of beings who do not exist? Conventionally, a utilitarian cares about frustrated preferences because there is a being that is experiencing those dolors.
Content warning: I’m discussing preference utilitarianism only.
The preferences are about the state of the universe, and the state of the universe is spacetime. If you can have a preference about something going on right now in a closed room you’re not in, then you can have preferences about the future you won’t live to see. It doesn’t matter at all whether you can feel the difference or not. Utility cares about feelings only insofar as people prefer to feel some things and not others (and they do!), not because feelings have any special significance for the utility calculation.
I don’t think the preferences of the minds that will not exist are important or are even possible to calculate in principle. It’s probably something like “find the average of all possible functions”. The minds that will exist are a different matter.
“If it would be sadder existing than not existing, then does that mean that it has a preference for not existing?” Yes, and I think most of us would agree that some minds (those always in tremendous pain, for example) are better off not existing.
“Moreover, why should we care about the preferences of beings who do not exist? ” For mostly the same reasons we care about the preferences of beings who do exist but are not in causal contact with us.
A universe with 100^1000 happy sentient minds seems much better than one with only 10^1000 such minds.
“I don’t think the preferences of the minds that will not exist are important or are even possible to calculate in principle” I agree, but what about minds that don’t exist but could?
//For mostly the same reasons we care about the preferences of beings who do exist but are not in causal contact with us.//
I think that we differ on some fundamental assumptions that, given their nature, neither of us can successfully argue against. I care about the preferences of beings who exist but are not in causal contact with me because they exist at all.
//I agree, but what about minds that don’t exist but could?//
Someone (I can’t remember who) made a persuasive (to me) argument that we *should* consider the preferences of even potential beings (which I had previously disagreed on), but only to the degree that a given being’s existence is probable (i.e. a being who actually exists has a probability of 1 and should be given full weight, whereas a being with 50/50 odds of existing has a probability of 0.5 and should be given half weight, and so on).
This gets tricky when the probability of a mind’s existence depends largely or even entirely on whether you decide to make it (i.e. if I choose to reproduce then, barring sterility and other difficulties, the probability of that child’s existence is nearly 1, but if I choose to sterilize myself then that probability is basically zero, so, assuming that this potential being would want to exist, how much weight should I assign to its preferences?).
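To put that weighting in symbols (my own sketch, not anything from the argument I’m recalling): the probability-weighted view says to maximize

U(a) = \sum_i p_i(a) \, u_i

where p_i(a) is the probability that being i ever exists given your action a, and u_i is the utility of that being’s life. An actually existing being has p_i(a) = 1 for every action and gets full weight; the 50/50 child gets half weight. The trickiness in my last paragraph is just that p_i(a) depends on a: choosing to reproduce pushes the child’s p_i(a) to nearly 1, choosing sterilization pushes it to nearly 0, so the weight you give its preferences is itself a consequence of the choice you’re evaluating.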
I think the conclusion of this argument is that if there’s any nonzero chance, however remote, of creating a mind capable of experiencing near-infinite happiness, then we have an obligation to devote all our resources to creating it, even at the cost of immiserating every other being that will ever live.
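(To show the structure with made-up numbers: if there is a p = 10^{-30} chance of creating a mind that would experience utility u = 10^{100}, the probability-weighted term is

p \cdot u = 10^{-30} \times 10^{100} = 10^{70},

which swamps any finite amount of suffering among the beings who actually exist. It’s essentially a Pascal’s mugging.)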
I literally do not understand why so many people hold the “more people is better” stance so intuitively. I mean that seriously. I can usually put myself in people’s shoes and try to imagine why they have the moral instincts they do, but that one doesn’t work on me. The best I ever come up with is the possibility that maybe they just get a good moral vibe when imagining happy people, and nothing when they imagine the empty void of space, so they figure we should fill the latter with the former if we can. But that can’t be it.
“I agree, but what about minds that don’t exist but could?”
There is a huge space of minds that could exist in principle. There is a mind for almost any possible utility function.
“I literally do not understand why so many people hold the “more people is better” stance so intuitively”
Imagine two unconnected populations of people that don’t know about each other. Is it morally neutral to kill everyone in one of them?
Institutionalized persons? The big problem with politics is that most political changes already have people pushing for them who would benefit, so marginal additions that result in big benefits are hard to find.
The very elderly. The very young.
War causes a lot of misery. I’m assuming it isn’t on the EA radar because it’s hard to tell what prevents war.
Wouldn’t that be a subset of politics?
It’s a controversial point, but for the sake of completeness: past (dead) people.
I was also going to say dead people. More specifically, people who are going to die soon: old people and soon-to-be-former old people.
Anti-aging is something that needs more traction.
I’ve heard “rescue simulations” occasionally discussed by the AI branch of EA.
Animal welfare doesn’t seem as arbitrary to me as you seem to be claiming. EA seems to be based on the utilitarian tradition, and animal welfare concern has been part of utilitarianism since Bentham.
Not an EA or part of the EA community, so perhaps this is unwarranted outsider criticism, but: is there really no difference between “not a neglected shitty position” and “not a neglected-by-EA shitty position”? It seems like an eminently reasonable hypothesis that there are lots of people who are cognizant of the shittiness of any particular position X but that the vast majority of those people have a poor understanding of what causes or what would help solve the problem.
The problem with Politics is that almost by definition it invites an opposition party, and then everything is lost in the thrust and parry.
Not really. In fact, the standard libertarian argument for regulatory capture hinges on the fact that there are lots of issues – maybe even the majority of the things the government does – that only one group cares about (only hairstylists care about hairstylist licensing), which means that group controls the issue to the detriment of the greater good.
I can think of some groups that trigger disgust reactions that keep them from getting serious consideration even (usually) from Us Weirdos. Not necessarily good value, not necessarily positive, but certainly neglected as far as investigation goes.
Pedophiles, furries, zoophiles (I’ve actually seen a well-constructed argument for this one), probably sex offenders generally.
That one article by Luke Malone portrayed a number of not-yet-offending pedophiles under thirty as anxious, insecure nerds, ashamed of their sexuality in general and suicidal as a result. The population of people under thirty who are anxious, insecure nerds, ashamed of their sexuality in general, and suicidal is a very large group. The stigma of pedophilia is very strong, resulting in mandatory reporting laws and a powerful social incentive to publicly proclaim that you personally have zero pedophilic thoughts.
Therefore, it’s quite possible that a large population of depressed people are also struggling with pedophilia.
This concept can be expanded to less controversial areas of obsession. Many people compulsively watch shows about high school students instead of characters their own age. They have a feeling of needing an adult when they are already, nominally, adults themselves. This concept of emerging adulthood is linked to a deep societal malaise.
Since Groups People Know That We Overlook Their Suffering is used as a proxy for people who are actually suffering, probably the most neglected sufferers will be the ones who are actually suffering but don’t belong to any such group.
I have a white friend who tells me he wasn’t able to get help while trying to make it through college because he was white, despite the fact that he was poor, left a bad family situation on his own at a young age, and deals with chronic health problems.
Meanwhile, rich black university students can adopt the demeanor of lower class poor blacks and be viewed sympathetically and given support as members of a designated overlooked suffering group.
I think fetuses/the unborn are an obvious contender. As SSC said in a different context “[W]here are all the people who really care about fetuses? In a world where there is a multi-million-person movement of people who get extremely upset that we are killing chickens for food, it would be really weird to find that no one at all has any legitimate qualms about killing millions of what’s basically a smaller, less developed human baby.”
Animal rights EAs mostly don’t care about killing chickens. They care that we’re (effectively) torturing chickens.
The environment is a big, big cause that’s largely untouched by EA as far as I’ve seen. Of course, environmental ethics touches on global warming and animal rights, which are EA cause areas (global warming seems to be the existential risk cause most favored by AI skeptics, at least in my bubble). But traditional environmentalism, the kind where you worry about species extinction and ecosystem collapse, that one’s completely absent.
One practical reason for this is that hedonic and preference utilitarianism are more divided here than on a lot of other topics. The argument I internalized as a younger person for ecosystem protection is: animals seem to have a revealed preference for continuing to live, and their livelihoods depend on the continued health of their ecosystems as a whole, thus we should protect ecosystems. Hedonic utilitarianism, on the other hand, can lead to Brian Tomasik’s arguments for paving the entire planet.
(I realize that with the preferences argument you also run into the issue of whether we want to have diverse life forms in existence or whether it’s most ethical to tile the planet with human beings – I know I would personally prefer the former, but I don’t know how much ethical value that diversity has.)
Although it’s possible that since EA tends to appeal to indoorsy computer science types, and environmentalism tends to appeal to woo-friendly granola-crunching types, there’s a cultural gap there.
I think Scott’s argument against EA getting involved in politics is a decent one.
With bed nets, poverty, etc., we can be reasonably sure we can actually help, and even if some intervention turns out to be useless, it’s unlikely to turn out to be spectacularly harmful.
Politics… is much more of a crapshoot.
Imagine if an effective altruist met a philosopher who was cooking up a new system of government. It sounded good, it sounded like it could work, it sounded like it could improve human wellbeing. Wanna risk a few hundred million more deaths in the hope it works this time?
Um. You do realize that there are political interventions other than overthrowing the government, the vast majority of which have never resulted in a hundred million deaths? Or am I really confused about the potential outcomes of lobbying for more government money for PEPFAR, and actually there’s a really high chance that if the government starts funding PEPFAR more, everyone will wind up in gulags? Is this a property of effective altruist lobbying, or of lobbying in general? And if the latter, how come I observe lots and lots of lobbyists and no gulags?
Like, honestly, this seems like the equivalent of “last time we gave money to the developing world it was spent on PlayPumps.” The whole point of effective altruism is figuring out how not to do that.
I’d imagine that the risks vary depending on how significant the change you’re trying to make is.
Don’t want to actually change the government much, and just want to divert a little money within the existing system? Low risk.
Want to create a movement to massively change how everything works because you think the current system is awful? Then we just have to look at the history of the countries where that was tried, where the term gulag was coined.
Well, we do have a population of over seven billion, so I daresay we have some wiggle room if the potential payout is high enough.
Technically speaking, EA could get involved in any issue, even ones currently being addressed. All it would take is a judgment that current efforts are not efficient. I can think of a few situations where legitimately needed advocacy efforts have been wholly waylaid and are now flat-out counterproductive. Unfortunately, the reasons for this are invariably political.
If I had to pick one particular issue that I think EA should be more concerned with, it’s environmental preservation. I think that’s far more meaningful in terms of future human and animal well-being than, say, AI risk.
Unfortunately, EA as it currently exists has a very pro-growth, pro-development attitude, and a strong tendency to define the efficiency it’s trying to maximize in ways that don’t contemplate environmental concerns. This is usually covered for with defensive posturing about how unobjectionable mosquito netting is, which is fine as far as it goes, but not relevant when discussing whether “develop Africa as fast as possible by figuring out where the most needy people are, developing that geographical region, and iterating until an undefined end point” might ultimately be a philosophical position with environmental downsides.
Are you factoring in the decrease in population growth that development tends to bring?
As far as development in Africa, William Easterly is GiveWell’s top citation in favor of the incremental ‘bednet’ approach.
https://williameasterly.files.wordpress.com/2010/08/57_easterly_canthewestsaveafrica_prp.pdf
I’m maybe 90% convinced, but either way it’s well worth a read.
Advocating for anything related to the suffering of low status men is basically impossible. The men’s rights movement has had 0 political victories and will never have any because *ugh, low status men*.
What about prisoners’ rights groups?