The main cause areas of effective altruism seem to me to be weirdly historically contingent.

Global poverty seems to be the least historically contingent cause area: if Peter Singer’s “Famine, Affluence, and Morality” didn’t get people to help people in the developing world, Peter Unger or any of the other people working with similar ideas would have. The popularity of animal rights, however, is clearly connected to the fact that Peter Singer, a prominent early effective altruist, wrote Animal Liberation, one of the foundational books of the animal rights movement. He carried over a significant amount of his fanbase from animal rights into effective altruism.

As a cause area, existential risk reduction seems to be solely a product of Eliezer Yudkowsky, a tireless promoter of effective altruism who researches the risks of artificial general intelligence. In my experience, interest in other kinds of existential risk among effective altruists comes primarily from people who accept Eliezer’s arguments about the importance of the far future and existential risk, but who are skeptical about the specific issue of artificial general intelligence.

Existential risk reduction, global poverty, and animal rights all seem to me to be important issues. But “global poverty, plus the pet issues of people who got a lot of people into EA” doesn’t seem to me to be a cause-area-finding mechanism that eliminates blind spots. I ask myself: what are we missing?

The most obvious thing we’re missing, of course, is politics. I can hear all of my readers groaning now, because “effective altruism doesn’t pay enough attention to politics” is the single most tired criticism of effective altruism in the entire world. I do think, however, that it is trotted out ad nauseam because it has a point. There is a significant gap in effective altruism around structural change, somewhere between “buy bed nets” and “literally build God”. And while development in Africa is a fiendishly difficult topic, so are wild-animal suffering and preventing existential risk, and effective altruists seem to have mostly approached the latter with an attitude of “challenge accepted”.

(It’s possible, however, that development is not sufficiently neglected for effective altruists to improve the situation much? I don’t know enough to have an opinion on the issue.)

To me, however, the most interesting question about blind spots is not that one. The three primary cause areas of effective altruism all advocate for particular groups of beings who are commonly overlooked: the global poor, animals, and people who don’t yet exist. The question arises: what beings are effective altruists overlooking?

The impulse, of course, is to answer that question with one of the standard Groups Whose Suffering People Know We Overlook. “Black people!” or “Women!” or “LGBT people!” The problem is that if people already know that everyone tends to overlook their suffering, then they’re likely to be in a shitty position, but not a neglected shitty position. They probably already have an advocacy group. Whose pain is so overlooked that it won’t occur to us to answer that question with their names?