Scott Alexander recently wrote about weird effective altruism. Many people (mostly, but not entirely, people who aren’t effective altruists) offered the opinion that weird effective altruists should be banned from EA, or at least shouldn’t be allowed to give talks at EA Global and have blog posts written about them. Weird effective altruist causes are (sort of by definition) off-putting to most people; therefore, if you want people to donate to global poverty relief, you should kick out all of those people concerned about farmed animal welfare/AI risk/wild animal welfare/psychedelics research/suffering in fundamental physics, lest we scare the normies.
There are many reasonable critiques of this point of view, including that it’s not remotely clear that any of those claims are more frightening to normal people than “it is morally obligatory to make personal sacrifices in order to help poor, faraway black people.” But ultimately I reject the entire premise.
I’d like to be clear about what I’m not saying in this post. I am not saying all “weird effective altruism” causes are effective; I believe some are and some aren’t. I think many effective altruists are not taking seriously enough the difficulty of figuring out how effective highly speculative causes are, and that unless we seriously address this we’re going to waste potentially millions of dollars on boondoggles. And I suspect a lot of weird effective altruism tends to over-explore certain cause areas (for example, things you think of if you read too many science fiction novels) and underexplore other cause areas (for example, boring things). I don’t intend this post to be a whole-hearted defense of weird effective altruism, but simply a criticism of a single narrow argument too often wielded against it.
So the question arises: why is effective altruism a thing at all?
Most people care about charity effectiveness, at least a little bit. They look up their charities on Charity Navigator before donating; they object to money being spent on big CEO salaries or on overhead instead of on services; they circulate criticisms of the Susan G. Komen Foundation and PETA. And yet not only do most social programs not work, for the vast majority of programs we simply haven’t collected the information to see whether they work or not. This isn’t a “no one cares about starving Africans” thing; the state of the evidence on warm-fuzzies American medical and educational interventions is equally poor.
Part of the problem is that while people care about effectiveness some, they don’t care about effectiveness that much. They are willing to google a charity to see whether it is an outright scam, but they’re not willing to read academic papers to see if the charity’s intervention works. They’re definitely not going to put in the time to separate intuitive but misleading measures of effectiveness (CEO pay) from actually good measures of effectiveness (randomized controlled trials).
The other part of the problem is that all charity advertisements are a hellhole of epistemic doom and despair.
Let’s pick on Feeding America. Not because it’s an unusually bad charity (it’s not), but because it’s large and typical.
Looking on their webpage, I find out immediately that 1 in 8 Americans struggles with hunger. That sounds awful! After clicking through several pages, I find that the source is this document, in which 1 in 8 households (not individuals) are food insecure. You can click through the document to read the full operationalization of food insecure (it’s on pages 3-4). Food insecure households include, for instance, a household that sometimes worries about whether they’ll run out of food, feeds their children only a few kinds of low-cost food to avoid running out of food, and sometimes can’t afford to eat balanced meals. While obviously this household is experiencing a good deal of suffering and Feeding America can help them, it’s not exactly what the average person would think of when they hear the word “hunger.” This is actively misleading.
I click through to Our Work, where I learn that Feeding America provided four billion meals last year. What percentage of people who would otherwise go hungry did they feed? 10%? 50%? 99%? How many of their meals went to people who would have otherwise gone hungry, versus people who would have been able to figure out some other way to get enough to eat? Feeding America does not provide any insight into these important questions.
“98% of all donations raised go directly to helping people in need”: according to Charity Navigator, this refers to program expenses, with 1.1% of their income being spent on fundraising and 0.3% spent on administrative expenses. Would increasing their percent spent on fundraising allow them to help more people by raising more money? Would increased administrative expenses, say, reduce the amount of food waste by hiring someone to improve their distribution practices? We simply don’t have enough information to know.
In short, Feeding America is misleading about the scope of the problem they’re dealing with and does not provide the necessary information to assess their effectiveness in dealing with it.
Again, I am not picking on Feeding America because it is bad. The reason that charity is a total epistemic hellhole is that all charities are like this. The beloved effective altruist charity the Against Malaria Foundation explains on its homepage that 100% of donations go to buy nets (because presumably in a perfect world AMF employees would not need to earn a salary to pay for such luxuries as “homes” and “food”) and entirely omits the fact that most nets will not actually prevent any cases of malaria.
Of course, I’m being unfair here. The purpose of a charity’s website is not to tell the complete and unvarnished truth, it’s to get people to donate. How many people have actually read a GiveWell charity report all the way through without their eyes glazing over by the time they get to “Niger, Burundi, Malawi, and Liberia Prevalence and Intensity Studies”? If the charity actually had a proper cost-effectiveness assessment rather than a bunch of oversimplified bullet points, everyone would get bored and decide to catch up on Game of Thrones instead and no meals or mosquito nets would be bought at all.
And the harm here seems pretty small. So maybe “100% of public donations go to buy nets” means “we got some people to allocate money towards paying our employees instead of towards nets because you’re an idiot who thinks nonprofit employees can survive on nothing more than the satisfaction of doing good.” So maybe “struggles with hunger” means “at least one member of the family has missed one meal in the past year due to not having enough money and also the children do not eat enough vegetables” instead of “is hungry most of the time.” It’s not like they’re outright lying, and it’s for a good cause. Would you rather people spend that money on a new pair of shoes instead?
But the fact of the matter is that the Red Cross makes the same calculation about disaster relief, and the American Cancer Society makes the same calculation about cancer treatment, and the Smithsonian makes the same calculation about preserving priceless historical artifacts. And that means that it’s extraordinarily difficult to figure out really basic questions about charities you might want to donate to, like:
- How much does the problem the charity is trying to solve affect people’s lives?
- How many people does the problem the charity is trying to solve affect?
- Does this charity actually help with the problem it is trying to solve?
- If I donate to this charity, will the money go to really important programs that have a big effect on people’s lives, or do they already have enough money for all of that and my donation would go to something that doesn’t actually do that much good?
- Is this charity better than other charities I might donate to?
Which is the reason effective altruism is possible at all.
As far as I’m aware, effective altruist charity evaluators are the only people who are trying to answer this sort of question for the general public (although presumably some big foundations like the Gates Foundation are trying to answer it for themselves). This is our thing. This is the value we add over a Salvation Army bell-ringer who happens to have some fliers for Idealist.
I don’t care about effective altruists’ personal honesty. Lie to your parents about your dating life, shade the truth on your resume, compliment your friend’s hat which vaguely resembles a dead opossum, whatever. Hell, if you’re working for a top charity that isn’t an explicitly effective-altruism-branded top charity, do the epistemic hellhole thing. Everyone else is and you might as well try to grab some of the charity budget for things that actually work.
But when you are speaking as an effective altruist– don’t get complicated, don’t get clever. Just say what you think the best cause area or charity or career is. Every time you think to yourself “well, I think AI risk is more important, but it’ll turn people off, so I should probably say the Against Malaria Foundation,” the effective altruism movement takes one more step towards being the same as any other group of charitably-minded nerds.
I go pretty far on this. A lot of introductory effective altruism material uses global poverty examples, even articles which were written by people I know perfectly fucking well only donate to MIRI. I think people should generally either use examples from the cause they actually think is most effective, or use an equal number of existential risk, animal welfare, and global poverty examples, in order to reflect the disagreement in the effective altruist community.
I’m not saying you should pay literally zero attention to public relations. There are lots of things you can do to be more persuasive that don’t involve misleading people. You can show people pictures of sad animals or happy African children. You can wear professional clothes offline or write with proper grammar online. You can be kind and respectful and try to see things from other people’s points of view. But you must abjure all attempts to persuade people by doing anything other than giving people your best assessment of all the evidence, including all the nuance and all the caveats, even if it might turn them off.
blacktrance said:
I mostly agree, but the AMF vs AI tradeoff also depends on whether you’re trying to get people to donate to whatever cause you consider most effective, or to persuade them of the correctness of EA ideas in general. Even if you’re an AI EA, you probably still think that AMF is better than most charities, and it’s easier to explain why you should be cold and calculating and not give to the Foundation for Cute Puppies when you aren’t also trying to persuade them of a weird cause area. (Otherwise, they might reasonably say that EA can’t be that good if it implies donating to AI.) Once they’ve internalized EA ideas, you can then try to explain why AI is better than health without worrying that they’ll reject everything altogether. When possible, it’s better to bridge one inferential gap at a time.
Peter Gerdes said:
I think you make some very good points but I don’t think our options are limited to simply being opaque and misleading like all the other charities or offering useful data in a way that discourages more normal people.
Just as airlines can price discriminate, EA-based charities can discriminate based on interest in effectiveness. Nothing prevents an EA-focused charity from having the same kind of effective attention-grabbing advertisement on its home page or mailers as every other charity, while also having a link in the top right corner of its homepage that immediately takes you to a clear, non-obfuscated analysis of effectiveness and background data. As an EA advocate you can give the Against Malaria Foundation answer whenever you get a casual expression of interest, then switch to giving your real answer the moment your interlocutor gives any hint they aren’t just expressing the passing ‘I guess I’ll google to see if they’re a fraud’ level of vague non-interest.
Understanding the arguments comparing the evidence for the effectiveness of various charities requires a certain minimal level of interest and attention to be worthwhile at all, so we don’t really lose anything if the people who can’t be bothered to click a link or ask about evidence get the normal charity presentation. **The key thing is to avoid obscuring our effectiveness data in a rabbit hole of further hoops, as many other charities do, not to insist that we lead with that data in all situations.**
Peter Gerdes said:
I would, however, give a different defense of keeping EA weird. Far from reducing charitable donations I think it attracts many people to charitable giving who wouldn’t otherwise donate, generates an enthusiastic community who talks about charitable giving all year round and without that buzz I doubt the kind of people scared off by weirdness would even be considering EA charities.
I mean, rationalists are just like anyone else. I kinda feel like I should help people, but if EA offered nothing but a pile of boring spreadsheets about United Way versus Red Cross fundraising, I doubt I’d find it very compelling. In contrast, by making EA a fun playground of weird ideas and interesting arguments, and attaching community status to persuasive analyses, thinking about charitable giving becomes fun and something I enjoy doing.
In other words, EA weirdness is to the rationalist community what fundraising dinners, swanky museum galas and the like are to normal charitable contributors. They are our way of turning the unpleasant act of giving up wealth into a fun activity which makes us feel good about ourselves and which we are happy to pay for.
—
When it comes to the issue of directing people who don’t find the weirdness appealing to donate to more effective charities, I doubt handing them a pile of EA analyses with the weird stuff screened out is going to be very effective. Instead, we can just wait for more mainstream intellectual figures and elites to be persuaded that effectiveness matters and should be analyzed, and to push mainstream charities to publish usable data as well as making non-weird recommendations based on effectiveness.
In short, so what if some people see EA as the weird stuff and steer clear, when there are plenty of non-weird authorities (the Gates Foundation, etc.) who are happy to hand them a non-weird, effectiveness-based recommendation?
Jai said:
As a supporter of Weird EA, I feel compelled to point out that “making EA a fun playground of weird ideas” is probably a really, really, really bad thing to target. Targeting things that feel fun to think about is the road that passes by bad TED talks on the way to becoming Deepak Chopra. It’s a failure mode that people-who-tend-towards-EA should be *especially* careful of, because I think we’re far more prone to getting caught up in fun interesting ideas if we don’t keep ourselves tied to empiricism and boring analysis. I think if we weren’t careful, we could easily end up doing nothing but Weird Thought Experiments and meta^^^3-contrarian blog posts without ever going “hey, malaria cases actually increased 3% this year while we all got distracted by this intense debate about whether counterfactual thought experiments about twin-Earth p-zombie trolley operator twins in a box can suffer”.
benquo said:
Since you brought up malaria: note that (last time I looked into it) GiveWell simply does not bother to check what happened to malaria rates in areas where AMF operates, even though AMF collects the data. That is not necessarily unreasonable, but the info is sort of buried on a hard-to-find supplementary information page. So we already live in a world where even the least weird, most conventional EA institutions — the ones that get written up in major publications like The Atlantic with praise like “save lives with certainty” — are not really bothering to make sure their methods actually work for their flagship recommendation most years, and as far as I can tell haven’t made progress towards being able to check.
Peter Gerdes said:
A lot depends on what the other option is. If you assume people will continue to engage in charitable giving and try to figure out what the good interventions are even without this weird community, of course you are right.
I suspect, however, that most of the EA community would simply not be doing charity at all, or not offering ideas about how to do it, without this aspect of the community.
This, however, is the sort of thing we really need empirical information about.
Aapje said:
It is the fundamental tragedy of the human condition: we seek to communicate, but for many reasons (inferential distance, differences in intelligence, different intuitions, unwillingness to expend sufficient effort) we often fail. Insofar as we succeed, it is usually only a partial success.
This is why simplistic memes (which are frequently partially or wholly wrong, or extremely one-sided) are so successful. They often don’t depend on actual reasoning ability, but trigger intuitive responses. Haidt’s work on how personality traits are strongly correlated with political positions strongly suggests that people’s politics and opinions are heavily guided by intuitive responses. A lot of the perception of rationality is self-deception, as intuitive biases make people filter the world so that the inputs to their reasoning, and the way they reason, are highly biased (for instance, people are more likely to accept evidence which conforms to their existing beliefs and doubt evidence that doesn’t).
I would argue that science is the most successful method to combat this, by (theoretically) building an enclave of highly intelligent people who try as hard as possible to eradicate intuitiveness from their work. Then they (again theoretically) leave it to others to insert their subjectiveness (goals, priorities, what evidence they will accept, etc) and use this to actually act or convince others to act (politics/activism). Acting or getting others to act requires making choices that science can’t answer and to persuade in ways that are non-scientific, so choosing to act or to convince others to act necessitates abandoning pure science.
So, Ozy, what you seem to want is for EA to be scientific, resulting in very high correctness and objectivity. However, per your last paragraph, you also desire persuasiveness. As I argued above, these goals conflict. I would argue that the only way to achieve both in greater society is to separate them, in the same way we distinguish between science and politics/activism. Only by doing this can you have a culture that maximally favors objective correctness as well as a culture that favors improving the world. Combine them and the latter will invariably corrupt the former*.
* This is also why I find it very worrisome that there seems to be a strong push towards making science much more political/activist, which will inevitably cause the goal of eradicating intuitiveness/subjectivity to lose ground. Already parts of academia can be written off almost completely because they do almost no actual science, yet academia treats these parts as if they did science, which is very worrisome, as it indicates an inability to defend scientific culture. I also think that the heavy pressure on scientists to ‘produce’ (based on corrupt metrics of scientific productivity) has greatly damaged science in a way that pushes it toward bad science and reduces its ability to reject political/activist science.
Vanessa said:
I think it’s important to keep track of what the actual goal is. Advertising EA as a movement seems useful because (i) if more people use rational analysis in their altruistic donations/volunteering/enterprising, then more good will be done and (ii) if more people participate in the discussion of ranking causes and interventions, we will learn more about which causes and interventions are better. So, if our goals are these “meta-level” objectives, self-censorship seems to be self-defeating, since you are undermining the very principles you are trying to promote. On the other hand, you might be working from the assumption that you already know what the “best” intervention is, and that the goal is getting more people to support it. In this case you probably shouldn’t be advertising EA at all, but should be advertising a specific “object-level” organization such as AMF or MIRI. You can mention EA in order to explain why you think that e.g. AMF is better than other global poverty charities (although even then you should probably focus on GiveWell rather than EA in general), but your goal isn’t getting a person on board with EA, your goal is getting a person to donate to AMF.
ozymandias said:
Yes, I agree. I do think it’s quite unlikely we’ve figured out all the most cost-effective causes, and regardless the cost-effective causes change over time, so I think (all things equal) a new EA is far more important than a new non-EA AMF donor.
benquo said:
Unfortunately, the major EA organizations are marketing organizations, which makes it somewhat difficult to make a coordinated effort to do something other than marketing. You’d need to structure an organization quite differently to get the sort of thing you’re looking for, I think. I encourage you to lay out in more detail what an org or community that did the thing you’re looking for might look like; some EAs might thereby be motivated to shift to that model.
Mosquito that can type said:
“the fact that most nets will not actually prevent any cases of malaria”
Do you mean anything other than “without nets only one in a few hundred people ever gets malaria, therefore only one in a few hundred nets prevents malaria”?
ozymandias said:
Yep, and therefore cost-per-net is misleadingly cheap when it comes to outcomes we actually care about.
fubarobfusco said:
Isn’t that why they talk about cost-per-life-saved, though?
I mean, most preventative measures are like this, sometimes with a bit of herd immunity to help. Smoke detectors are a pretty good idea even if only a very small fraction of them actually ever save someone from a fire.
LikeLike
ozymandias said:
That’s my point. Go look on AMF’s website, they prominently mention cost per net and do not include a cost-per-life-saved estimate.
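The gap between the two measures can be made concrete with a back-of-the-envelope calculation. Both numbers below are entirely made up for illustration (neither comes from AMF or GiveWell): assume a net costs $5 and roughly one death is averted per 500 nets distributed.

```python
# Back-of-the-envelope sketch with purely illustrative numbers.
cost_per_net = 5.00          # dollars per net (made up)
nets_per_life_saved = 500    # nets distributed per death averted (made up)

# The outcome we actually care about costs far more than the headline figure.
cost_per_life_saved = cost_per_net * nets_per_life_saved
print(f"${cost_per_net:.2f} per net, ${cost_per_life_saved:,.2f} per life saved")
```

Under these assumptions the headline number and the number that matters differ by a factor of 500, which is why quoting only cost per net is misleading.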