There’s an issue that I sort of addressed in my post about religion and effective altruism, and that also plays a role in a lot of discussion about weird effective altruism (insect suffering, wild-animal suffering, artificial general intelligence). The question is, essentially: how welcoming should the effective altruist community be to ineffective altruists who say that they’re EAs?

I think there are three competing impulses we need to balance:

Epistemic modesty. What if they’re right? What if insects do suffer, wild animal lives matter, an artificial general intelligence risks destroying humanity, and Hell exists and billions of people are suffering for eternity? Wouldn’t effective altruists want to do something about that? And how the hell can we do something about it if we’ve kicked all of those people out for being weird?

It is possible that you are wrong, and you must act in such a way as to reduce the harm of your being wrong. The best protection effective altruism has against people who are wrong is the free market of ideas. We decide which charities are effective through discussion, debate, experiment, and empiricism, not by fiat, and every closed discussion can be reopened if there’s new evidence. The one protection against the failure of individual intelligence is collective intelligence; the one protection against the failure of collective intelligence is individual intelligence. The process is slow and painful, but over time we will come to believe more true things than false things. The arc of the epistemic universe is long, but it points towards truth.

Welcomingness. If your effective altruism meetups only accept people who already agree with you, then there aren’t going to be any new effective altruists. If someone starts saying “I’ve been reading about effective altruism and I don’t know much about it yet, but EA has inspired me to fly to Africa and build a school for my mission trip!”, then gently asking why they believe what they believe is probably going to be a better path than yelling at them about the evils of voluntourism.

Of course, that’s a pretty fucking condescending way to treat people, so I think it’s important to balance welcomingness with epistemic modesty. Hell, maybe they’re onto something! New people – particularly new people from backgrounds underrepresented in effective altruism, such as anything that isn’t programming – might not know a lot about effective altruism, but don’t mistake that for lack of insight; even if they’re not up on all the jargon, they might have a perspective that lets them notice something we’ve all missed.

Effectiveness. If the five cause areas of effective altruism became existential risk, animal rights, global poverty, meta-charity, and the Foundation To Cure Rare Diseases In Cute Puppies, I think that something would have gone wrong.

It’s important to note the distinction between effective altruism having a cause area devoted to the Foundation To Cure Rare Diseases In Cute Puppies and individual effective altruists who support the Foundation To Cure Rare Diseases In Cute Puppies. If an individual happens to support the Foundation To Cure Rare Diseases In Cute Puppies, then all things considered it’s better for them to be an effective altruist than not. Maybe in hanging around the community they’ll become more sensible. And, hell, maybe they’re right! Maybe those cute puppies with rare diseases are utility monsters and curing their diseases is literally the most important thing we could be doing! But if Foundation supporters become an actual demographic within EA, and no evidence of utility monsterhood has been presented, then there’s an issue with EA.

I suspect the way to solve this is simply to continue to emphasize effectiveness. Of course, all conversations should be consensual, and sometimes it’s inappropriate to have certain conversations (there’s no sense derailing a conversation about budgeting for effective altruism because one of the people involved happens to be a Foundation supporter). Nevertheless, we should relentlessly ask: “Why do you believe this? Why are you donating there? Where’s your data? Have you thought about this argument? Do you think there’s a better use of your money?” Only in an environment of relentless self-criticism can effective altruism thrive.