
There was a big fight on Rationalist Tumblr about MIRI last week, so I decided to write up my general thoughts on the matter.

Normally, I like to keep my conversations on the object level whenever possible. Unfortunately, I really cannot do that in this post, because I am completely unqualified to assess the evidence that MIRI is or is not an effective organization: I cannot tell apart a deep mathematical finding from one that’s trivial or even incorrect, and I don’t have a sense for how much math one should expect mathematicians to create. However, several people I trust have said, mostly in private conversations, that to them MIRI appears to be producing about as much math as half a CS grad student. If true, this is a damning statement.

However, very, very few of the conversations I’ve seen about MIRI have centered around MIRI’s effectiveness as an organization, either pro or con; those that have, have usually been prompted by a single person, Su3su2u1, who does not identify as an effective altruist. Pro-MIRI arguments, in my experience, have a dreadful tendency to provide (compelling or non-compelling) arguments that AI risk is very important, and then assume that donations to MIRI are the natural outcome of this belief. All too often, they fall into the “something must be done, this is something, therefore it should be done” fallacy.

It is genuinely difficult to figure out a way of assessing the effectiveness of a speculative organization like MIRI. If they produce little math, is this because of the difficulty of the problem or some kind of organizational incompetence? How can we figure out a proxy measure that is genuinely correlated with the outcomes we care about? However, that is a reason to put more work into assessing MIRI’s competence as an organization, not to simply assume it must be competent because we have no way of telling otherwise.

There are a lot of reasons people don’t want to have this conversation. Most obviously, it would create drama, and many people are averse to drama. Many people, including myself, have a lot of respect for Eliezer Yudkowsky as a person. However, it should not be taken as an insult to say “I’ve looked into it, and I don’t think the charity you’re running is particularly effective”; ideally, our norm should be that that sort of criticism is a compliment. We’re all trying to do the most good here, right?

In addition, the term “cause neutrality” has blocked these conversations. In its original sense, cause neutrality refers to my own personal decisions: I should be indifferent between helping people by buying them malaria nets, helping people by buying them cows, and helping people by buying them tickets to see performance artists. If it turned out that the performance artists outperformed the malaria nets, then I ought to overcome my personal disgust for performance art, shut up, and multiply.

“Cause neutrality” has been expanded to mean something like “be polite to people with other causes”. This is obviously a good thing, because there are many places where, say, animal rights EAs and global poverty EAs share common goals. If the question is figuring out the most effective investment strategy for effective altruists, it’s good for the comment thread not to devolve into “DIE CARNIST SCUM.” However, too often it metastasizes into “you should be nice to people about their sincerely held beliefs, no matter how awful.” If a person believes that Hell exists and the most effective altruism is baptizing everyone, the correct response is not “well, effective altruism is a big tent, and as long as you sincerely believe in your faith you’re welcome”; it’s “you numbskull, Hell doesn’t fucking exist.” This is not a movement of people who sincerely honestly believe; this is a movement of people who are concerned with objective facts and evidence.

Worse, I have often seen people become defensive in response to criticisms of MIRI as an ineffective organization. Since criticisms of AI risk in general and MIRI in specific often come from people who do not identify as effective altruists, I’ve seen many people explicitly respond with “this leaves me with a bad taste in my mouth because you’re not an EA.” But is there some law that says that no truth comes from members of the outgroup? If a criticism is false, then by all means expose it, but the speaker doesn’t matter. If the world’s biggest fool says that it’s raining outside, that doesn’t mean it’s sunny.

In many cases, the non-EAs in question do donate to GiveWell-recommended charities. To my mind, that means they’re actually effective altruists: they want to help others in an evidence-based fashion. Their opinions about the direction of the movement ought to matter as much as any other effective altruist’s. I mean, how else are we going to define it? “People who go to the right kinds of parties”? “People who have firm opinions about utility aggregation”? “People who get really teary about smallpox”?

To be clear, I am not saying that MIRI is an ineffective organization; as I said above, I am incapable of assessing MIRI’s effectiveness. However, I do want to encourage speaking up among people who are privately thinking “MIRI isn’t very effective” but feel reluctant to say anything because they don’t want to create drama or start shit. And I do think the evidence is unclear enough that we should have an informed discussion of this issue.

I think that the presence of MIRI supporters in the movement has probably been net positive. Less Wrong is the #1 source of new effective altruists; although some donate to MIRI, many donate to animal rights, global poverty, or other existential risk charities. Without Less Wrong, most of the MIRI supporters wouldn’t be donating to GiveWell top charities; they probably wouldn’t be donating anywhere at all. The Sequences include such foundational pieces of EA thought as Purchase Fuzzies and Utilons Separately and Money: The Unit of Caring. Scott Alexander has written classic essays like Nobody Is Perfect, Everything Is Commensurable and Efficient Charity: Do Unto Others. Pretty much all of the Minding Our Way archives are worth reading for the committed EA looking to avoid guilt and burnout.

In the event that MIRI is found to be an ineffective charity, or even that existential risk is found to be intractable altogether, I would very much hope we could continue to have Nate Soares, Eliezer Yudkowsky, and Scott Alexander in our movement. It would be sadder without them.

I utterly reject the PR argument against MIRI. Let’s be honest: the people who are rejecting effective altruism because of MIRI aren’t rejecting it after a careful consideration of Bostrom’s arguments in Superintelligence and a thorough examination of the evidence for MIRI’s effectiveness. They’re rejecting it because it sounds weird. They’re going “ha ha! Those nerds think we can build a god-AI and solve all our problems! They’ve read too much science fiction!”

I don’t know what the best thing to do is. I don’t think it’s particularly likely to be purchasing malaria nets. And I think it’s very likely that the best thing to do is really really weird. That it’s, I don’t know, geoengineering or intervening in wild-animal suffering or encouraging people to buy things from sweatshops or sterilizing mosquitoes. (Note that I’m not saying any of those ideas are good, or even not harmful; I’m just giving examples of the kind of thing it might be.) And in that case I would not want people to go “gee, Ozy, your geoengineering sterile mosquitoes for sweatshop promotion charity is really weird, so in spite of your evidence that it’s the best charity we’re just going to keep buying those mosquito nets. Plays better in Middle America, you see.” The membership criteria for effective altruism must be, well, effectiveness and altruism: not PR.

Many of the proponents of the PR argument do not seem to be taking it particularly seriously themselves. The article I’ve most commonly seen cited to back up the claim that EA pretends to be about global poverty while actually being about building God-AIs was, in fact, written by an EA. To a large degree, that article is about how MIRI is bad PR. If MIRI is such bad PR, maybe you shouldn’t write an essay for a highly-trafficked news website linking it with EA.

Many people have said things along the lines of “MIRI should be kicked out of EA.” My stance on this entirely depends on what the words “kicked out of EA” mean. If they mean that, after careful analysis and a lot of discussion among members of the community, people generally come to agree that MIRI is not currently an effective charity, then I’m all for it. (To their credit, many supporters of “kicking MIRI out of EA” do mean that.) However, if they want to skip the discussion part, I think this is tremendously harmful.

Ultimately, I think every movement gets wrong-headed sometimes. The world is full of sexist feminists, credulous skeptics, rape apologist anti-rape advocates, uncivil advocates of civility, free speech absolutists who want to ban dissent, and irreligious and unchaste evangelicals. “Come on, guys, you had ONE JOB” is the human condition. The problem is not when effective altruism, like every other movement, fails at its set goal. The problem is when we fail to self-correct.

The purpose of the EA movement, as I see it, is collaborative truthseeking. Effective altruism is a question, not an answer. We ask very specific questions like “do people use their malaria nets to fish?”; we ask broader questions like “does curing people of parasitic worms improve their school attendance?” When we grow philosophical, we ask questions like “what kind of being matters morally? How do we trade off values against each other? What are the best ways of figuring out whether something does what we think it does?” “Is MIRI effective?” is just another question. And it’s about time we put serious effort into finding the answer.