There was a big fight on Rationalist Tumblr about MIRI last week, so I decided to write up my general thoughts on the matter.
Normally, I like to keep my conversations on the object level whenever possible. Unfortunately, I really cannot do that in this post, because I am completely unqualified to assess the evidence that MIRI is or is not an effective organization: I cannot tell apart a deep mathematical finding from one that’s trivial or even incorrect, and I don’t have a sense for how much math one should expect mathematicians to create. However, several people I trust have said– mostly in private conversations– that to them MIRI appears to be producing about as much math as half a CS grad student. If true, this is a damning statement.
However, very, very few of the conversations I’ve seen about MIRI have centered around MIRI’s effectiveness as an organization, either pro or con; those that have have usually been prompted by a single person, Su3su2u1, who does not identify as an effective altruist. Pro-MIRI arguments, in my experience, have a dreadful tendency to provide (compelling or non-compelling) arguments that AI risk is very important, and then assume that donations to MIRI are the natural outcome of this belief. All too often, they fall into the “something must be done, this is something, therefore it should be done” fallacy.
It is genuinely difficult to figure out a way of assessing the effectiveness of a speculative organization like MIRI. If they produce little math, is this because of the difficulty of the problem or some kind of organizational incompetence? How can we figure out a proxy measure that is genuinely correlated with the outcomes we care about? However, that is a reason to put more work into assessing MIRI’s competence as an organization, not to simply assume it must be competent because we have no way of telling otherwise.
There are a lot of reasons people don’t want to have this conversation. Most obviously, it would create drama, and many people are averse to drama. Many people, including myself, have a lot of respect for Eliezer Yudkowsky as a person. However, it should not be taken as an insult to say “I’ve looked into it, and I don’t think the charity you’re running is particularly effective”; ideally, our norm should be that that sort of criticism is a compliment. We’re all trying to do the most good here, right?
In addition, the term “cause neutrality” has blocked these conversations. In its original sense, cause neutrality refers to my own personal decisions: I should be indifferent between helping people by buying them malaria nets, helping people by buying them cows, and helping people by buying them tickets to see performance artists. If it turned out that the performance artists outperformed the malaria nets, then I ought to overcome my personal disgust for performance art, shut up, and multiply.
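(To make the “shut up and multiply” step concrete, here is a toy comparison in Python with entirely made-up cost-effectiveness numbers; none of these figures come from any real estimate, and the causes are just the ones named above.)

# Hypothetical cost-effectiveness figures, in dollars per DALY averted.
# These numbers are invented purely for illustration.
cost_per_daly = {
    "malaria nets": 100,
    "cows": 400,
    "performance art tickets": 60,  # suppose, for the sake of argument, the weird option wins
}

budget = 10_000
for cause, cost in sorted(cost_per_daly.items(), key=lambda kv: kv[1]):
    print(f"{cause}: about {budget / cost:.0f} DALYs averted per ${budget:,}")

# Cause neutrality means funding whichever line prints the biggest number,
# however much the winning cause makes you cringe.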
“Cause neutrality” has been expanded to mean something like “be polite to people with other causes”. This is obviously a good thing, because there are many places where, say, animal rights EAs and global poverty EAs share common goals. If the question is figuring out the most effective investment strategy for effective altruists, it’s good for the comment thread not to devolve into “DIE CARNIST SCUM.” However, too often it metastasizes into “you should be nice to people about their sincerely held beliefs, no matter how awful.” If a person believes that Hell exists and the most effective altruism is baptizing everyone, the correct response is not “well, effective altruism is a big tent, and as long as you sincerely believe in your faith you’re welcome”; it’s “you numbskull, Hell doesn’t fucking exist.” This is not a movement of people who sincerely honestly believe; this is a movement of people who are concerned with objective facts and evidence.
Worse, I have often seen people become defensive in response to criticisms of MIRI as an ineffective organization. Since criticisms of AI risk in general and MIRI in specific often come from people who do not identify as effective altruists, I’ve seen many people explicitly respond with “this leaves me with a bad taste in my mouth because you’re not an EA.” But is there some law that says that no truth comes from members of the outgroup? If a criticism is false, then by all means expose it, but the speaker doesn’t matter. If the world’s biggest fool says that it’s raining outside, that doesn’t mean it’s sunny.
In many cases, the non-EAs in question do donate to Givewell-recommended charities. To my mind, that means they’re actually effective altruists: they want to help others in an evidence-based fashion. Their opinions about the direction of the movement ought to matter as much as any other effective altruist’s. I mean, how else are we going to define it? “People who go to the right kinds of parties”? “People who have firm opinions about utility aggregation”? “People who get really teary about smallpox”?
To be clear, I am not saying that MIRI is an ineffective organization; as I said above, I am incapable of assessing MIRI’s effectiveness. However, I do want to encourage speaking up among people who are privately thinking “MIRI isn’t very effective” but feel reluctant to say anything because they don’t want to create drama or start shit. And I do think the evidence is unclear enough that we should have an informed discussion of this issue.
I think that the presence of MIRI supporters in the movement has probably been net positive. Less Wrong is the #1 source of new effective altruists; although some donate to MIRI, many donate to animal rights, global poverty, or other existential risk charities. Without Less Wrong, most of the MIRI supporters wouldn’t be donating to Givewell top charities; they probably wouldn’t be donating anywhere at all. The Sequences include such foundational pieces of EA thought as “Purchase Fuzzies and Utilons Separately” and “Money: The Unit of Caring.” Scott Alexander has written classic essays like “Nobody Is Perfect, Everything Is Commensurable” and “Efficient Charity: Do Unto Others.” Pretty much all of the Minding Our Way archives are worth reading for the committed EA looking to avoid guilt and burnout.
In the event that MIRI is found to be an ineffective charity– or even that existential risk is found to be intractable altogether– I would very much hope we could continue to have Nate Soares, Eliezer Yudkowsky, and Scott Alexander in our movement. It would be sadder without them.
I utterly reject the PR argument against MIRI. Let’s be honest: the people who are rejecting effective altruism because of MIRI aren’t rejecting it after a careful consideration of Bostrom’s arguments in Superintelligence and a thorough examination of the evidence for MIRI’s effectiveness. They’re rejecting it because it sounds weird. They’re going “ha ha! Those nerds think we can build a god-AI and solve all our problems! They’ve read too much science fiction!”
I don’t know what the best thing to do is. I don’t think it’s particularly likely to be purchasing malaria nets. And I think it’s very likely that the best thing to do is really really weird. That it’s, I don’t know, geoengineering or intervening in wild-animal suffering or encouraging people to buy things from sweatshops or sterilizing mosquitoes. (Note that I’m not saying any of those ideas are good, or even not harmful– I’m just giving examples of the kind of thing it might be.) And in that case I would not want people to go “gee, Ozy, your geoengineering sterile mosquitoes for sweatshop promotion charity is really weird, so in spite of your evidence that it’s the best charity we’re just going to keep buying those mosquito nets. Plays better in Middle America, you see.” The membership criteria for effective altruism must be, well, effectiveness and altruism: not PR.
Many of the proponents of the PR argument do not seem to be taking it particularly seriously themselves. The article I’ve most commonly seen cited to back up the arguer’s belief that EA pretends to be about global poverty and is actually about building God-AIs was, in fact, written by an EA. To a large degree, that article is about how MIRI is bad PR. If MIRI is such bad PR, maybe you shouldn’t write an essay for a highly-trafficked news website linking it with EA.
Many people have said things along the lines of “MIRI should be kicked out of EA.” My stance on this entirely depends on what the words “kicked out of EA” mean. If they mean that, after careful analysis and a lot of discussion among members of the community, people generally come to agree that MIRI is not currently an effective charity, then I’m all for it. (To their credit, many supporters of “kicking MIRI out of EA” do mean that.) However, if they want to skip the discussion part, I think this is tremendously harmful.
Ultimately, I think every movement gets wrong-headed sometimes. The world is full of sexist feminists, credulous skeptics, rape apologist anti-rape advocates, uncivil advocates of civility, free speech absolutists who want to ban dissent, and irreligious and unchaste evangelicals. “Come on, guys, you had ONE JOB” is the human condition. The problem is not when effective altruism, like every other movement, fails at its set goal. The problem is when we fail to self-correct.
The purpose of the EA movement, as I see it, is collaborative truthseeking. Effective altruism is a question, not an answer. We ask very specific questions like “do people use their malaria nets to fish?”; we ask broader questions like “does curing people of parasitic worms improve their school attendance?” When we grow philosophical, we ask questions like “what kind of being matters morally? How do we trade off values against each other? What are the best ways of figuring out whether something does what we think it does?” “Is MIRI effective?” is just another question. And it’s about time we put serious effort into finding the answer.
davidmikesimon said:
Why do you think that purchasing malaria nets is not likely to be the best thing to do?
Like, specifically, do you mean that there’s some silver bullet intervention that would work fantastically well, but it’s buried among all the other long-shot ideas and we haven’t figured out how to filter it out yet? I definitely agree with that.
But if you’re saying there’s a better choice than malaria nets (and similar things recommended by GiveWell and co.) given our current knowledge, I disagree. KISS is a pretty good rule of thumb. It seems hard to beat simple interventions with straightforward metrics that report solid ROI, at least for the exploit side of the exploit/explore division.
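(As an illustrative aside, here is a minimal epsilon-greedy sketch of that exploit/explore division, with invented payoff numbers; nothing here reflects GiveWell data, and the “interventions” are placeholders. The exploit side keeps funding the well-measured option; the small explore budget is what eventually surfaces a better long shot, if one exists.)

import random

random.seed(0)

# True mean "impact per dollar" for each option (hypothetical); only the first is well measured.
true_impact = {"proven intervention": 1.0, "long shot A": 0.2, "long shot B": 3.0}
estimate = {name: (1.0 if name == "proven intervention" else 0.0) for name in true_impact}
pulls = {name: (100 if name == "proven intervention" else 0) for name in true_impact}

epsilon = 0.1  # fraction of grants spent exploring
for _ in range(1000):
    if random.random() < epsilon:
        choice = random.choice(list(true_impact))      # explore: probe an arbitrary option
    else:
        choice = max(estimate, key=estimate.get)       # exploit: fund the best-measured option
    observed = random.gauss(true_impact[choice], 0.5)  # noisy measurement of impact
    pulls[choice] += 1
    estimate[choice] += (observed - estimate[choice]) / pulls[choice]

print(estimate)  # the estimate for "long shot B" climbs toward its true value over time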
ozymandias said:
The former, and I’m optimistic about Open Phil & co. figuring out what it is.
Anaxagoras said:
It’s usually safe to bet on the field. I don’t know what it is, but in the space of possible actions we could take, AMF is probably merely quite good; it’s just easy to tell that it’s good.
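(A toy simulation of the “bet on the field” intuition, with an assumed heavy-tailed impact distribution and an arbitrary score for AMF, neither taken from anywhere real: any single alternative is unlikely to beat the well-measured option, but a large enough field almost certainly contains something that does.)

import random

random.seed(0)

amf_score = 10.0      # assumed, well-measured impact score for AMF (arbitrary units)
n_alternatives = 500  # size of "the field" of speculative interventions
trials = 2000

single_wins = 0
field_wins = 0
for _ in range(trials):
    field = [random.lognormvariate(0, 1) for _ in range(n_alternatives)]
    single_wins += sum(score > amf_score for score in field)
    field_wins += max(field) > amf_score

print("chance a given alternative beats AMF:", single_wins / (trials * n_alternatives))
print("chance the field contains something better:", field_wins / trials)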
Jacob Schmidt said:
In my experience you can often get people on board with a weird idea with some evidence (usually in the form of actual accomplishments), some expert support, and a plausible mechanism.
MIRI doesn’t really have any of that. The best arguments for AI risk kinda gloss over any actual mechanism, often falling back on “it hasn’t been ruled out yet.” It has some expert support, though usually experts from other fields. And MIRI in its various forms hasn’t really accomplished anything in its 10+ years of operating.
I suspect for most people “weird” in this case doesn’t mean “unusual and poorly explored/understood.” It’s “matches pretty closely with the cluster of scam artists and woo peddlers like homeopathy, reiki, enema cleanses, penile growth supplements, and ‘get $7200 a month part time from home!’ ads.”
This argument is kind of terrible. That MIRI and EA are linked is obviously true, i.e. the link is pretty damn visible. If you believe the visibility of said link is a problem, your options are to downplay it (kinda dishonest), deny it (outright lying), criticize it (may or may not increase visibility), do nothing (will likely accomplish nothing), or try to convince people that it’s actually a good thing (apparently untrue).
If this were a party and your friend’s fly was down, you could pull them aside and let them know quietly. This isn’t a party, it’s a global movement, and the friend in question has their fly down in a deliberate display.
Rob Bensinger said:
A lot of the basic case for prioritizing AI safety work is echoed by GiveWell/OpenPhil on http://www.givewell.org/labs/causes/ai-risk. I might be able to elaborate on that case if you said more about the unanswered questions you had in mind re “the best arguments for AI risk kinda gloss over any actual mechanism”.
GiveWell/OpenPhil cite Eliezer Yudkowsky and Nick Bostrom as originating some of the main arguments they take seriously for worrying about AI risk, so they give some credit to MIRI and FHI for helping make the big-picture case clearer. OpenPhil also gave $1,186,000 to the FLI grant winners (pooled with $6M from Elon Musk), who included MIRI and FHI; though this has to be weighed against the fact that Holden harshly criticized MIRI back in 2012 (http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/).
All of the groups working on existential risk from AI are on good terms with MIRI, which I think is sufficient for non-specialists to distinguish us from homeopathy, though it’s possible you might still classify us with string theory (‘idea taken seriously by a community of respected specialists, but also subject to lots of academic skepticism’). Our ideas were also cited in the leading undergrad textbook in AI (https://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/), and Stuart Russell (one of the textbook’s co-authors) is currently collaborating with us on part of our technical agenda (http://intelligence.org/files/CorrigibilityAISystems.pdf). I don’t necessarily expect that to convince the most skeptical EAs, though; see https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/ and the rest of my comment below.
Patrick said:
“All of the groups working on existential risk from AI are on good terms with MIRI, which I think is sufficient for non-specialists to distinguish us from homeopathy…”
Presumably your critics are objecting to AI risk’s status as a valid field, not MIRI’s status within the field. The respect you’re accorded by other members of the field would likely be irrelevant to them.
Jacob Schmidt said:
Honestly, I’m referring to Bostrom here. I’ll admit I haven’t read the book for myself, but several sources friendly to EA, e.g. SSC, have quoted Bostrom somewhat extensively, summarizing and supplementing the arguments therein. Assuming I can trust these sources (which may or may not be a good assumption, though I’d be curious as to who I should trust to accurately describe arguments if not actual proponents of AI risk), and given how well received the book was among AI risk proponents, I’m left with the impression that AI risk has little going for it in the way of plausible mechanisms.
And Patrick is right RE: expert support. That MIRI gets on well with the rest of the AI risk community doesn’t mean much, any more than homeopaths getting along with other homeopaths.
Stuart Russell’s support is to MIRI’s credit, and caused me to give MIRI a bit more credit than I did previously, but it’s not nearly enough to be convincing. Give me ten minutes online and I bet I can find a half dozen MDs who believe in homeopathy. It’s good that MIRI has at least some expert support, but Russell and co. aren’t nearly enough, especially absent concrete accomplishments.
Rob Bensinger said:
I’m still not clear on what you mean by ‘there are no mechanisms.’ Can you turn this into a particular question or objection you have to some part of the big-picture story on e.g. http://www.givewell.org/labs/causes/ai-risk or http://edge.org/conversation/the-myth-of-ai#26015? Are any of your questions answered, for example, by Eliezer’s first three comments on https://www.reddit.com/r/MachineLearning/comments/404r9m/ama_the_openai_research_team/cystwwf?
Rob Bensinger said:
Thanks for writing this. As someone who works at MIRI, I 100% endorse public debate about whether MIRI is an effective organization, and I also endorse “kicking us out” if the debate arrives at a consensus that we aren’t effective. (Hopefully we’d also have the self-awareness to disband in that case.) We don’t have the time to participate in every public discussion, but you can forward questions/links to rob@intelligence.org.
Since you’re arguing for “more talk about specific reasons to think MIRI is or isn’t effective, less talk about irrelevant meta stuff,” I should also actually respond to the criticisms you cited:
– I haven’t seen a rigorous comparison of how MIRI’s output compares to other research organizations’, but our output since shifting from an outreach focus to a research focus in 2013 seems comparable to the examples in http://rationalconspiracy.com/2015/08/21/citations-in-math-academia/.
– Agreed that “AI risk is very important” isn’t a sufficient argument for MIRI. I wrote my own response to “why MIRI in particular?” on https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/. Other MIRI-specific arguments (e.g., “MIRI’s Approach”) are linked at https://intelligence.org/info.
– Daniel Dewey (GiveWell/OpenPhil consultant, formerly at FHI) gives a good overview of different strategies for AI existential risk mitigation at http://www.danieldewey.net/fast-takeoff-strategies.pdf. I think most people aren’t aware that we’re putting most of our eggs in the “AI-empowered projects” basket, rather than the “sovereign AI” basket, for more or less the reasons Daniel cites.
Also, in fairness to the “MIRI is bad PR for EA” perspective, I’ve seen MIRI’s co-founder (Eliezer Yudkowsky) make the argument himself that things like malaria nets should be the public face of EA, not AI risk. Though I’m not sure I agree, in part because I’m not sure what “public face” means. So much of this discussion about EA’s public face (including the Vox piece) is happening on highly trafficked public websites to begin with. The EA Facebook group has 9,158 members; should we treat that as part of our “public face” and only discuss schistosomiasis and GiveDirectly there?
I also buy that filtering for ‘people who can seriously examine ideas even when they’re weird’ might be more helpful early in EA’s development than filtering for ‘people who can appreciate the case for AMF.’ If we were optimizing for having the right ‘public face’ I think we’d be talking more about things that are in between malaria nets and AI on the obviousness/weirdness spectrum, like biosecurity and macroeconomic policy reform.
Rob Bensinger said:
Also, this isn’t an official MIRI response (unlike the posts on https://intelligence.org/info), though it’s obviously informed by the fact that I work there.
Professor Frink said:
Why did Yudkowsky say SIAI/MIRI (as compared to FHI) has always been about solving the technical problem? Also, why was MIRI hiring so many researchers if they weren’t primarily about researching? This pivot looks like a post-hoc justification for years of low output.
Evan Þ said:
On the other hand, if MIRI recognized in 2013 that they hadn’t been doing good research before, and fixed things so they significantly improved, that’s still a good thing.
Rob Bensinger said:
You’d be better off asking Eliezer yourself; I haven’t spoken with him about this. At a glance, I see Eliezer saying ‘we did lots of important unpublished / non-peer-reviewed research early on’ + ‘MIRI’s core mission has always been to make research progress’. That’s consistent with most of MIRI’s person-hours being spent on outreach pre-2013.
Luke announced this shift in priorities in early 2013 (https://intelligence.org/2013/04/13/miris-strategy-for-2013/), shortly after we ran our last Singularity Summit, ran our first math workshop, and changed our name from ‘Singularity Institute’ to ‘MIRI’. http://intelligence.org/all-publications confirms that we have a lot more technical publications (and papers in general) beginning in 2013. Maybe it was a mistake not to pivot sooner, or maybe our priorities should have been different pre-pivot; but I think it’s clear the pivot happened, was planned, etc. Another reason we operate differently than SIAI-MIRI did might be that none of MIRI’s current staff were around before late 2012, with the exception of Eliezer.
Professor Frink said:
Sure, their post 2013 output, while not as bad as their pre 2013, is pretty thin. Three arxiv papers, I think only one has a result.
One of them, the decision theory paper, looks like a write-up of timeless decision theory, which has been kicking around since 2008. The write-up seems to lack a result; it only presents unsatisfactory attempts at formalization.
MugaSofer said:
>At a glance, I see Eliezer saying ‘we did lots of important unpublished / non-peer-reviewed research early on’ + ‘MIRI’s core mission has always been to make research progress’. That’s consistent with most of MIRI’s person-hours being spent on outreach pre-2013.
Is it? To me, that sounds like Eliezer claiming that MIRI was, and has always been, doing research, and simply declined to publish it.
Is there some context to these comments I’m missing as a layman?
James Babcock said:
Professor Frink said:
> Sure, their post 2013 output, while not as bad as their pre 2013, is pretty thin. Three arXiv papers, I think only one has a result.
Huh? https://intelligence.org/all-publications/ has a lot more than three. Do you mean three in a particular category, excluding other categories?
Professor Frink said:
@James Babcock – 3 papers on arxiv, which are the more cleaned-up, ready-to-go results. In terms of paper quality, publications >= arxiv >>> small technical reports on a website.
James Babcock said:
I encourage anyone who happens upon this thread to follow the link to MIRI’s publications list, and count (a) the number of publications in venues at-least-as-impressive as arXiv, and (b) the number of papers on arXiv in particular.
Professor Frink is straight-up lying. There are more than three papers *on arXiv in particular*. This error is not one that could have been made honestly.
Professor Frink said:
@James Babcock: The papers I found on arxiv were “Towards Idealized Decision Theory,” “Robust Cooperation on the Prisoner’s Dilemma,” and “Reflective Oracles.” Which arxiv paper did I miss? I found them by searching for Yudkowsky and then searching for his coauthors on arxiv.
Now, I’m not sure what you count as an impressive venue, but glancing through their publication list they do a lot of double counting, e.g. “Two Attempts to Formalize Counterpossible Reasoning in Deterministic Settings” is the same as “Toward Idealized Decision Theory” on arxiv; they’ve just listed it twice.
Which publication venues do you think are particularly impressive? It looks like a lot of small, all-submissions-taken conferences (like AGI).
James Babcock said:
You can just go to their all-publications page and search for “arXiv”. You do not need me to point out the ones you missed.
I’m not sure where you get the idea that “Two Attempts to Formalize Counterpossible Reasoning in Deterministic Settings” is a duplicate of “Toward Idealized Decision Theory”. I would invite any readers of this thread to look at how these two papers are characterized on the all-publications page, and at the actual PDFs.
I’m not sure where you get the idea that AGI is small or is “all submissions taken”, but I do not trust you enough to report that without a cited source.
Professor Frink said:
I missed 1; there are 4 arxiv papers since 2013 listed on their website. I’m not sure if 3 vs 4 arxiv papers is going to make or break their effectiveness for you. I’ll read the Benford’s law paper I missed a bit later. The reason I missed it is that none of the authors (Scott Garrabrant, Siddharth Bhaskar, Abram Demski, Joanna Garrabrant, George Koleszarik, and Evan Lloyd) are on the other MIRI arxiv papers.
I get the idea that Toward Idealized Decision Theory is the same paper as Two Attempts to Formalize Counterpossible Reasoning in Deterministic Settings because I actually read the first and skimmed the second. Also, MIRI themselves say that Toward… is “An extended version of “Two Attempts to Formalize Counterpossible Reasoning in Deterministic Settings.”” Have you read them? It’s pretty clear these are different drafts of the same thing.
Anyway, duplication like this exists throughout that list. “Robust Cooperation on the Prisoner’s Dilemma: Program Equilibrium via Provability Logic,” for instance, is the same result as “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem.”
And you can look up AGI if you don’t want to trust me.
James Babcock said:
I did look it up. They say they’re large. I couldn’t find any statement about their acceptance rate, and I don’t think you found anything like that, either; you called it “an all submissions taken” conference not because you knew that to be the case, but because you thought calling it that would make MIRI look bad.
(The number of arXiv papers is five. You are still missing one, because you did not press Ctrl+F.)
Boz said:
AGI has an acceptance rate of between 40 and 60 percent depending on the year and how you count it. Read the proceedings and they usually give you stats. Definitely not hard to get into, but not literally accepting everything. I’d say Frink is accurate that it isn’t that impressive, but not that it takes exactly everything.
Professor Frink said:
AGI has an acceptance rate of 40% – 60% depending on the year and how you count it. You can generally find this information in the conference proceedings for a given year. Yes, I was being hyperbolic, you pissed me off when you called me a liar for the mistake of missing a single arxiv paper. My point was that it isn’t a particularly prestigious venue, and I was asking which papers you thought were in prestigious venues. You did not answer.
As to the arxiv papers: there are 4 papers AFTER 2013. If you look at my statement, I was discussing MIRI’s POST-2013 work. If you include pre-2013 work, there are at least 7 papers on arxiv. Bill Hibbard appears to have 2 papers on arxiv that they don’t list as such on their website.
I’ve admitted to missing the one arxiv paper, will you admit the papers I’ve pointed to as duplicates are duplicates?
Evgeny said:
Wanted to clear a few things up. (I am a math Ph.D. student who has published in computer science.)
Conference proceedings are generally considered *more* prestigious than ordinary journals in computer science, and many important results are published only in conference proceedings. This is *not* true in math or physics, and people coming from math or physics may make incorrect inferences from a CS publication list. (That said, of course, some journals are more prestigious than some conferences.)
I don’t understand the obsession with the arxiv in this thread: getting a paper on the arxiv is easy. I don’t know why MIRI hasn’t put its other papers on the arxiv (except those that are incomplete—once something is on the arxiv, it’s there forever) and I think they should, but it’s almost certainly not because arxiv *won’t accept them*. Papers on the arxiv have not (necessarily) been peer-reviewed; papers in conference proceedings and journals have been.
My cursory impression of MIRI’s publication list is neither particularly good nor particularly bad. I’d like to see them continue doing what they’re doing for now (foundational research takes a long time), but I’d also like to see other independent groups approaching AI risk from other angles (or even attacking the same set of problems as MIRI).
nostalgebraist said:
I’ve more or less said this on tumblr, but it might be worth saying it here because no one’s going to wade through those messy forking tumblr threads:
I personally want to make EA more mainstream, specifically among high earners (who became high earners before discovering EA). This would be great because it would direct the massive amounts of money earned by “normal” well-paid people toward effective causes.
This doesn’t necessarily mean that EA has to distance itself from MIRI or from its “tech geek” origins. But ideally EA should become popular among people who aren’t tech-geeks. I don’t know what is necessary to make that happen, but “PR arguments” hold some appeal to me for this reason. If there are PR moves that would make EA appeal to a much broader constituency, they should be made — although I don’t know what those moves are or if they exist.
There are various reasons to be skeptical that this will ever happen: “normal” high earners may be very disinclined to altruism; people who are already philanthropists are altruistic but may be difficult to reach (age gap, various culture gaps); many people who currently give are religious and probably less receptive to EA; etc. (One possible group to look at is young independently wealthy people, who often feel they lack direction and/or moral purpose.)
But I do think that managing EA’s image is important, because the giving ability of the current EA community is dwarfed by the giving power of people who could be EAs and aren’t.
(I am less concerned than you are about the issue of whether the best thing might be weird. Even if we haven’t found the best thing yet, we have found some very good things, and building credibility for EA in the eyes of a broad public will help if/when we choose to pivot towards something weirder than malaria nets.
This is a special case of a more general trend: it seems to me that LW-rationalist ventures in the real world [e.g. MIRI, MetaMed] often jump to the weirdest, most ambitious applications of their skills, hoping that “the right people” will know to trust them, rather than building trust first with more manageable ventures and then using that trust to do the weirder stuff. If “rationality gives you an edge” — as it certainly does for superforecasters, at least — then it should be possible to use that edge in relatively mundane endeavours, then frame your weirder, more ambitious endeavours as “we’re the people who succeeded at this other thing, and now we’ll use the same trick to do something else.” EA is already doing this to some extent — GiveWell does mundane but important stuff well and professionally, so people will trust them more if they make a “weird” recommendation — and it is possible that EA could extend that to a much larger audience.)
Ann Onora Mynuz said:
>I personally want to make EA more mainstream, specifically among high earners (who became high earners before discovering EA). This would be great because it would direct the massive amounts of money earned by “normal” well-paid people toward effective causes.
This doesn’t necessarily mean that EA has to distance itself from MIRI or from its “tech geek” origins. But ideally EA should become popular among people who aren’t tech-geeks. I don’t know what is necessary to make that happen, but “PR arguments” hold some appeal to me for this reason. If there are PR moves that would make EA appeal to a much broader constituency, they should be made — although I don’t know what those moves are or if they exist.
So, how do you feel about Animal Rights’ place in EA?
wfenza said:
“Let’s be honest: the people who are rejecting effective altruism because of MIRI aren’t rejecting it after a careful consideration of Bostrom’s arguments in Superintelligence and a thorough examination of the evidence for MIRI’s effectiveness.”
That’s not why I reject EA (I’m not even sure I do reject EA), but it’s a big factor in why I am uncomfortable with The EA Movement. Part of doing EA, to me, is having reliable sources to tell me what charities are effective, as I lack the resources and inclination to figure that out myself. A movement that embraces MIRI as an effective charity is not a movement I can trust.
InferentialDistance said:
Good news! GiveWell is an organization that does exactly that, is strongly advocated for in the Effective Altruist movement, and doesn’t suggest you give any money to MIRI at all!
raemon777 said:
As I see it, one of the major points of the EA movement is that it’s a place where people who care about the same question, but have somewhat different priorities, can engage/debate with each other.
There is a place for a reliable, well-vetted, credible source to give you the most rigorous data available. That place is (for now) Givewell. (I do hope there will be more competitors that focus on that type of research.)
I think there is also a place for discussion of speculative causes that seem important but which don’t yet have a lot of concrete data. This includes things like AI – but it also includes things as simple as “water and sanitation” – things that, for one reason or another, do not yet have very rigorous data available, even though they’re obviously at least somewhat important.
I’m not sure what you’re asking for – the rigorous data exists and there are explicit places to talk about it. Nobody’s asking you to trust the movement as a whole (and in any case, most of the movement DOESN’T think AI is the most important cause).
imuli said:
“A movement that embraces MIRI as an effective charity is not a movement I can trust.”
How can you know that without doing the “careful consideration of Bostrom’s arguments in Superintelligence and a thorough examination of the evidence for MIRI’s effectiveness.”?
You proclaim yourself to be disinclined to do the hard work yourself, but you also claim to know that MIRI isn’t effective. This is exactly what that quote was talking about: if we actually want to be effective, then we can’t just use pattern matching to decide these things.
rationalistAdjacentLurker said:
Actually, the perceived importance of MIRI in EA activities is the major thing that turns me off about the EA movement, and this is because I have actually read the arguments supposedly in its favor, including Bostrom’s “Superintelligence”.
I’m a software engineer working with a lot of ML people, and the arguments for AI risk in general just do not make sense to me, or the vast majority of software people I know. There are too many chains of rickety hypotheticals for me to take a lot of the potential failure modes for AI seriously, especially the notion of recursive self-improvement leading to a “hard takeoff”. Su3su2u1’s take on MIRI’s effectiveness was basically the deathblow to any interest I had in supporting them or anything like them, and I’m confused and dismayed when I see otherwise rational people take them so seriously.
So yeah, I like the idea of EA, but MIRI’s involvement really is a huge stumbling block.
Evan Þ said:
A stumbling block to doing what? To giving to AMF / Give Directly, or to joining with EA meetup groups and publicly identifying as an effective altruist?
(I ask as someone who seriously disagrees with MIRI’s goals, but gives to AMF, and hasn’t joined any EA groups but might do so in the future.)
rationalistAdjacentLurker said:
Definitely the latter. I give to charity, and am on the lookout for ways to ensure that my charity money is actually doing good work. But yes, to the extent that “EA” and “MIRI supporters” have strong overlap, I wouldn’t consider myself an EA or go to any meetups.
I’m willing to have my mind changed though, and I recognize that there’s no simple way to “kick MIRI out of EA” even if it was obvious that was something the broader EA community wanted.
Siggy said:
Note: I am an outsider. I don’t donate to any EA causes. I disagree with a lot of rationalist ethos, such as aversion to heuristics and aversion to drama. I think the rationalist movement should be burnt to the ground, and the EA movement split. Just wanted to put all that out there so I can be as unsympathetic an observer as possible.
MIRI is basically scientific research, at best. I notice that, rather than going after established grant funding, it seems to target individual donors. This is bad. It sounds like just about the least efficient way to assess the value of a research program: either the assessment is prohibitively expensive because each donor does it independently, or it is really shoddy and unreliable because most donors are unable to do it. (I think it’s the latter.)
Compared to grant funding, I also think donors have extreme tunnel vision. All science is connected, and if MIRI really were a worthwhile research program (it is not), it would rely on advancements in adjacent research programs. The best method of funding is to pick a general area, and let researchers in that area compete for it. From what I’ve seen, EA people seem to have basically zero discussion of the value of research adjacent to MIRI. This makes no sense. Except, I suppose, it lowers the cost of assessing research programs, since you’re only thinking about one single program. But, you know, those lower costs make for a shoddy job.
Basically, assessing research programs through community discourse is a bad idea.
shemtealeaf said:
The problem with embracing weird ideas is that you’re operating almost entirely from the inside view. For instance, even if the inside view tells me that there are strong reasons to be concerned about AI safety, the outside view tells me that MIRI belongs to a reference class that doesn’t have a great track record. While we shouldn’t abandon them entirely, I think ‘weirdness’ is a good reason to significantly lower my (inside-view generated) confidence in an idea.
Nick T said:
Has anyone ever said “I entirely agree with MIRI about AI, their strategy, and their research agenda, but think they’re executing very ineffectively”? I can’t think of when I’ve ever heard anyone say this publicly at all, or say it privately and not still basically support MIRI because it’s the best we’ve got. If not, this seems like it suggests something bad about everyone’s epistemology – why should beliefs about MIRI’s direction and effectiveness correlate? – and might have something to do with the defensiveness.
I would love to see more quality constructive critique of MIRI’s effectiveness, but think it would be most valuable from people who agree with their direction or at least deeply understand it and can factor out any disagreement they have with it. (Who can, for instance, try to evaluate their output in relation to their research agenda, rather than using the common and seemingly obviously wrong metrics of quantity of output or impressiveness of output to the broader AI/math/whatever community.)
stargirlprincesss said:
I basically agree with MIRI’s perspective on AI. Previously I would have said their AI-progress timeline was way too optimistic. However, after seeing the progress in recent years, especially AlphaGo, I think they are only slightly too optimistic.
I think MIRI’s technical/mathematical/CS output has been extremely thin. As an academic institution they are not, imo, even close to productive. However, MIRI has been incredibly successful as a PR organization*. Eliezer’s/Bostrom’s ideas about AI are now held by a large number of important people in the software industry. The ideas, which were previously very fringe, are now widespread in the circles that matter. This is a tremendous success.
So overall I think MIRI has been very effective, even though its research output is negligible.
*Some people seem to think MIRI is bad at PR but I obviously disagree. They have been very successful at spreading their ideas.
raemon777 said:
I generally think MIRI has the right approach, and I’ve been impressed by the way they’ve pivoted approaches after thinking about the problem further.
I think MIRI of 5 years ago seemed to be a terrible organization. I think they’ve made huge amounts of progress at being an effective organization since then, but that it’s still fairly underwhelming.
multiheaded said:
Agreed that the critics of MIRI neglect how unexpectedly good it seems to *actually* be at PR.
Nick T said:
Perhaps “___ is bad at PR” is Not About PR.
raemon777 said:
Appreciate this post a lot. I very much want more critical discussion of MIRI that actually engages with the issue rather than saying “the whole thing seems weird and stupid”, and I think I agree with almost all of your points.
For my part – I become defensive and frustrated when I see people attacking something uncharitably (in the current-zeitgeist-technical-sense of uncharitable). This is a failure mode on my part and I don’t know what the proper response actually is.
I *think* I’d like to see MIRI setting concrete goals for themselves (papers published or citations seem like a reasonable benchmark), and I think I agree with the argument they should be sending people to get PhDs in AI-related fields, not for the official-status-credibility, but because that will force them to learn to engage with the outside world, and build relationships with other prominent AI researchers.
Kelsey said:
“If a person believes that Hell exists and the most effective altruism is baptizing everyone, the correct response is not “well, effective altruism is a big tent, and as long as you sincerely believe in your faith you’re welcome”; it’s “you numbskull, Hell doesn’t fucking exist.” This is not a movement of people who sincerely honestly believe; this is a movement of people who are concerned with objective facts and evidence.”
This is an aggressively uncharitable dismissal of one of the major points that was debated on tumblr. It turns out that running meetups in this proposed way is ineffective; we tried it, and we turned off a lot of compassionate, capable people who would otherwise have been effective altruists. We now do it differently.
People who disagree with you think the facts and evidence are on their side. If you say “sorry, get out, this is a movement for people who are concerned with facts and evidence” you won’t convince them they’re mistaken about the facts, you will convince them you are an utter asshole who it is a waste of their time to work with and who holds them in too deep contempt for thoughtful, mind-changing discussion to actually occur.
To leave open the possibility people will actually change their minds, you need a foundation of common ground. “We all want to do the most effective thing possible” is that common ground. Then you say “so, here’s the evidence that would convince me religious charities are a good idea” and “I think what you’re doing is ineffective” and everything else. But you open the door and you make your centerpiece your /shared ground/, unless you want to alienate tons of bright and capable and altruistic people for ideological purity reasons. That’s what actually works! I’m telling you this having tried multiple things, including the attitude you describe above! I’m telling you this having talked with other organizers who have tried multiple things, including the attitude you describe above!
Relatedly: “I’ve seen many people explicitly respond with “this leaves me with a bad taste in my mouth because you’re not an EA.” But is there some law that says that no truth comes from members of the outgroup? If a criticism is false, then by all means expose it, but the speaker doesn’t matter. If the world’s biggest fool says that it’s raining outside, that doesn’t mean it’s sunny.”
I think it is obvious that discussions about MIRI’s effectiveness should be started by, and should include, anyone at all who cares. I think discussions about the effective altruist community and the norms it should have, whenever instigated by people who aren’t in it, are worse than useless.
Every person who participated in the “should we kick MIRI out” conversation who has run an EA meetup, or does major organizing work for one, said “from my experience running an EA meetup, these are the actual barriers to productive discussion; sincere differences of opinion about which is the most effective charity have never ever been one of them.” That is why and how we each concluded that, in practice, having EA welcome everyone who gives to what they think is the most effective charity (and then argue with them!) is healthy.
So, there’s a bunch of people who don’t consider themselves EAs and don’t have any experience organizing or working within the EA community. They keep making suggestions about community norms, suggestions which are obviously a bad idea to people who have spent the last several years working on this.
Ozy, if someone said “feminist safe spaces have X problem” I agree it doesn’t matter at all if they identify as feminist but it certainly matters that they have never been part of a feminist safe space. If someone who has played a role in EA organizing says “yes, I think my group would benefit from kicking out people who are wrong about what the most effective thing is; our conversation getting dominated by people trying to save souls is a real problem we’ve actually experienced” then I will change my mind. That’s what I mean when I say I’d react differently to hearing this from an EA. Right now, what I’m hearing (from people who’ve done the work I want to get better at) is “no, the best criteria is /definitely/ ‘I want to do as much as I can’ – having seen years of failure modes, this minimizes them.”
I think, at that point, pointing out to people that their suggestions are coming from a place of ignorance, and that we have lots of evidence their suggestions don’t work, is completely legitimate.
Patrick said:
My personal and rather cynical take-
When I look at effective altruism I mostly see these things:
1. People who are desperate to maximize human biomass.
2. People who are worried about future robot overlords.
3. People who are trying, poorly, to deal with the reality of nature and animal suffering.
4. People who are trying, poorly, to resolve the fact that their excuse for donating only 10%, even though they acknowledge that the suffering they’d endure by donating 11% is outweighed by the good it would do, is the same excuse used by those who donate zero.
5. People who try to redirect away from all this with the banal insight that if you’re going to donate money in the first place, it’s better to donate to an efficient charity than an inefficient one.
Evan Þ said:
Are you including all global poverty charities under point (1)? If so, I think that’s an unfair dismissal of them. No matter what impact interventions like mosquito nets would have on the future, they alleviate a whole lot of suffering now and in the next generation. That’s my main reason for supporting them.
Patrick said:
Oh yeah, that’s my other complaint about effective altruism.
Critic: “I don’t think your metric for judging charitable efficiency is very good in the global and long term scheme of things. Please cross apply all of the usual ways that unrefined utilitarianism does poorly when dealing with possible future people, and add in some concerns about environmental damage that isn’t easily reduced to a suffering based metric, but which is liable to bite us in the ass sooner or later.”
Effective Altruist: “Why do you have a problem with mosquito netting?”
Evan Þ said:
Ah. I’m not even a utilitarian myself; I’m a complete deontologist. I have a whole lot of problems with unrefined utilitarianism, and I don’t want to even pretend to defend it. But, I think once you take out all those epicycles, there’s still enough substance that GiveWell’s recommendations are valid to judge among charities affecting the current generation and next generation. Though, since you apparently value possible future people more highly than GiveWell or me, you might disagree?
davidmikesimon said:
The world kind of sucks. Making the world suck less is a good thing to do. If acknowledging that is reacting poorly, then what’s reacting well look like? Sociopathy?
wireheadwannabe said:
Probably “~acceptance uwu~” or some such ineffective bullshit.
Patrick said:
In a world where the default means of death is starvation or predation, if you’re going to argue for a moral obligation to take the most efficient steps possible to minimize animal suffering, you’re either
1) going to look like a hypocrite for declaring such an incredibly ambitious goal and then settling for just being a vegan, or
2) going to end up accidentally becoming your generation’s instance of “the scary guy who accidentally reinvents the utilitarian argument for the annihilation of all life,” or
3) going to conclude that perhaps population culling operates as a means of trading off an acceptance of certain types of death in order to minimize others without eradicating species, then give a very creepy look to the guy down the table from you arguing that we should focus all of our efforts on short-term projects that have the knock-on effect of maximizing human biomass, or
4) going to settle for some form of “acceptance uwu” bullshit. Protip, kids. If you fight for a while and then accept that the problem of animal suffering is intrinsic to animal existence and you’ve done all you can reasonably be expected to do… welcome to acceptance uwu bullshit.
ozymandias said:
So, wait, Patrick, your belief is that if I do not wish to be a hypocrite or a peddler of bullshit, then me not having a way to solve the problem magically means that the problem doesn’t exist?
Ancient Roman Patrick: “Well, I can’t really think of a way to *solve* the fact that slaves can be legally raped and tortured, so I guess rape and torture is a-okay!”
multiheaded said:
Wow, we have a hardcore kill-all-animals person at last. Inevitable, but I’ve been wondering when one would come along.
Patrick said:
I have absolutely no idea how either of you reached those interpretations.
Sisyphus- “Getting this stone to the top of the hill is a moral obligation. I must act in the most efficient manner possible to fulfill it.”
Epictetus- “But the stone will always roll back down.”
Sisyphus- “I cannot accept that.”
Epictetus- “Maybe you’d be better off phrasing your moral obligation in terms of something achievable, like getting the stone as high up the hill as possible for as long as possible, given your own strength and will.”
Sisyphus- “You mean accept that the stone will always roll back down?? Never!”
Epictetus- “But it will. Well… best of luck then… Wait. I’m noticing that you have been pushing the stone halfway up the hill, wedging a plank under it, then eating lunch. And then when the stone rolls loose halfway through your lunch, you go fetch it.”
Sisyphus- “So? I need to avoid burn out.”
Epictetus- “You’re actively choosing a strategy that gets the stone less close to the top of the hill than it might otherwise get.”
Sisyphus- “Yeah, but I’m ok with how far it’s getting, all things considered.”
Epictetus- “This sounds like an explicit rejection of the metric you earlier set for yourself. The difference between this and accepting that you can’t reach the top of the hill is what, exactly?”
Sisyphus- “Attitude.”
Epictetus- “But the stone doesn’t care.”
MugaSofer said:
>maximizing human biomass
You use this term a lot. Patrick, are you under the impression that some sort of Malthusian crisis is likely?
slatestarcodex said:
>> “There are a lot of reasons people don’t want to have this conversation. Most obviously, it would create drama, and many people are averse to drama. Many people, including myself, have a lot of respect for Eliezer Yudkowsky as a person. However, it should not be taken as an insult to say “I’ve looked into it, and I don’t think the charity you’re running is particularly effective”; ideally, our norm should be that that sort of criticism is a compliment. We’re all trying to do the most good here, right?”
No, the reason I don’t want to have this conversation is that I’ve had it a thousand times before, most people don’t know enough about it to have interesting perspectives, it usually becomes hostile, and the argument is apparently interminable in the same way arguments like “Is eating meat wrong?” and “Is liberalism better than conservatism?” are interminable.
Take the eating meat example. Some effective altruists are vegetarians, some aren’t. It seems to me that either animal suffering is commensurable with human suffering (in which case animal charities are far more effective than human charities and eating meat is horrifying) or it isn’t (in which case animal rights people are wasting their charity money). But if we opened this up for constant unlimited debate, a carnivore couldn’t walk into an EA meeting without being accosted by people saying they’re literally complicit in a system of torture and mass killing.
It would be worse for the animal rights people, though, because they’re heavily outnumbered, and they would be getting constantly peppered with people insisting “Hey, let me tell you why caring about animals isn’t efficient and is just wasting money that could be used on humans!” You describe people who are tired of dealing with that all day as being “defensive” and as violating our norm of caring about efficiency, but sometimes people just want to be able to walk across a room without being accosted and told condescendingly that they’re wrong.
(I actually think you’re the one who taught me about this dynamic, in some kind of social justice context. Trans issues are very interesting and worth debating, but in a space that’s 99% cis and not well-sorted for agreeability, a “norm of vigorous debate” basically means that every cis person who’s thought about the issue for five minutes is going up to the one trans person and saying “Hey, let me explain why gender is actually one hundred percent based on chromosomes!” and then accusing them of being “defensive” when they’re not interested.)
So we mostly paper over the animal rights issue when we can and don’t make a big deal of it unless we’re with somebody who we’re very sure wants to have this debate at this particular time.
I’ve debated AI and MIRI-related issues for almost a decade now. Most people don’t know enough to have an informed opinion and tend to be on the “Well robots don’t have souls, so there!” level, or the “What MIRI doesn’t realize is that any sufficiently smart robot will feel love because love is great” level. A few people are beyond that level, and I’ve debated them many times before, and I’m sure I’ll debate them again, but I want to debate them when *I* want to debate them, and not in the context of having to defend my right to participate in a movement that people like me helped found.
Even more, I’m not *able* to fully defend this right in the sort of public debates people often insist upon, because defending this right would involve saying loudly and publicly that I think MIRI is an important and effective charity, and not only would that give more ammunition to the people who *do* want to cause us public relations problems and embarrass us, but then *other* EAs would feel insulted that I’m implying it’s more important than whatever *they’re* doing. So unless I want to cause trouble I basically just have to smile and take it.
You say that we need to be able to tell people who believe in Hell that it “doesn’t fucking exist”, but in real life we are very careful *not* to import religious debates anywhere we can possibly avoid it. Religious debates are interminable, get people angry, and are easily avoided. Leah is religious and used to work for CFAR; as far as I know the people there very carefully avoided anything that felt like trying to argue her into submission, or telling her she was a bad rationalist for not wanting to debate it with every single person who wanted to challenge her on the issue. I asked her if she wanted to argue about it once, she said not really, and then I successfully avoided being a jerk to her in that irrelevant domain and was able to cooperate with her in the 99% of things we agree about.
Based on the fact that the people who are most vocal about this aren’t even effective altruists (su3su2u1), and the fact that practically nobody involved in this debate cares nearly as much about the much more interesting and tractable question of whether or not deworming is really effective ($400,000 of EA money in the past few years; many conflicting studies), no, I don’t think this is just another discussion of which charities are better or worse. I think this is a toxoplasma-of-rage thing where everybody loves talking about the most divisive thing they can think about, and I think the solution is to stop doing that, unless you have something new and interesting to say about the object level question.
You say you don’t want me kicked out of EA, and I thank you for that, but getting kicked out of EA is not my main concern – there’s nobody with the authority to do that anyway. My main concern is having the movement become so hostile with so many ill-informed “should MIRI supporters be kicked out of EA?” blog posts that I finally snap and say “Okay, you can’t fire me, I quit.”
(this is not an exaggeration; I am right on the verge of doing this)
Liked by 9 people
ozymandias said:
If you don’t want to read blog posts about whether MIRI supporters should be kicked out of EA, to the point that it is making you reconsider being an effective altruist, I would advise beginning by not reading posts with titles like “Concerning MIRI’s Place In The EA Movement” written by xrisk skeptics.
Liked by 1 person
slatestarcodex said:
I think you are missing the point of that analogy. In fact, I specifically used the vegetarianism metaphor and only brought that analogy in at the end to try to make sure you wouldn’t do that.
I’ve already banned the relevant people on Tumblr, and I guess I’ll blacklist this blog too, but I can’t block people in real life and as long as it keeps being phrased in terms of “should we kick these people out” I worry that ignoring the conversation just means people can agitate against me and I can’t defend myself.
I continue to think that the old version of effective altruism which was a superstructure for helping and connecting various people who wanted effective charity, while trying to be respectful about and de-emphasize disagreements as to exactly what the most effective thing might be, worked pretty well, and would continue to work if people could be more considerate and less toxoplasma-y.
Liked by 1 person
ozymandias said:
I… don’t want a movement that de-emphasizes disagreements about what the most effective charity might be. I am interested in effective altruism because I want to figure out what the most effective charity is. How am I supposed to do that if everyone is pussyfooting around saying my brilliant new Foundation For Curing Rare Diseases In Cute Puppies charity isn’t very good?
Of course, not every conversation is appropriate for every time. If we’re talking about avoiding burnout, don’t derail into whether leafleting makes people vegan; if we’re talking about whether leafleting makes people vegan, don’t derail into whether animals have qualia. And there may be persistent long-running disagreements in the community. But if we’re not even *trying* to figure out the best thing, why not just give up and donate to Make a Wish instead?
The solution to “people are talking too much about MIRI and not enough about deworming” is not “no one argues about MIRI or deworming”, it’s “more arguments about deworming”. (Although there were a lot of arguments that I saw after those studies came out, and IIRC most people came to agree with GiveWell’s position on the issue.)
Liked by 5 people
ozymandias said:
Also you do realize you’re endorsing the philosophy of this article, right?
slatestarcodex said:
…I thought your first comment was relatively reasonable and was prepared to say maybe I was reading too much into this and let it drop, and then you had to say I was basically the same as someone saying that trying to do charity effectively was evil and effective altruists are “defective altruists” and we’re all “Mr. Spock”. Really, Ozy? Did you really think it through and decide that was a good place to take this conversation?
I’ve spent more time, energy, and word count defending AI and MIRI than most people ever spend on anything. All I want is an understanding that healthy communities have issues that they try not to bring up all the time because everyone’s run over the arguments and counterarguments a hundred times and we don’t want it to be #SpeckGate2015 24/7. You obviously aren’t willing to consider this point or discuss it respectfully so I give up.
ozymandias said:
I am just saying that you two seem to have *remarkably* similar opinions, regardless of the language in which you choose to couch them. His rejection of EA– the thing that makes him call us “defective altruists”– is that we want to try to find the best thing instead of giving effectively within our preferred cause area. I am curious about the differences between your positions here.
Professor Frink said:
I don’t know how to say this without coming off like a dick, but know this is meant with some respect. Why are you the go-to guy for debating MIRI effectiveness? You’ve been talking about AI-risk for a decade, but mostly as a promoter, I think. You’ve never claimed anything like the technical background to follow MIRI’s actual technical output, which is the biggest question to address regarding effectiveness! You have to either cede the space or argue a technical case without technical background, which is not enviable.
It is true that a lot of AI-risk critics probably don’t have much technical background, but that seems just as true of its supporters. Shlevy, who I follow on tumblr and who has a good software engineering background, said of Bostrom’s book that it made lots of basic technical mistakes throughout. That is a bad sign.
When I read your debate with su3su2u1 and his tumblr, I see him asking for evaluations of MIRI’s actual research and I see math and CS academics agreeing the research looks really thin. And then I see MIRI supporters jumping in with general, non-technical AI-risk arguments. This doesn’t look good to me, when the technical knowledge is stacked on one side of the conversation.
stargirlprincesss said:
I don’t remember Bostrom’s book being filled with “basic technical mistakes,” though I do remember there being some problems in how Bostrom described some classes of machine learning algorithms. Su3su2u1 often brings up that Bostrom ignores computational complexity, which is basically true. But the practical implications of computational complexity theory are very sketchy. Bostrom’s claims are more modest than some of the things Eliezer has argued*, so I think Bostrom ignoring computational complexity was sub-optimal but not damning.
I would like to see the details of the technical mistakes Bostrom made.
*Eliezer claimed a Bayesian superintelligence could postulate general relativity from a few frames of video of an apple falling. This seems pretty impossible. Bostrom never makes a claim this strong.
Professor Frink said:
Here is tumblr user shlevy’s goodreads review
https://www.goodreads.com/review/show/1335549653
He says “I’m not sure there was a single section of the book where I didn’t have a reaction ranging from “wait, how do you know that’s true?” to “that’s completely wrong and anyone with a modicum of familiarity with the field you’re talking about would know that”” but doesn’t seem to specify examples.
I vaguely remember su3su2u1 mentioning lots of little technical errors, but he doesn’t tag his tumblr and after a few minutes of looking I gave up.
stargirlprincesss said:
Thanks for sending me the reply. I guess I will have to ask Shea about the details. It might be good if zie posted publicly, but idk if zie wants to.
Shea Levy said:
I don’t have the book in front of me (mine was a library copy), but I did write up one of the blatant technical issues that is relevant to my expertise (next two paragraphs are me quoting myself):
In discussing “recalcitrance”, a vague metric he uses as part of an “equation” he defines for the rate of change in intelligence, he says: “In the short term, computing power should scale roughly linearly with funding: twice the funding buys twice the number of computers, enabling twice as many instances of the software to be run simultaneously… A system that initially runs on a PC could be scaled by a factor of thousands for a mere million dollars”
Which is just not at all how this works. Writing code that takes advantage of multiprocessing is hard: it often requires deep changes to your architecture and algorithms in a way that would be harmful in the less parallel case, and even then you end up upper-bounded by Amdahl’s law. This is why my 2.7 GHz dual-core CPU is nowhere near as good as a single-core 5 GHz CPU would be.
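To make the Amdahl’s law point concrete, here’s a minimal sketch (the parallel fractions are illustrative numbers I made up, not figures from Bostrom’s book or anyone’s actual system):

```python
# Amdahl's law: the speedup from adding processors is capped by whatever
# fraction of the work has to stay serial, no matter how much hardware you buy.
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Best-case speedup when `parallel_fraction` of the work can be split
    across `n_processors` and the remainder must run serially."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Even if 90% of a program parallelizes perfectly, "scaling by a factor of
# thousands" buys less than a 10x speedup; at 99% you still only get ~91x.
print(amdahl_speedup(0.90, 1000))  # ~9.9
print(amdahl_speedup(0.99, 1000))  # ~91.0
```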
Also, su3su2u1 has a series of posts with more details starting at http://su3su2u1.tumblr.com/post/128424448248/superintelligence-a-review-in-many-parts
Finally, I stand by my review here. I purposefully left out details, because Bostrom’s argument (unless something significant changes after chapter 8) is terrible even if he were right about every technical detail.
(sidenote: if you were referring to me, I use he/him pronouns, though don’t mind other ones).
Liked by 1 person
stargirlprincesss said:
I used gender neutral pronouns because I actually did not know your gender.
James Babcock said:
Shea Levy: This objection seems incorrect. It’s true that there’s an initial (possibly large) cost to getting parallelism, but, looking at current leading AGI projects, they all pay it early on; it’s reasonable to expect that any successful AGI will already be parallelized. The Amdahl’s Law objection might apply, but current leading machine learning algorithms are not in fact bottlenecked by it at present scales; neural net-flavored algorithms parallelize very well. And Bostrom does say “in the short term”, giving a nod to the long term, in which scaling is more sub-linear.
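As a rough illustration of why the data-parallel case scales so well (a toy linear model standing in for a network, with made-up data – just a sketch of the pattern, not anything from an actual AGI project):

```python
import numpy as np

def gradient_on_shard(weights, X_shard, y_shard):
    """Mean-squared-error gradient for a linear model on one shard of the data."""
    residual = X_shard @ weights - y_shard
    return 2 * X_shard.T @ residual / len(y_shard)

rng = np.random.default_rng(0)
X = rng.normal(size=(8_000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=8_000)
weights = np.zeros(5)

# Each "worker" computes a gradient on its own equally sized shard; averaging
# the shard gradients reproduces the full-batch gradient, so the heavy lifting
# spreads across machines with only the averaging step left serial.
shards = zip(np.array_split(X, 8), np.array_split(y, 8))
shard_grads = [gradient_on_shard(weights, Xs, ys) for Xs, ys in shards]
full_grad = np.mean(shard_grads, axis=0)
```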
My general impression of the book as a whole is that there are a lot of cases like this, where Bostrom says something which has a first-level objection, to which there is a second-level objection-to-the-objection, and a third level and so on. Presenting the objections is fine; these are complex issues, there’s a lot to say and he didn’t say nearly all of it. But acting as though he’d made an obvious, straightforward mistake is wrong and unfair.
Alyssa Vance said:
“There are a lot of reasons people don’t want to have this conversation.”
I challenge the premise. Don’t people have this conversation pretty frequently?
There was a long discussion of it by Holden in 2012 (http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/), which was the most-upvoted post in the history of Less Wrong, and similarly lengthy replies in the comments. Luke Muehlhauser wrote a detailed reply at http://lesswrong.com/lw/di4/reply_to_holden_on_the_singularity_institute/, followed by more rounds of lengthy comments.
Of course, things have changed since then, but there was a further round of discussion between Holden, Eliezer, and others in 2013 (http://files.givewell.org/files/labs/AI/10-27-2013-conversation-about-MIRI-strategy.pdf), and of course further posts on the MIRI website (as linked by Rob Bensinger). There’s also been lots of discussion in other places, especially Scott’s blog (http://slatestarcodex.com/2014/10/07/tumblr-on-miri/).
There might be issues which have been left unexamined by all the previous conversations. But if so, I think we should focus on those particular issues, and not pretend that the effectiveness of MIRI as an organization is something nobody’s ever looked at before. Once an argument has been made, you don’t need to make it again; you can just cite the previous guy. Real progress means not re-hashing the same issues over and over.
Liked by 1 person
Alyssa Vance said:
I’ll add that FLI has given $7 million to 37 research teams tackling parts of the AI risk problem, only a small fraction of which went to MIRI (http://futureoflife.org/2015selection/), and I’ve never heard anyone argue that “MIRI’s approach is less effective than [X], so we should support [X] instead”. This leads me to believe that disagreements are usually based on the value of AI risk as a whole, not MIRI or its strategy in particular.
Liked by 1 person
The Smoke said:
That something should be done to prevent all wild animal suffering is the most cocky, arrogant, bumptious, cavalier, hubristic, insolent, overbearing, presumptuous idea I have ever encountered.
davidmikesimon said:
Well, why’s that?
pamape said:
All those adjectives point in the direction of the person being a virtue ethicist; there’s nothing in them about the ethics of the outcome itself.
Liked by 1 person
MugaSofer said:
“If a person believes that Hell exists and the most effective altruism is baptizing everyone, the correct response is not “well, effective altruism is a big tent, and as long as you sincerely believe in your faith you’re welcome”; it’s “you numbskull, Hell doesn’t fucking exist.””
Why? Figuring out how to evaluate effective evangelism is a common goal of multiple EA sub-movements.
Indeed, Scott has repeatedly said that funding vegetarian evangelists is by far the most effective vegetarian/animal-rights intervention, in his estimation, to the point that he feels justified in eating meat. Whether or not this is true, it hasn’t led people to run Scott out of the EA movement.
Furthermore, Christians give more to charity, so an EA who goes around converting a lot of people is genuinely helping the cause of Effective Altruism. Arguably by a lot, since the current undertone of “we should run these people out of the EA movement” isn’t exactly conducive to converting more Christians to EA.
This paragraph alone significantly increases my estimate that you’re wrong, and that cause neutrality (by whatever name) is an important part of the EA movement.
MugaSofer said:
Actually, apparently people are trying to run Scott out of the movement. But it’s not for arguing pro-vegan advertisements are great moral investments, so I think my point stands.
ozymandias said:
Citation on people trying to run Scott out of the movement?
MugaSofer said:
Scott identifies as pro-MIRI, complains at length above about people trying to kick pro-MIRI EAs out.
deusphasmatis said:
They want to kick MIRI out, not pro-MIRI EAs. Effective Altruists concerned about existential AI risk are welcome to stay, so long as they aren’t visibly ~~weird~~ supportive of existential AI risk in EA contexts.
davidmikesimon said:
The problem is not with evangelism/advocacy in general, it’s with poor research methodology. If Hell really existed, it ought to be very high on the list of EA concerns, and I can see baptism being a very high-ROI intervention then! But the evidence isn’t there.
On the other hand, most religious charities have goals beyond merely converting people. That sometimes means that emergency resources are used as a vehicle for a sales pitch. But though that’s annoying, it’s a negligible concern in many cases compared to the direct good done.
On the gripping hand, I strongly agree with you that being unwelcoming to religious folk hurts EA a lot. I’m the angry atheist type myself sometimes, but I try hard to put that aside when working together with other people on something this much more important than philosophy.
Gregory Lewis said:
I would suggest caution about interpreting the EA survey as showing that ‘LW is the #1 source of new EAs’.
The 2014 survey was a convenience sample of people who clicked on a link to the survey and filled it in. These links were displayed prominently on LW, but less prominently to other ‘clusters’ of the EA community (e.g. GWWC, GW, etc.). Thus people who heard about EA via LW will be oversampled.
LW remains an important source, and it may indeed be #1, but I don’t think the survey gives significant support for this.
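A toy illustration of the worry, with numbers I’ve invented purely for the example (they are not from the actual survey):

```python
# Suppose 30% of EAs really did hear of EA via LessWrong, but LW readers were
# three times as likely to see the survey link and respond as everyone else.
true_lw_share = 0.30
response_rate_lw = 0.30
response_rate_other = 0.10

respondents_lw = true_lw_share * response_rate_lw
respondents_other = (1 - true_lw_share) * response_rate_other
surveyed_lw_share = respondents_lw / (respondents_lw + respondents_other)
print(surveyed_lw_share)  # ~0.56: the survey would report LW as the source for
                          # over half of respondents, despite a true share of 30%.
```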
Liked by 1 person
AndR said:
I’d love there to be an alternative to MIRI that is trying to fulfil the same goals; then we can really compare efficiency.
But right now there isn’t one. And so even if MIRI isn’t being very good at the job, I’d donate to it, hoping that either the increased funding causes spin-off charities that might compete for the money, or that they manage to improve the situation despite their supposed ineffectiveness.
“Something must be done, this is the only group that seems willing to try to do something, ergo this is a group that should be supported”. This doesn’t seem to be obviously wrong to me.
Professor Frink said:
What about the universities and groups that were granted the FLI money? What about giving money to FLI to regrant to similar groups? What about FHI?
When Musk had millions to give for AI-risk, he didn’t go to MIRI.
Alyssa Vance said:
Potentially relevant: http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/