I’ve recently read the book Epic Measures, about the global burden of disease studies. One person involved in the studies had a habit of asking people who said their intervention was the most important, “what intervention is the second most important?”
It was intended as a gotcha, but I think it’s actually a really interesting question that sheds a lot of light on cause prioritization, and I’ve gotten a lot out of thinking about it.
So: people who prioritize a single charity, intervention, cause, or broad area in which to work, what charity/intervention/cause/broad area in which to work is second-best, and why?
The most important is AI risk; animal suffering is second.
Very broadly, I think that preventing death(s) is the most important thing, and eradicating depression/severe-suffering that makes people not *want* to live is the second most important thing.
I’m doing AI, and I think EA community building is second, but that’s probably just because it feeds back into AI. Setting that aside, I guess I’d go for either anti-aging research or global mental health.
Deterring wars and acts of terrorism. War is a unique ill in that it can destroy our ability to combat other ills in a way that few other ills can.
I think that the notion of “best cause” that makes most sense is “suffering alleviated per pound spent”.
My impression is that there are a number of causes close enough in that sense that the difference between them is small compared to the ambiguities in measurement and, for that matter, in defining “suffering alleviated”.
But, roughly speaking, my reading of GiveWell’s output is that it looks as though the strongest candidate for first place is probably malaria, and the second-strongest candidate is probably parasitic worms.
Most important: picking the low-hanging fruit for QALYs, which mostly looks like GiveWell-style work: eliminating diseases, eradicating poverty, etc.
Second most important: building sustainable political tools. Once you’ve got the low-hanging fruit taken care of, you need a society with a strong ability to coordinate on larger-scale problems like climate change, AI risk, terrorism, and war. I think the US in particular is remarkably outdated (we still have a two-party system!), and that makes it significantly more difficult to react with appropriate urgency.
My guess would be wild animal suffering.
I’m not contributing measurably to any cause, but the cause I am most interested in contributing to in the future is x-risk. I trust the experts that the most effective interventions are in AI and the second most effective are in biosecurity. Personally, the limit of my computing knowledge is Excel, while I have significant public health knowledge. So if I do decide to become an EA, my second choice, biosecurity, is where I’d contribute.
The question “what intervention is the second most important?” is not well defined.
First, it depends where we draw the boundaries around “intervention”. Is MIRI an intervention? Or the whole of AI alignment research? The whole of AI alignment + policy research? X-risk? Far future?
Second, it depends on what is meant by “important”. We can interpret it as (i) where donations make the most marginal impact per dollar (ii) where *I personally* should steer my career.
Third, one question you could ask is: what is the second most important intervention according to my current beliefs? Another natural question is: what intervention would be most important conditional on discovering something that “invalidates” the first intervention?
I did spend some time thinking about this in the form of: how would I change my career if I discovered that AI alignment research is actually ineffective? The first scenario I examined is: superintelligence *is* coming in the medium term, but it is going to be aligned (to its creators) anyway (because the alignment problem is easier than we think, or its solution follows automatically from any solution to AGI in general). Then I would probably try changing track to AI policy research, or otherwise maneuvering into a position of influence on AI policy. The second scenario I examined is: superintelligence is very far off, many centuries away, and nothing we can do now has a predictable effect. Then I would probably switch to trying to build a scientific field of futurology, more scientifically rigorous and quantitative than what currently exists. Of course, if I actually came to believe one of these scenarios, I would spend much more time and effort thinking about it.