Many effective altruists, including myself, tend to consider certain cause areas to be “long-term” and other cause areas to be “near-term.” Global poverty, farmed animal welfare, developed-world politics, and the prevention of rare diseases in cute puppies are near-term areas; conversely, AI risk and prevention of extreme suffering are long-term areas.

(Tangent: “long-term” and “near-term” are awful terms, since some people expect AI superintelligence or a global communist revolution in the next fifteen years. I am using them because they appear to be the commonly accepted terminology, but I appreciate suggestions for better terminology.)

However, in a recent discussion with Kelsey Piper, I noticed that essentially all cause areas can be approached from a near-term or a long-term perspective. I will begin with a definition by example:

Global poverty
• Near-term: bednets, deworming medicine, cash transfers, education for girls
• Long-term: global communist revolution, spreading democracy and capitalism

Farmed animal welfare
• Near-term: welfare-improving legislation, vegan outreach, corporate outreach
• Long-term: lab-grown meat, banning the farming of animals

Wild animal welfare
• Near-term: wildlife contraception, keeping cats inside
• Long-term: habitat destruction, genetically engineering predators to not eat meat

AI
• Near-term: combating algorithmic bias
• Long-term: friendly artificial general intelligence

Feminism
• Near-term: Planned Parenthood, passing the Equal Rights Amendment, paternity leave
• Long-term: overthrowing the patriarchy

Abortion
• Near-term: crisis pregnancy centers, supporting unwed parents, abortion-limiting laws
• Long-term: an abortion ban, uterine replicators

Cancer
• Near-term: financial aid for cancer patients, Ronald McDonald House, early screening for cancers
• Long-term: cure for cancer

Developed-world poverty
• Near-term: food banks, homeless shelters, the Salvation Army
• Long-term: global communist revolution, economic growth to eliminate poverty

Rare diseases in cute puppies
• Near-term: treating the rare diseases
• Long-term: genetically engineering puppies so they never get sick again

(To be clear, I don’t think this is a complete taxonomy of all the possible approaches to a particular cause area. For example, I could add another category for “foundational research,” which includes GiveWell, philosophers trying to solve population ethics, and research into the basic biology of diseases– research that is trying to reduce our uncertainty.)

I believe all these long-term causes have several things in common. They’re radical, in the radical feminist sense: they get to the root of the problem. For that reason, they have a very high payoff, much higher than anything in the near-term column. If you give a kid a bednet, that kid won’t die of malaria; if you end global poverty, no child will ever die of malaria again.

Long-term causes tend to be highly speculative. Many long-term causes sound more like the subject of a science fiction novel than like a charity, which can lead people to unfairly dismiss them as absurd. (Reality does not generally check whether or not it resembles an SF novel before it does something.)

Long-term causes tend to have many steps between the last outcome that one can reasonably measure and the outcome of interest. For insecticide-treated bednets, the primary outcome of interest is deaths from malaria. In principle, it is perfectly possible to measure how many fewer people die if you have distributed bednets.

Conversely, for lab-grown meat, the primary outcome of interest is how many fewer animals exist if lab-grown meat is available on store shelves. But it’s not possible to measure that outcome, because you’re only going to know it once lab-grown meat has already been invented. You instead have to measure outcomes like the amount of progress a particular lab is making or how favorable the regulatory climate is. In theory, these outcomes are linked to your outcome of interest– but if your theory is wrong, your work could be useless or even counterproductive.

Long-term causes also typically have a qualitatively different kind of uncertainty than near-term causes do. Deworming charities are highly uncertain. A study suggested that children who were given deworming medicine had far higher lifetime incomes than children who were not. However, the mechanism proposed to explain this effect turned out not to be affected by deworming, and findings like this one often fail to replicate. Still, deworming medicine is so cheap that– even though it’s more likely than not that deworming doesn’t do much of anything besides cause children to have fewer worms– it’s a cost-effective charitable intervention.
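To make the expected-value logic here concrete, a minimal sketch follows; the cost, probability, and income figures are entirely hypothetical illustrations chosen for round numbers, not figures from the deworming literature:

```python
# Hypothetical expected-value sketch: even if deworming probably does nothing
# beyond removing worms, a low enough cost can keep it cost-effective.
# All three numbers below are illustrative assumptions, not real estimates.
cost_per_child = 1.00    # hypothetical cost of one deworming treatment (USD)
p_income_effect = 0.10   # hypothetical probability the income effect is real
income_gain = 100.00     # hypothetical lifetime income gain if it is real (USD)

expected_benefit = p_income_effect * income_gain  # 0.10 * 100 = 10
print(f"Expected benefit per dollar spent: {expected_benefit / cost_per_child:.1f}")
# Even with a 90% chance of no effect, the expected benefit (~$10) far
# exceeds the ~$1 cost, which is why "probably useless but very cheap"
# can still be a good deal in expectation.
```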

AI risk is also highly uncertain, but it’s a different kind of uncertainty. Deworming essentially comes down to a single well-defined question: “given that there’s a replication crisis, and given that the proposed mechanism was not affected by deworming, what is the chance that deworming increases income?” Conversely, the uncertainty in AI risk comes down to dozens of extraordinarily broad questions: “When do we expect human-level AI? Will a human-level AI rapidly self-improve until it becomes a superintelligence? How easy is it to program an AI that won’t destroy the world? What strategies should we pursue to program an AI that won’t destroy the world?”

Of course, long-term causes are not a monolith. I’d like to contrast lab-grown meat with overthrowing patriarchy– both causes I feel strongly about. Lab-grown meat is speculative, but many informed experts believe it will be available on store shelves within decades. While it’s totally possible that we’re fundamentally mistaken about lab-grown meat, there are many actions which seem very likely to increase the likelihood that it’s invented and accepted by the general public, such as funding researchers and getting good PR.

Conversely, no one has any idea when sexism will be eradicated from society: it could be decades from now or thousands of years from now. There are some steps that might help– gender-neutral parenting, less sexist media, improved contraceptive access, legal equality for women throughout the globe, feminist awareness-raising– but it’s unclear how much any of those steps could accomplish. They may very well be useless or even counterproductive.

I suspect a lot of disagreement about AI risk comes from people placing it in different categories. Many people who don’t think AI risk is a good thing to spend money on think that AI risk is as speculative as smashing the patriarchy. Many people who prioritize AI risk think that it is about as speculative as lab-grown meat. A few people who prioritize AI risk seem to believe it is as speculative as deworming medicines.

I think illuminating the reasons people disagree about AI risk is just one of the many ways that this framework would improve discussions about near-term and long-term causes, and I hope this framework will be more generally useful in the future.
