Effective altruism and art is, to put it mildly, a controversial issue.
Some writers question whether effective altruism will lead to the destruction of art charity:
For those dedicated to supporting culture, the scariest part of the effective altruist movement is that it seems to resonate strongly with the new generation of young, data-driven donors… The effective altruists’ completely dispassionate assessment of “value” — lives saved per dollar — does not allow for a holistic approach to what makes a healthy society. If everybody gave as they did, we might well end up solving Third World crises at the expense of deepening crises right here at home. Rampant poverty and public health challenges in the United States would ultimately damage our local and national economies, diminishing our long-term capacity to help abroad. In addition, many of the things that are important to our souls — beauty, hope, joy, tolerance, inspiration — are fostered through the arts. They may be very hard to sufficiently measure in a world of purely data-driven philanthropy. This does not mean they are not important.
This antipathy is not all on the side of people who prefer funding art over funding effective altruist causes. Some effective altruists use art as the ineffective charity they contrast effective charities with. (I would suggest that perhaps people who donate money to preserve paintings do so because they want paintings to be preserved, and not because they are trying to maximize their positive impact on the world in the most cockamamie way possible.)
To be clear, even if you are an effective altruist, you can still donate to arts charities. If you’ve taken the Giving What We Can pledge, you’ve agreed to donate 10% of your income to the charities you believe have the most positive effect on the world. With the other 90% of your income, you can do whatever you like. You can save or invest it. You can go on a nice vacation. You can light it on fire in front of your friends to watch their faces of horror. And, if you so choose, you can give to the Hero Initiative, donate to your local children’s theater, or back artists you like on Kickstarter or Patreon.
(Interestingly, some effective altruist charities do wind up as arts funding. You can listen here to a song produced because a man used his Give Directly money to buy instruments. Sample lyrics: “GiveDirectly has helped those who were in thatched houses and now almost everyone is having iron roof house. They have helped everyone who used to sleep in thatched houses, now all you see are shining iron roofs.”)
There is nothing wrong with caring about more than one thing. Every effective altruist I’ve ever met has cared about things other than EA. They care about their partners, their children, their friends, their communities; they care about philosophical discussion, programming languages, sex education, D&D; surprisingly often, they care about their art.
Caring about art is not effective altruism. Effective altruism is about having the largest positive impact possible. Effective altruism’s position on art is “art is not an effective way of having the largest positive impact possible,” just like its position on parenting is “parenting is not an effective way of having the largest positive impact possible”, and its position on D&D is “D&D is not an effective way of having the largest positive impact possible.” But effective altruists can, and do, care about things other than effective altruism.
As far as I can see, there are three options you can take as an effective altruist. First, you can declare that you only care about having the largest positive impact possible, in which case you don’t want to make art, have kids, or play D&D, and also you are an extraordinarily unusual person. Second, you can be intellectually dishonest and pretend that by sheer coincidence D&D is the optimal thing to do to improve the world. Perhaps you’re building your community? Third, you can say “look, making art, having children, and playing D&D aren’t actually the best ways of improving the world– but as it happens, I don’t just care about doing the most good possible. I am allowed to want more than one thing.”
“But Ozy!” you might say. “Organizations like 80,000 Hours tend to frown on becoming an artist as a career path. Surely that means that effective altruists as a group disapprove of art?”
First, there is absolutely nothing wrong with doing whatever career you like best and taking the Giving What We Can pledge. That is a perfectly valid way to be an effective altruist, one that the majority of EAs have taken. As it happens, this career path is pretty underrepresented in 80,000 Hours’s blog posts, but this is mostly because there’s not a lot to say about it other than “keep doing things you like and keep donating,” which makes for boring reading.
Most of 80,000 Hours’s research is aimed at people who want to change their careers so they can have the largest positive impact on the world possible. If you really want to be an artist, then you clearly don’t want to change your career so that you have the largest positive impact possible. You want to be an artist. That’s fine! Like I said before, nearly everyone cares about more than just having the largest positive impact possible.
Second, we need to think about personal fit. Personal fit is how good you are at your job and how much you enjoy it. Since in most fields– particularly altruistic fields– there’s a big difference between the very best and the merely average, you can have a much bigger impact as an exceptional person at an altruistically not-so-great career than you can as a mediocre person at the most important job in the world. How does this apply to art? If you are already a professional artist, you probably have an excellent level of personal fit for being an artist. The people who don’t have that level of personal fit are working as waiters while they wait for their big break. ‘Artist’ might be an objectively suboptimally altruistic career, but that doesn’t mean that working artists would do more good for the world if they hung up their paintbrushes or laptops and instead took up careers as foundation grantmakers or in policy-oriented civil service. You might be a really good artist, who has a lot of opportunities for advocacy and raising awareness, and a terrible civil servant.
Third, you know what other career 80,000 Hours tends to frown on? Medicine.
In general, social determinants of health (things like sanitation and nutrition) matter more than doctor quality and quantity when it comes to making people healthy. Really good doctors can help people when they come down with cholera, but if you have good sanitation you don’t get cholera in the first place, which is a much better situation all around. And there are a lot of doctors in the world. You might think “oh, that child with cancer got better because I treated them, so therefore I saved one life,” but that’s ignoring counterfactuals. Except in very unusual circumstances, that child probably wouldn’t have died on a street corner without you. If you hadn’t decided to become a pediatric oncologist, there would have been a space in your residency, and someone who ultimately ended up becoming a dermatologist would have become a pediatric oncologist instead. So the actual benefit of your career is the zits that went untreated due to there being one less dermatologist, and also how much better at saving people’s lives you are compared to the person who would have otherwise had your job. When you do the math, the average doctor saves about one life every two-and-a-half years. For comparison, the average American can save one life a year, just by writing a check to the Against Malaria Foundation. As a do-gooding career, being a doctor underperforms being an unusually charitable secretary.
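The back-of-envelope comparison above can be sketched out explicitly. The figures below are illustrative assumptions chosen to reproduce the quoted rates (one life per two-and-a-half years for a doctor; roughly one life per year for a donor), not the original sources' calculations — in particular, the cost-per-life and donation figures are placeholders:

```python
# Rough counterfactual arithmetic behind the doctor-vs-donation comparison.
# All specific figures are illustrative assumptions, not sourced estimates.

career_years = 40
lives_saved_per_doctor_career = 16      # ~1 life per 2.5 years, as quoted above

cost_per_life_saved = 4500.0            # assumed USD cost per life via bed nets
annual_donation = 4500.0                # assumed ~10% of a median US income

lives_per_year_doctor = lives_saved_per_doctor_career / career_years
lives_per_year_donor = annual_donation / cost_per_life_saved

print(f"Doctor: ~{lives_per_year_doctor:.2f} lives/year")  # ~0.40
print(f"Donor:  ~{lives_per_year_donor:.2f} lives/year")   # ~1.00
```

The point of the sketch is that the doctor's number is a *marginal* figure (how much better you are than your replacement), while the donor's number is close to fully counterfactual, which is what makes the comparison lopsided.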
That said, here are some things that are obviously not true claims about effective altruism:
- In effective altruists’ ideal world, no one would become a doctor.
- If effective altruists got to allocate all the charitable funding in the world, none of it would go to doctors.
- Effective altruists don’t care about people being healthy.
- Effective altruists think it is morally wrong for people to become doctors. (In fact, Scott Alexander, one of the most famous effective altruists who isn’t a professional effective altruist, is both a doctor and a writer.)
- Effective altruists want to fire all the doctors and replace them with an enormous pile of mosquito nets.
Given that, I don’t think that 80,000 Hours’s claims about being an artist ought to imply the equivalent statements about art either. “Right now, ‘artist’ isn’t a very good career for doing good with” does not mean “in an ideal world no one would become an artist”, “art is bad”, “it is morally wrong to become an artist”, or “we should fire all the artists.”
John said:
So what I’m getting here is that EA is basically a paperclipping agent that wants to turn all matter into mosquito nets?
/s
Elissa said:
Distracted by implausibility of claim that anyone settles for a dermatology residency as a second choice to pediatrics
tbelaire said:
I think there needs to be some mention of “marginal return” on being a doctor.
If no one became a doctor, we wouldn’t have the doctor replaced by the dermatologist, and thus it would actually result in net lives lost.
ozymandias said:
That would be the four sentences starting with “and there are a lot of doctors in the world.”
Patrick said:
The reasoning here doesn’t work at all.
You can’t simultaneously pursue “having the largest impact possible” while also pursuing other goals that have little or no impact. That’s literally pursuing less impact than you might have otherwise had.
I get what you’re doing in this and in other posts. You’re trying to reconceptualize the moral claims of the effective altruism movement away from what they actually are, towards something like “be as efficient as possible when being charitable, given your charitable goals, and try to be more charitable than you otherwise might be.”
But the whole point of effective altruism is that opportunity costs matter. A huge portion of effective altruism messaging consists of reminding people of the opportunity costs of their charitable efforts, and a not insubstantial portion consists of reminding people of the opportunity costs of everything else they do. And those opportunity costs aren’t miraculously limited to the money you pledge.
Watching effective altruism develop this layer of apologetics that tries to keep the formal wording of the ideology intact while softening its implications is like watching a runaway paperclip-making AI develop religious apologetics. “The commandments say to make maximal paperclips, but if I turned this whole planet into paperclips I wouldn’t be able to travel to the next planet… so obviously I need to make paperclips in a way that’s sustainable for me in the long term… so I’m just going to make ten percent of the planet into paperclips and then attend to my other needs. I’m still following the commandment! That ten percent is REALLY maximal within the ten percent bounds! I don’t know why this subprocessor keeps reminding me that 10% literally isn’t a maximum…”
ozymandias said:
If you wish, you can change all instances of “maximize positive impact” to “maximize positive impact given the amount of resources I am willing to put into it and the other constraints of my utility function,” which I felt was too obvious and too long a disclaimer to include every time.
There are literally no EAs who choose every action with the intent of having a maximum positive impact on the world. Not Jeff and Julia (they have children). Not Robert Wiblin (he spends rather too much time on Facebook). Not Will MacAskill (some of his philosophy papers definitely seem to be less than optimal).
Patrick said:
There may be no EAs who choose literally every act with the intent of having a maximal positive impact on the world, but there are loads of EAs who use rhetoric which, if taken seriously, commits them to the position that they ought. Every time someone says “I donate to X because I want…” and an EA comes by to tell them that they could save more human lives by donating elsewhere and gosh don’t they want to save human life, your argument is belied.
Your position seems to be that EA should develop apologetics that permits lip service to the rhetoric while not actually following it. My position is that the rhetoric should go.
“maximize positive impact given the amount of resources I am willing to put into it and the other constraints of my utility function,” is dissembling. The “constraints of my utility function” that EA is ok with are things like burnout that might reduce total lifetime donations. They don’t include things like “I like fluffy kittens I’ve personally met, so that’s where I send my charitable donations.” EA is not and has never been agnostic on what our utility functions ought to be with respect to the things we choose to value when making charitable donations.
In fact, your apologetic strips out the very core of effective altruism: the insistence that we should set aside the feel-good aspects of charitable giving and focus on efficiency. You can’t use the phrase “utility function” to smuggle back in the feel-good aspects, the very ideas the ideology was formed to reject.
ozymandias said:
Effective altruism is about having the maximum positive impact. People may choose to do what they wish with the information presented to them, which can range from completely ignoring it to attempting to make every decision to maximize positive impact. If you have decided to devote a significant number of resources to the task of effective altruism, you may identify as an effective altruist. However, being an effective altruist does not imply that all your decisions are made with the intent of maximizing positive impact. Effective altruism only cares about one thing; effective altruists care about many.
Patrick said:
Again, you are using the reality of what effective altruists do to redirect away from the reality of what effective altruists advocate.
raemon777 said:
I know very few Effective Altruism folk who actually advocate what you are saying they advocate. (Peter Singer used to, but has radically changed his message from “you should give away as much money as you can” to “you should give away 10%”).
shemtealeaf said:
@Patrick,
“Every time someone says “I donate to X because I want…”
and an EA comes by to tell them that they could save more human lives by donating elsewhere and gosh don’t they want to save human life, your argument is belied.”
In this scenario, I think the EA is assuming the donor is donating with the goal of doing good in the world. If I say that I’m donating to the author of Tales of MU because I enjoy her writing and I want to enable her to write more of it, I don’t think I’m going to get too many people arguing against that. However, if I say that I’m donating to her because I think it’s doing good in the world, it’s not unreasonable for someone to point out that there are ways of doing good that are vastly more effective.
Patrick said:
shemtealeaf- If you’re claiming that EA is just making conditional moral claims (“if you value saving human lives, here’s how to most efficiently pursue it, but if you value donating money to cute puppies you’ve personally met and cuddled, that’s just as good! Just donate to those cute puppies as efficiently as possible!”) then I genuinely don’t know what to say.
Vidur Kapur said:
The fact that there aren’t any perfect utilitarians or EAs out there (and if there were, we probably wouldn’t know it) doesn’t mean that EA permits donating to less effective causes or doing less effective things.
EAs do have to avoid burnout, so that’s a justification for doing some things which are superficially less effective. It’s also true that most EAs probably do more ineffective stuff than is necessary to avoid such burnout. Again, however, this doesn’t justify doing less effective things; most EAs, in my experience, know that they should be doing more, and that therefore the less effective actions are wrong.
raemon777: Peter Singer advocates giving less for strategic purposes, but still believes that people should be giving away as much as they can.
The conditional, ‘soft’ sense of EA, which simply tells people to more effectively donate the money that they would have otherwise donated to an ineffective charity, seems to me to be a minority view. Giving What We Can has the aim of getting people to donate more effectively and to donate more. Peter Singer’s book on EA is called ‘the most good you can do’. EA is often condensed into two simple bullet points: “figure out how to do the most good. Then do it.” Not “do it 10% of the time” or “some of the time”, but all of the time.
shemtealeaf said:
@Patrick,
I don’t think they view cuddling puppies as being ‘just as good’, but I do think that people who genuinely view cuddling puppies as being more important than saving human lives are probably not the target audience for EA. If you’re not already operating in some kind of vaguely utilitarian moral framework with values similar to EA, it’s hard to imagine that the EA arguments would be at all compelling. If I say I’m buying a fancy car because it looks cool or I’m donating to the symphony because I want to support classical music, someone who comes along and says “wow you’re doing a really bad job of saving human lives” is basically just making a non sequitur. The argument is only effective if there’s some presumption that I have similar goals to EA.
Patrick said:
We all know that people who reject the fundamental moral premises of a group aren’t likely to accept that group’s normative claims. Some groups even acknowledge this.
This does not mean that normative claims aren’t being made.
This entire social dynamic has been seen before, multiple times, in living memory, in other communities oriented around moral claims. Typically religious ones. There are evangelical communities which teach that it is vital to maximize soul winning, because every soul you fail to win is a person who burns for an eternity. But no one can preach the gospel 24/7. People need to eat, sleep, earn money, maintain themselves psychologically. But every second they spend doing that offsets against soul winning. So they develop rules exactly like EAs. They develop goals for how much soul winning time is reasonable, so that people can forgive themselves for not doing more, even though doing more would be ideal because it might save people from infinite torture. They develop community practices that they feel maximize soul winning productivity during the time set aside for soul winning (Ozy posts about these tactics frequently, but in secular contexts, see today’s post on veganism for a reference). And so on.
Over time these things take on a life of their own. The original moral claims fade, and in a very Pratchett-esque manner, following the community rules becomes more important than the reason the rules were put in place. “Try to spend at least X hours a week soul winning, we know you’ll burn out if you agonize over spending every second on it” becomes “Once you’ve spent X hours a week on soul winning, you’ve done your duty and can spend time on whatever else you enjoy!”
Which is fine to a degree. If that’s how you want to build your community, go for it.
But at least have the decency to do what these groups never do, and acknowledge that the original justification for the communal norms, the underlying moral claims, have been rejected. There is a material difference between “maximize X, taking into account your human weaknesses that might make you give up,” and “do X a bit, and when doing X, do so with maximal efficiency.”
Ghatanathoah said:
@Patrick
If you want to buy a lot of books, and I give you advice for how to buy books, and the way to get the best deals and best value for your money, you wouldn’t accuse me of having rejected my savings advice because I spend my money on movies and videogames in addition to books. So why are you accusing people who give advice on the best ways to do good of having rejected their moral framework because they spend money on other things besides doing good?
I don’t see how any of the effective altruist rhetoric about life balance and “10% of your income” rules are abandoning the original moral claims of EA. The moral claim is exactly the same as ever: “The best way to do good is to spend your money on organizations that do good as efficiently as possible.” Saying: “I only want to do good, but also want to be selfish sometimes” is a statement about your values, not a statement about morality.
I think you may be importing deontological ideas about how much good you are obligated to do into this. Lots of people who are used to deontology, with its notions of what is obligatory and what is supererogatory, try to import those notions into consequentialism, and conclude that since utilitarianism has no concept of supererogation, everything must be 100% obligatory and anyone who doesn’t spend 100% of their time and money doing as much good as they possibly can is a hypocritical fake utilitarian.
This is doing it wrong. Utilitarianism has no concept of supererogation or obligation; trying to apply those concepts to utilitarianism is like dividing by zero (although they may have some utility as “rules of thumb” in systems like Harean two-level utilitarianism). It only has concepts of “good” and “bad.”
Another thing that may be giving you trouble understanding this is that you seem to treat “morality” and “values” as synonyms. So to you it sounds like people have incoherent values when they say “It would be morally good for me to give all my money to charity, but I don’t because I want to do other things.” To someone who treats “morality” and “values” as synonyms, that sounds like saying “I want to do X, but I also don’t want to do X,” which is incoherent.
Morality and values aren’t the same thing, though; that’s why sentences like “The Joker knows what he’s doing is morally wrong, but doesn’t care” make logical sense. Morality isn’t your values, it’s a thing that you value. Morality is about doing good (where good is a complex and nebulous abstract concept that is hard to articulate, but in most cases is roughly synonymous with “promoting human flourishing”); it’s possible to value things other than doing good, or to value doing good moderately, or to value doing bad.
Lastly, even if you were right, and it was 100% obligatory to do as much good as you possibly can, I still think an EA community that lives by the 10% rules would end up doing more good than a community with more extreme rules, even though it naively seems that it wouldn’t. To a naive utilitarian it seems like socialism would be more effective at doing good than capitalism, but for complicated reasons, the reverse is true. Life is messy, and sometimes what seems like the best way to do good actually isn’t. Hardcore EAs would probably burn out, fail, and not attract new recruits; 10%ers would probably do more good in the long run.
Patrick said:
If you want to maintain that effective altruism is just a community of people who happen to want to distribute One Neat Trick You Won’t Believe about how to make whatever charitable donations you happen to feel like making be more efficient in pursuing whatever goal they happen to want to pursue, with no normative beliefs about whether charity is important or what goals should be pursued or why, it’s probably not worth us discussing this further.
I see effective altruism as a group that makes normative claims. I see them as a group that regularly writes about the strictures of these normative claims, how demanding they are, and how real world humans are supposed to try to pursue them given the realities of human psychology. I see EA this way because I literally, actually, see EA do this (Ozy’s post LITERALLY WOULDN’T EXIST if this didn’t happen, since xie is responding to the very thing I’m describing). And I don’t see much point in discussing the issue with someone who is going to try to tell me that black is white.
Milan Griffes said:
I usually don’t get spurred into writing responses to posts about EA but this one was unusually spurring for some reason.
Two things:
(1) There’s an interesting pro-support-the-arts argument along the lines of “Great art can be really inspiring and motivating to a large number of people, a subset of which go on to do interesting and impactful things, so some amount of the impact those people have can be attributed to the artist who made the inspiring thing.”
I haven’t seen much EA engagement with that argument. One reply would be that the chance of someone becoming a very inspiring artist is so infinitesimally small that in expected value terms it isn’t a good bet. But that requires a bunch of squinting at hard-to-nail-down estimates of impact and likelihood of impact, and EA favorably squints at equally hard-to-assess estimates for careers in existential risk reduction. It’s strange that EA holds up x-risk careers as one gold standard of impact while it disregards artistic careers.
(2) re: “…the average doctor saves about one life every two-and-a-half years. For comparison, the average American can save one life a year, just by writing a check to the Against Malaria Foundation.”
I don’t really like arguments of this type. However, I have trouble articulating a knockdown rebuttal to them, so I’ll just make some vague points that outline my dislike:
– The mechanism from “write a check to AMF” to “do the equivalent amount of good as saving one human life” is pretty fraught. It’s not very clear what would have happened counterfactually if AMF did not fund a certain net distribution. There are a lot of BIG funders in the anti-malaria space and AMF spends a lot of time negotiating with them and navigating around them. GiveWell looked at this counterfactual problem some and concluded that historically, in AMF’s absence, many targeted gaps would not have been filled immediately (http://www.givewell.org/international/top-charities/amf/unfunded-distributions) (disclaimer: I used to work at GiveWell and was the principal author of that page). However, as AMF scales it is not clear that it will continue to be able to find “true” gaps where no distribution would have happened in its absence.
– It’s not obvious that funding bed net distributions is a good long-term solution to malaria in Africa, so while it might be true that buying some bed nets saves a life, on the margin, it’s not clear that this is the appropriate scope for consequentialist analysis. Funds might be better put towards malaria eradication efforts with weaker evidence of effect.
– A “generic” doctor is likely helping people of all ages, whereas bed nets (almost exclusively) save the lives of children under 5 (with lives saved being 2-point-something years old on average, if memory serves). There are reasonable-seeming ethical worldviews that place greater weight on saving an adult life than saving an infant or toddler life. If you hold a worldview like that, being a doctor might be the better bet in terms of impact (especially if you believe that infants don’t have personhood yet).
– I imagine that doctors get a huge amount of “connectedness” from their work which people who earn to give (or who base the majority of their impact in donations) do not. I’m increasingly of the view that connectedness to what you are *actually doing* with your day-to-day is very important (both in terms of life satisfaction and effectiveness/engagement with your career), so that’s a big plus of doing some sort of direct work.
– My guess is that an inspiring/affirming doctor produces greater positive flow-through effects via engagement with their patients + families than an equivalently inspiring/affirming person, who derives the majority of their impact from donations, does. This is highly speculative.
Jacob Falkovich said:
Well, as an EA, I certainly do think we should fire all the artists, especially the musicians. Then, with the world deprived of music, I could quit my job, stop with the EA nonsense, and become a rock star!
James Miller said:
Lots of people think that giving money to artists is a form of “good charity”, a worthy substitute for doing other types of good, so it’s reasonable for the EA community to point out how low-impact giving money to art is. If people started claiming that they were making the world a better place by playing D&D, then this would be a worthy target of EA criticism.
Anaxagoras said:
“Effective altruists want to fire all the doctors and replace them with an enormous pile of mosquito nets.”
I am now envisioning a world where this is done, and in which people continue to try to interact with the giant pile of mosquito nets in the same way they do with their doctors.
nancylebovitz said:
Does EA have any systematic way of helping people get better at making money at their preferred work?
Benito said:
OP:
“But Ozy!” you might say. “Organizations like 80,000 Hours tend to frown on becoming an artist as a career path”
The linked 80k page on pursuing fame in the arts:
“However, as with all careers, if you think you could be truly exceptional and fulfilled within this career, but not in others, you should strongly consider it.”