I have never quite understood why the utility monster thought experiment was supposed to be a “checkmate, utilitarians!”
The utility monster, for those of you who don’t know, is a hypothetical beast who derives much more utility from everything than anyone else. He finds cookies, tea, music, and so on a thousand times more pleasurable than anyone else; conversely, he finds even a pinprick a thousand times more painful. Therefore, the utility monster’s welfare outweighs everyone else’s, and you should sacrifice their utility to him.
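A back-of-the-envelope version of the argument, with toy numbers of my own (nothing here is from Nozick’s original):

    # Toy numbers, purely illustrative: a cookie is worth 1 util to each
    # of 100 ordinary people and 1000 utils to the utility monster.
    cookies = 100

    utils_if_shared = cookies * 1      # one cookie per person: 100 utils
    utils_if_monster = cookies * 1000  # all to the monster: 100,000 utils

    # A total-utility maximizer hands every cookie to the monster.
    assert utils_if_monster > utils_if_shared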
Now, pigs have preferences over states of the world, can feel pain and pleasure, and can probably even feel my handwavey concept of eudaimonia. Therefore, under most formulations of utilitarianism, pigs matter ethically. However, most people – even most animal-rights advocates – agree that humans matter more than pigs: if you have a choice of giving a delicious meal to a pig or a delicious meal to a human, you should probably not give it to the pig.
(This, of course, does not justify torturing a pig to feed a delicious meal to a human.)
If you ask people why this is, you’ll get a lot of answers like “humans are smart, pigs are dumb” or “humans are conscious, pigs aren’t”, which cash out as: humans feel pain and pleasure more intensely, we are more capable of eudaimonia, we hold stronger preferences; that is, humans are utility monsters.
defectivealtruist said:
i think that’s kind of the point? pigs and humans form an existence proof for utility monsters, which then implies that an entity could exist that is a utility monster with respect to humans. that goes against most people’s moral intuitions.
ozymandias said:
Most people are speciesist. 🙂
The Smoke said:
Does anybody else in the EA community actually confess to utilitarianism? This feels really strange and arbitrary to me.
Thinking GiveWell and similar things are a good idea seems fine to me, as in those cases one already has a conception of what constitutes good and simply tries to be most efficient about pursuing it. Trying to derive moral judgements from the fixed idea of a “utility value” seems unfounded.
The only reason I can imagine why one would stick to that is a combination of
1) one wants to “feel moral” (understandably) and therefore needs a moral system to tell good from bad, and
2) one really, really doesn’t like the idea of being irrational, and tries to avoid this by deriving everything from a small set of rules/from a simple principle.
Now while a lot of people (e.g. many religious ones) care about 1), normally their moral system comes from what they pick up from tradition and mainstream culture, so they don’t care about 2) that much.
However, anybody for whom 2) is relevant is in a dilemma, because the best you can hope to achieve is consistency of your moral guidelines, but that doesn’t make them in any sense better than any arbitrary set of rules you find written in a book.
I can see one benefit here when discussing morality with religious people: when they shout “god!”, you can shout “reason!” back.
Now, particularly to Ozy: I assume this is all familiar to you and you have thought about these arguments before, but wouldn’t it be more honest to say “I like to help people, when I can” than what I take away, namely “When presented with two possible choices I would take the one that maximizes total expected utility” – which is probably not descriptive of how you would act most of the time, and which would moreover have to be supplemented with “, after figuring out which factors are relevant for the utility function”?
stargirlprincess said:
@The Smoke
Almost all utilitarians think utilitarianism describes what you should do. It is not meant to describe what a person would actually do. Utilitarianism is almost always meant as a prescriptive not a descriptive ethic.
The Smoke said:
What I wanted to say at the end is that at least for UTILITARIANS it should have some descriptive power (unless they are just super-theoretically-minded), in order to demonstrate at least some practicality, which is important when one wants to justify it as a prescriptive ethic.
Somehow commenting is the opposite of doing math, since it is all clear in your head until you try to write it down.
stargirlprincess said:
@The Smoke
I think you are thinking about this differently from how utilitarians think about it. No one I know believes that utilitarianism describes how they (or any other human) act. Utilitarianism is meant as a description of how an ideal moral agent would act. Utilitarians do not expect anyone to act like an ideal moral agent.
However, utilitarianism does allow one to make practical decisions in many areas! For example, a utilitarian framework can be used to determine which charity to give money to. Utilitarian frameworks are also very useful for understanding political issues; they can, for example, be applied to understanding the regulation of psychiatric drugs.
However, no one thinks they act like an ideal utilitarian agent. Nor is anyone claiming this.
LTP said:
@Stargirlprincess
Of course, if an ideal utilitarian agent is a being so foreign to and disconnected from human psychology and experience as to be impossible for any human being to be, then that is a strike against utilitarianism in my eyes. To say an ideal moral agent would be a perfect utilitarian is as absurd as to say an ideal moral agent should gain Superman’s powers and use them to change the world. Further, such an idea would seem only to breed scrupulosity and indecision, for any act that doesn’t maximize utility is a wrong act.
Of course, you’re gesturing at the view that utilitarianism is true, but that the best moral course of action is for everybody to act as if it wasn’t true. There are some utilitarian philosophers who even suggest that the educated elite should lie about morality to the masses. I find that while that perspective is not a refutation of utilitarianism, it is very odd and is one of the reasons I’m wary of it.
The Smoke said:
I argue that the first example has nothing to do with utilitarianism, since the situation is already such that one has a certain amount of money and wants to do the most good with it, maybe with certain constraints as to what group of people the money should go to. So here a utilitarian framework is at best tautological.
As a guideline for political decision-making, as in your second example, I admit that it at least seems to me like something one could do, as the whole process is normally arbitrary at best.
But I get the feeling that the point most EA people (e.g. Scott Alexander) are making is that politics should be more rational and consistent in its decision-making (yes, of course!), which only resembles utilitarianism insofar as you will do some utility calculations.
I feel no one would actually endorse it as an overall moral framework. (If you do, why would you? WHY WOULD YOU WANT TO DO THAT?)
To be clear, I don’t have so much of a problem with the extreme results you obtain in certain thought experiments, I just don’t get that tiny step where one goes from “this is an interesting idea” to “this is how we should act”.
stargirlprincess said:
@LTP
“Of course, you’re gesturing at the view that utilitarianism is true, but that the best moral course of action is for everybody to act as if it wasn’t true.”
No, I think the best course of action would be for everyone to follow utilitarianism. I just do not expect this to happen.
“ideal moral agent should gain Superman’s powers and use them to change the world”
If possible, an ideal moral agent would gain Superman’s powers and use them to make the world better (I do not like the phrase “change the world”, since it implies making large-scale changes, which are dangerous without very thorough knowledge). Obviously most people are incapable of gaining Superman’s powers. But one could think of “earning to give” as a very mild form of “making yourself more powerful in order to do good.” Utilitarianism does imply you should make yourself more powerful if your power will be used for good.
The main problem with “making yourself more powerful in order to do good” is that the vast majority of people who try to gain power “for good” wind up doing massive amounts of harm. All this implies, however, is that one should be very conservative in how one uses one’s power and influence to make the world better. Do things that are guaranteed to make the world marginally better instead of trying for radical change. It so happens we live in a world where there is massive uncertainty about how to make things better. This does not change the goal of “make things better if you can.” I will note Eliezer realizes all of this and even wrote an article saying “For the good of the tribe, do not take power for the good of the tribe.” Humans run on corrupted hardware. But if you were Superman you could surely do some low-risk non-combat stuff that makes the world better.
stargirlprincess said:
Another PoV on utilitarianism:
Here is a useful comment from LW: “Let’s call people who view morality as what is obligatory Moralos, and people who view morality as what is preferable Moralps.
Moralos will view Moralps as unjustly demanding and completely hypocritical – demanding payments on a huge debt, but only making tiny payments, if any, toward those debts themselves. Moralps will view Moralos as pretty much hateful – they don’t even prefer a better world, they want it to be worse.”
Utilitarianism, as used by most people, does not naively say any action is “wrong.” It just implies some actions are worse than other actions. It’s not clear you can ever know which action “maximizes utility”, so it’s impossible for the theory to ask you to maximize utility. However, utilitarianism does let you compare options.
Zykrom said:
I tend to see it more as “the supposed utility-monsterness of humans is a paper-thin veil for speciesism” than as the utility thing being how people actually think, with the speciesism being incidental.
Ezra said:
Humans are utility monsters compared to pigs; what about a creature or species that is a utility monster compared to humans?
Ano said:
For example, what if we encountered an alien species that was more advanced than us? Or what if we created an AI with many times the capacity for pleasure of any human? Would they be justified in destroying our planet to build an interstellar superhighway?
Maybe the AI risk guys have been going about it all wrong. Maybe the increase in utility means that we’re actually morally obligated to create a superintelligent AI as soon as possible and let it wipe us out.
stargirlprincess said:
A serious risk of superintelligent AI is that we might create a powerful AI that is not conscious, or that does not experience “eudaimonia” or “utility” from achieving its goals. If such an AI wipes out the earth, we have effectively killed off all the known morally relevant life.
Rangi said:
…What about them? Morally, they matter more than we do, just as we matter more than pigs.
We could ignore that, of course, and try to fulfill our own values at their expense, but that wouldn’t be moral, any more than fulfilling your tribe’s values at the expense of everyone else.
OrneryOstrich said:
What about the inverse utility monster? Someone who derives zero or negative utility from life doesn’t deserve to live, right?
davidmikesimon said:
Such a person wouldn’t want to live.
Creutzer said:
If you’re a preference utilitarian, then this is the answer and you’re fine. If you’re a hedonic utilitarian, then this is more worrisome, because it might be the case that that person does want to live (clinging to the last, unwarranted hope for things to get better).
ozymandias said:
In theory, sure, kill the person.
In practice, if someone says they don’t want to die, it is more likely that you are mistaken about their hedonics than that they are.
J_Taylor said:
The utility monster doesn’t just experience more pleasure than humans. The utility monster experiences more pleasure than any aggregate of humans. That is, the utility monster does not receive diminishing returns from eating all the delicious meals, leaving none for humans or pigs. (An obvious further consideration is a sadistic utility monster.)
>that is, humans are utility monsters
Probably yes, but only most of the time. My biggest problem with utilitarianism is that humans are complicated and will self-modify into utility monsters if that is in their interest.
Ghatanathoah said:
>My biggest problem with utilitarianism is that humans are complicated and will self-modify into utility monsters if that is in their interest.
It seems like we should precommit to not honor the monstrousness of self-modified utility monsters, for game theoretic reasons.
ozymandias said:
It depends on what you mean by “self-modify into utility monsters.” If my friend is depressed and incapable of deriving pleasure from anything, and then self-modifies into a nondepressed person, he is certainly moving in a more utility-monster-ward direction, but that seems quite positive. Conversely, self-modifying to be more hurt by things so you get your way is anti-social behavior. (But how often do people do that?)
Ghatanathoah said:
Maybe a depressed person is a bit of a disutility monster, so becoming nondepressed is less monstrous.
I don’t think people often consciously modify themselves to be hurt more by things so they get their way. But I do think that people sometimes become more hurt by offensive stimuli the more they limit their exposure to them. So people demand that things that hurt them be removed, which reduces their exposure, which makes them hurt more easily, which makes them want to ban more things because more things are hurting them.
Autolykos said:
> I don’t think people often consciously modify to be hurt more by things so they get their way.
But subconsciously, people do that a lot (see also: codependency). And your suggested precommitment makes just as much sense in that case – nothing good will ever come from staying in a dysfunctional relationship, and feeding the utility monster only lets it grow.
J_Taylor said:
>It seems like we should precommit to not honor the monstrousness of self-modified utility monsters, for game theoretic reasons.
It seems like we may have already done so, or something akin to this. We clearly are not homo economicus.
stargirlprincess said:
” The utility monster experiences more pleasure than any aggregate of humans. ”
“My biggest problem with utilitarianism is that humans are complicated and will self-modify into utility monsters if that is in their interest.”
This seems like equivocating between two very different meanings of “utility monster.” Humans are not even close to capable of self modifying so they experience more utility than large aggregations of random other humans.
J_Taylor said:
>This seems like equivocating between two very different meanings of “utility monster.”
You are entirely correct in this. A more formal account is really necessary, and I am sadly lacking in that department currently.
Hedonic Treader said:
The “no diminishing returns” part is important. Humans are clearly not utility monsters in this regard.
You might say, “Humans can derive more pleasure out of a given resource than pigs” (and even that is debatable), but that does not make humans utility monsters.
Even if you see no diminishing returns in the *number* of humans, and therefore want to make more of them, it is hard to argue humans are *optimal* for turning resources into pleasure.
Therefore, the optimal utilitarian answer would be to invent artificial utility monsters and subsidize those, rather than humans, once technology allows it.
So if one wants to be speciesist, they can be speciesist, but the utility monster concept doesn’t specifically help them defend it.
argleblarglebarglebah said:
Part of the point of the original thought experiment is that the utility monster doesn’t just derive more pleasure by a constant factor; it derives more pleasure from the same thing without diminishing returns.
Or in other words: to actual people, most things are not as fun the ten-thousandth time as they are the first time. If you have a million dollars, another dollar is not worth as much to you compared to if you had one dollar. The utility monster is a thought experiment that said “what if there was a thing that valued its millionth dollar as much as its first dollar?”. Since this thing values stuff more than anyone else, it should obviously have all the stuff, or at least almost all the stuff, and most other life on Earth ends up going extinct because it gets fed to the utility monster.
THAT’s the problematic conclusion, not just “it’s repugnant to have a thing that’s more morally valuable than people” but “the existence of a single creature can apparently make it a moral act to kill a planet.”
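A minimal numerical sketch of that distinction, under assumed toy utility functions (logarithmic, hence diminishing, returns for ordinary people; linear, non-diminishing, returns for the monster):

    import math

    def human_utility(dollars):
        # Diminishing returns: the millionth dollar adds almost nothing.
        return math.log(1 + dollars)

    def monster_utility(dollars):
        # No diminishing returns: the millionth dollar is worth exactly
        # as much as the first.
        return dollars

    budget = 1_000_000
    humans = 1_000_000

    # Either give each of a million humans one dollar, or give it all
    # to the monster:
    total_if_shared = humans * human_utility(1)  # ~693,147 utils
    total_if_monster = monster_utility(budget)   # 1,000,000 utils

    # The naive total-utility maximizer feeds the monster, and the gap
    # only widens as the budget grows.
    assert total_if_monster > total_if_shared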
ozymandias said:
That’s true, that is problematic, but I’ve seen “utility monster” used a lot to refer to the first thing and that is what I am talking about.
arielbyd said:
The survival of a single human isn’t obviously more important than the survival of a multicellular-life-free planet.
Illuminati Initiate said:
…It’s not?
Siggy said:
I don’t believe in (objective) utility monsters because I think the relative scaling of the utility of different people is arbitrary. That we scale every human’s utility to be about equal is a choice we make about our ethical system, not a fact of reality.
Of course, then I’m not really sure I believe in utilitarianism in the first place.
Illuminati Initiate said:
There is something complicated about qualia going on here, but I’m not sure how to put it into words (and I’m tired and need to study). Vaguely, I’m not sure what it feels like from the inside to be a utility monster.
Ghatanathoah said:
Yeah, I’ve often wondered if utility monsters are as ridiculous as amps that go up to 11.
Ginkgo said:
“I don’t believe in (objective) utility monsters because I think the relative scaling of the utility of different people is arbitrary.”
Arbitrary and amoral, except in specific settings, where utility is actually criterial.
For instance, medical care in the Army is typically rationed by the relative utility of the patient: pilots and generals get priority, then other officers, then enlisted people, then civilian dependents. The only reason this is acceptable is that everyone in the system has effectively bought into its primary premise, the primacy of the mission. Whatever advances the mission is good, whatever impedes it is bad. I can’t think of many other settings where this condition applies.
Lambert said:
What about within species? Should people who are very emotional be valued more than those who do not feel strong emotions?
tailcalled said:
Of course. This is obvious if we consider a concrete moral dilemma:
Suppose I slightly prefer Italian to Chinese food, while you can’t stand the taste of Italian food. We are going to a restaurant together. Should your preferences weigh more than mine, so that we go to a Chinese restaurant?
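In naive utility-sum terms (toy numbers of my own, just to make the asymmetry concrete):

    # Toy numbers: my mild preference versus your strong aversion.
    mine = {"italian": 1, "chinese": 0}     # I slightly prefer Italian
    yours = {"italian": -10, "chinese": 0}  # you can't stand Italian

    totals = {r: mine[r] + yours[r] for r in mine}
    print(max(totals, key=totals.get))      # -> chinese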
taradinoc said:
Therefore, it’s in your interest to amplify your preferences, say by convincing yourself Chinese food is terrible, in order to improve the chance of getting your way.
stillnotking said:
What if a cannibal would prefer to eat you more than you would prefer not to be eaten?
code16 said:
It seems like if something is so important to you that you’re going to feelings-hack over it, then it is *already* not a slight preference. That makes the most sense if you have an existing preference for something involved but not being made explicit – like, maybe every time you go to restaurants the other people choose, and you’d like a chance to choose. In which case, we should just add those preferences to the ‘voting’ to begin with, rather than having people decide to try to force-turn them into surface-level restaurant preferences.
Patrick said:
I’ve never understood why the above rejoinders are considered valid attacks on utilitarianism.
Let’s say that preferences are sufficiently malleable that I can convince myself to amplify my own preferences so they weigh more in a utilitarian calculus. So what? That entails that the utilitarian could respond, “the utilitarian calculus works out better if you don’t do that, so you have an obligation not to, and if you have done it, you have an obligation to undo it. Also, you might have an obligation to use that malleability to care less about those of your interests that are in conflict with others’, so that you can more effectively share.”
And as for the cannibal – he doesn’t, and I’m pretty sure he can’t. I’m not sure that the utility monster is even a coherent concept. What would it look like, or feel like, to get more utility from eating someone than they lose from being murdered? I’m pretty sure this is like responding to a mathematician with, “Ok, I hear you, but what if I divide by zero ANYWAY?”
stillnotking said:
There has to be some way the cannibal’s utility function could be greater than yours, unless utility functions are bounded, which is the essence of the problem!
Sniffnoy said:
The problem (or potential problem) isn’t deliberate self-modification, the problem is that brains can recognize that sort of incentive and respond to it, no conscious decision needed (or the possibility of such).
It seems to me this may be less a problem with utilitarianism though than with certain naïve attempts to implement it. After all, if your implementation leads everyone to engage in a feelings arms-race, this will probably lead to a net decrease in utility (however you’re measuring it). So you shouldn’t do things that incentivize emotional arms-races. (The question of course then becomes, what do you do? I have no idea. Of course, I’m not a utilitarian. 😛 )
tailcalled said:
The utility monster cannibal is essentially the utilitarian analysis of people eating animals.
Alex Godofsky said:
There are plenty of formulations of utilitarianism in which pigs are categorically irrelevant, not just “less important”. And not all of them are just arbitrarily speciesist.
davidmikesimon said:
Maybe it’s more accurate to see the human vs. pig moral inequality as being about similarity to the mind running the ethical system. We value human consciousness more than pig consciousness because it’s more familiar, not because humans have “more” consciousness (if it even makes sense to make that comparison).
From a human’s ethical standpoint, humans are the most valuable; there’s no utility monster more monstrous than us that we have to worry about unexpectedly giving priority to.
Protagoras said:
I always do this kind of comparison when I teach about utilitarianism. To further weaken the intuition that this kind of case is any kind of a problem for utilitarianism: whatever is to humans as humans are to pigs is perhaps reasonably described as a god, and lots of humans have thought sacrificing their own interests for the interests of gods they believed in made perfect sense.
LTP said:
I think it still is a problem for utilitarianism, because the “solution” reveals another weakness of utilitarianism: the denial of the value of particularist moral attachments. I believe my species is vastly more morally relevant than anything else, and I’m not sure you could convince me otherwise. I also believe my family is, and my friends and loved ones are vastly more morally relevant to me than other humans.
As an aside, utilitarianism could deny this problem by saying that pigs and cows aren’t morally relevant at all, only humans.
Protagoras said:
Much less obviously a weakness. Plenty of moral philosophers who weren’t at all utilitarian have thought impartiality is essential to ethics, so this is one of those “you can’t satisfy everyone’s intuitions” issues rather than a special problem where utilitarianism is at odds with everybody’s common sense.
LTP said:
You’re right, that’s very true.
Zakharov said:
Subjective biased utilitarianism is a perfectly valid form of utilitarianism.
Autolykos said:
It even makes kind of sense to care more about the utility of family and friends – just because you know their utility functions a lot better than anybody else’s.
On the other hand, you probably derive utility from making people close to you happy (and lose utility from seeing them unhappy), and making them happy draws them closer. This may allow you to turn families and circles of friends into small utility monsters. And it seems that society even respects this to a degree (like not forcing you to testify in court against relatives, or accepting “I have family” as a valid excuse for not taking risks).
Anon256 said:
This would seem to imply that people who are less intelligent or less able to feel emotions are less deserving of resources and e.g. medical care? But you just told off Scott for defending that position.
Autolykos said:
That may be correct in theory, but implementing it can get a little hairy. Who gets to decide how deserving which people are of what resources?
In practice, we seem to have settled for “is alive and member of homo sapiens” as a Schelling Point for deserving most basic resources (including medical care, at least in civilized countries), and leaving anything optional to the market (if you care more about it, you’ll pay more; if you’re smart and care about wealth, you’re likely to become rich).
thepenforests said:
I think the Culture series did a decent job of portraying the Minds as somewhat sympathetic utility monsters. I remember in one book or another there was an offhand reference to how the killing of a human or a drone by another civilization would only be a minor diplomatic incident, whereas the killing of a Mind would be a Big Deal. And at least while reading that passage I couldn’t really disagree with the logic. Yeah, *of course* Minds should receive more moral weight – they’re *Minds*!
(Of course the fact that the Minds themselves cared deeply about humans kind of complicates the moral calculus, but the overall point still stands.)
Ghatanathoah said:
Here’s another rather obvious example of a creature that gets way more utility from resources than other people but that most people don’t find problematic:
Imagine a lifeboat with three people. There is enough water on the lifeboat to extend everyone’s life by one day, or one person’s life by three days. The lifeboat will reach land in three days. In this case it seems obvious that the three people should draw lots and give the water to the winner.
The winner is a utility monster: they will derive decades of utility from the water, whereas the three people together would derive only days. But most people don’t consider this a problem.
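The back-of-the-envelope arithmetic, measured in years of life gained (the forty-year figure is my own assumption, not anything from the scenario):

    # Toy arithmetic: rescue comes in three days; the water buys either
    # one extra day each for three people or three days for one person.
    split_gain = 3 * (1 / 365)  # three people each gain one day, then die
    winner_gain = 40            # assumed remaining lifespan once rescued

    # Days versus decades: the winner-takes-all allocation dominates.
    assert winner_gain > split_gain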
Zakharov said:
I hypothesize that if one were capable of recognizing an entity as a utility monster, it would make perfect sense to offer great sacrifices to the utility monster. Protagoras describing them as a “god” makes more sense than describing them as a “monster”.
Zakharov said:
By “would make perfect sense”, I mean “would be intuitive”