One very common critique of hedonic utilitarianism is the wireheading objection. If you try to fill the universe with beings experiencing as much pleasure as possible, then the perfect world would consist of nothing but rats– a larger or more intelligent animal would use up resources better spent on new morally relevant beings– with a steady drip of heroin into their systems, and the infrastructure necessary to keep them alive and drugged. (If you don’t happen to think animals are morally relevant, feel free to replace “rats” with “humans”.) This seems, to put it lightly, counterintuitive.
That seems to be one of the major reasons that my friends have adopted preference utilitarianism instead. Unfortunately, preference utilitarianism leads one inevitably to the conclusion that the morally right action is to create beings whose preferences are as satisfied as possible. That suggests that the perfect world would consist entirely of immortal beings who don’t want anything except that the speed of light continue to be 186,000 miles per second.
That seems even worse than the hedonic utilitarian’s perfect world.
I think tiling problems are a problem with any utilitarianism that maximizes something that isn't absurdly vague and handwavey, and the only reason the vague handwavey ones are exempt is that it's hard to figure out what exactly they'd tile the universe with. For any brain state you'd choose to maximize, there is a being who has the maximum amount of that brain state, and the correct thing to do is to repeat that being as much as you can.
One solution is to be a preference utilitarian and to say that it is ethical to create beings with any set of preferences, no matter how unsatisfied their preferences are likely to be. This is subtly different from saying that it's okay to create any being: it would not be okay to create a child with Tay-Sachs, because children with Tay-Sachs have the same desires as anyone else; they are just also in tremendous pain, which makes it harder for them to satisfy their preferences, most notably the preference not to be in tremendous pain. However, it is perfectly ethical to create people who want to fall in love and are capable of doing so, even though they will probably want their love to be returned and be heartbroken when (as is inevitable) this doesn't happen.
I have thought of a few perverse edge cases created by this solution. First, it would be ethical to create various beings whose situations we perceive as exploitative (e.g. cows that want to be eaten, house elves). Second, it would be ethical to create beings that want to die and then kill them immediately after birth, or children with a hypothetical Masochistic Tay-Sachs condition that makes them want to suffer horribly and die young. Honestly, I am inclined to accept both of those.
More thorny for me is the idea that it would be ethical to create beings with unsatisfiable preferences. You could create a being whose burning all-consuming passion is that light travel at 168,000 miles per second and pi be four and doom it to a life of dissatisfaction, and this formulation of preference utilitarianism wouldn’t say shit. (Interesting question for atheists: is the God-shaped hole in the human heart an example of this sort of preference? If so, would it be ethical to eliminate it?)
My friend dataandphilosophy has proposed a different solution: valuing diversity. He argues that very few people have an objection to the existence of rats on heroin or beings that want the speed of light to stay where it is or whatever; our moral intuitions merely object when the entire universe consists of them. Therefore, he argues, we ought to value diversity of minds in addition to satisfiability of utility functions.
I have a couple problems with this one, too. First, it leads to really weird outcomes if the universe is infinite and therefore everyone exists an infinite number of times and therefore no one is morally relevant. Second, it suggests that if you are a hyperintelligent computer and you run identical simulations of the same person a million billion times, changing nothing, then that is exactly morally equivalent to running it once. To be honest, my intuitions don’t give me a good answer about that one, perhaps because the situation is so unfamiliar. Third, it has very weird implications about identical twins and people with a lot of family members.
So I don’t have a fully satisfying anti-tiling position, but I think it’s an interesting problem. Anyone else have thoughts?
wireheadwannabe said:
I'm incredibly surprised that you chose to bite the bullet on the masochistic Tay-Sachs example. Creating agents who you know will have net suffering seems like a fairly black-and-white example of cruelty unless it's somehow instrumental for the greater good.
Let’s take this further. How do you feel about creating agents who desire to suffer for eternity? Like, no masochistic pleasure involved, just genuine misery. Is that wrong?
heelbearcub said:
“Let’s take this further. How do you feel about creating agents who desire to suffer for eternity? Like, no masochistic pleasure involved, just genuine misery.”
Isn't that statement self-contradictory? What is your definition of desire?
MugaSofer said:
They might feel guilty and think they deserve to be punished by being miserable, to pick a human-level example.
wfenza said:
If we’re talking about humans, our brains have different functions for wanting something vs. liking it. It’s very possible for us to feel compelled to make ourselves miserable.
heelbearcub said:
@MugaSofer:
But in that case they desire penance. The punishment is a means to the ultimate end
@wfenza:
Feeling compelled and desiring something are two different things. I might feel compelled to gamble, but I desire winning, or (more realistically) I desire the endorphin rush I get out of putting money at risk, but I don’t desire the crash that comes with losing.
I can't think of a case where the negative side effects of the compulsion are what are actually desired.
MugaSofer said:
>Feeling compelled and desiring something are two different things.
They are both, however, preferences.
>But in that case they desire penance.
Says who?
I've known people who believed Bad People inherently deserved to be miserable for their crimes, and I've known people who believed they were Bad People and therefore deserved to be miserable. Why can't those two beliefs coexist in the same person? I'd be staggered if they don't coexist in many actual people.
LiraelDianne said:
There are most definitely a lot of Christians who believe they are Bad People and should suffer Eternal Punishment, except God/Jesus saves them from this fate by suffering in their place.
Ghatanathoah said:
>”Creating agents who you know will have net suffering seems like a fairly black and white example of cruelty unless it’s somehow instrumental for the greater good.”
I think that in Ozy's example the masochistic pleasure and satisfaction outweigh the pain.
But I do think it would be good to modify the principle to "It is ethical to create creatures with any set of preferences, even if those preferences are less satisfied than those of other possible creatures, so long as their preferences aren't so unsatisfied that they would rationally consider their life to not be worth living."
ozymandias said:
Teeeeechnically I'm a really weird form of hedonic utilitarian who uses preference utilitarianism as a heuristic, but sure, if some being's eudaimonia consisted of suffering a lot and then dying, I'm not sure I would have a problem with it.
stargirlprincess said:
“First, it leads to really weird outcomes if the universe is infinite and therefore everyone exists an infinite number of times and therefore no one is morally relevant. Second, it suggests that if you are a hyperintelligent computer and you run identical simulations of the same person a million billion times, changing nothing, then that is exactly morally equivalent to running it once”
If you value diversity you can still value other things. These problems only seem to exist if diversity dominates your other values*.
*However many ethical theories do run into the problem where one value has to become dominant.
Patrick said:
I've never found these arguments particularly worrying. Utilitarianism is an algorithm for dealing with life as it exists. I don't feel bad that it generates screwy results when you input data outside the algorithm's intended range. Asking utilitarianism to be our guide to sci-fi scenarios about the creation of as-yet nonexistent life seems to me to be like demanding that zero be a better denominator. Non-existent people don't have interests or preferences, so an interest- or preference-based system can't be expected to work on them. The application of utilitarianism to the wirehead scenario should be in terms of how it affects extant people, not hypothetical legions.
Summer said:
The problem with this view is that with selective abortion we really can choose (very roughly) what kind of person a person will be, and that kind of choice will only expand should we continue to study the human genome (which I think there is a moral imperative to do).
Hedonic Treader said:
I have no worries about tiling problems because we will never have a consensus for any one thing to tile the universe with, and if someone could perform such a power grab in real life, the outcome is a gamble that does not depend on our opinion.
So a hedonium shockwave scenario is ethically best but not attainable in practice. It may be more realistic to get the world to spend a small fraction of future GDP on creating something some orders of magnitude less efficient than hedonium, but I’m not sure even that will go through. People have a tendency to become angry over disagreement and then screw up the potential on purpose.
Realistically I’d say we’re lucky if we can prevent the worst mass torture on a cosmic scale and have some leisure ems in the system.
greenergrassgrowing said:
My version of preference utilitarianism only respects the preferences of beings that already exist. Which is to say, regardless of the *expected* preferences of not-yet-existing beings, their preference weight is 0 until they actually come into being. That maps well onto our refusal to create heroin rats. We don't want them to exist, and who cares what non-existent heroin rats want?
ozymandias said:
That leads to the unusual result that if parents wanted to have a child who suffers unbearable pain constantly, it would be ethical to create this child, and then as soon as they existed it would be ethical to kill them.
Vamair said:
A version of preference utilitarianism I like considers both the preferences of future beings and the preferences of people already dead. It's not so counterintuitive once I've thought about it for a while. But when you try to design the future, there are a lot of possible creatures that strongly prefer to exist, and their relative moral relevance in the end is probably the result of the preferences of the people of the past. This way we rule out tiling the universe with ever-satisfied beings. Yeah, the justification seems flimsy and I wouldn't like to bet our future on this, so it all needs work. And diversity is one of our values, but not the only one, as it's possible but not ethical to turn the Universe into a diverse Hell. I'd bite the bullet on your "thousands of identical simulations" question, though, as I believe them all to be a single person. Which also raises some interesting questions.
greenergrassgrowing said:
I want to say that the collective preferences of everyone else in the universe overrule the hypothetical parents, but I *really* don’t want morality to be decided by popular vote.
I’m not sure there even is a solution for this. Pretty much anything that rules out intentionally creating people who suffer unbearably as compared to us, also rules out creating *us* as opposed to something that doesn’t suffer at all. Like a god. Or a rock.
Patrick said:
Only if you bound the scope of your consideration of preferences to the parents and child.
ozymandias said:
Patrick: Feel free to sub “the parents” with “51% of the population.”
Patrick said:
Ok, this is going to get rapidly out of the scope of what I can do on mobile.
But in short, thinking about hypothetical beings isn’t hard, but we suck at it.
John and Jane Doe are considering having another child. John suggests that since the child will likely have a good life, it would be immoral not to do so. John has screwed up: he is weighing the hypothetical happiness of an imaginary person against the non-hypothetical non-existence of an imaginary person.
John and Jane are going to have another child. Jane is considering whether to take mutagenic drugs that will deform the fetus. She reasons that since the fetus doesn't exist yet, its preferences don't matter. Jane has screwed up. She is discounting the preferences of a not-yet-existent person even though those preferences will come into existence.
In the first scenario the choice is between a world with a person with preferences, and a world without. The existence of the preferences of the hypothetical person can't guide us.
The second scenario represents a choice between a world where a person comes into being with certain preferences under certain conditions, and a world where they come into existence under other conditions. Their current non-existence isn't relevant; their future existence is presumed in both scenarios.
Utilitarianism can guide us in both scenarios as long as we remember what choices we’re making and weigh utility within them.
ozymandias said:
Mm. I’m not sure it’s as simple as you claim.
First, it seems to me that Mutagenic Drug Child and Non-Mutagenic Drug Child are different people. They have different DNA! So there are two separate problems: one of bringing Mutagenic Drug Child into existence vs. not doing so, and one of bringing Non-Mutagenic Drug Child into existence vs. not doing so. Their existence is mutually exclusive, but (say) if I abort a fetus and get pregnant again the same day, those are clearly two different people, even though the existence of the fetus I aborted and that of the fetus I am currently pregnant with are mutually exclusive.
Second, your argument comes to the unusual conclusion that it is perfectly ethically fine for Jane to take mutagenic drugs as long as she solemnly promises that if she didn’t take mutagenic drugs she would abort the fetus.
Patrick said:
You're continuing to focus on ethics via maximization of variables instead of maximization of variables as a guide to decision making. In the scenario you've offered, Jane isn't making a choice. If we insert a choice (say, getting pregnant on the mutagenic drug while promising to abort any un-mutated fetuses, versus just not doing that), the problem evaporates.
Utilitarianism doesn’t ethically code things. It advises how to choose between things, and suggests what we should care about when making that choice. If you only have or are only willing to consider one option, utilitarianism has nothing to say to you.
AlexanderRM said:
I know I’m a few months late on this, but I just want to chime in- my personal *basis* for preference utilitarianism is something more along the lines of the Golden Rule, or the idea that I’m not morally more important than other people, and preference utilitarianism or preference rawlsianism (not of the maximin variety, the actual calculated risk variety) is my best current heuristic for evaluating how to do that.
I have a feeling a lot of the people in these sorts of discussions probably have similar ideas (although obviously the typical mind fallacy is influencing me here), so I feel like it might be useful to take a step back to that with this question.
…I’m not really sure what conclusion to draw from that, though. The only useful thing to ask here is “if I were this specific potential person, would I want to be created?”. I think this does help with the “creating a person who wants to die” situations, at least.
Personally though, I don’t feel very disturbed at the idea of not having existed, so I tend towards seeing death as far worse than not creating someone- but there are definitely potential people who would disagree with me on that.
Obviously I have no clue how to weigh different potential people against one another; the only way I can imagine doing that- besides the diversity criterion- would result in tiling the universe with… I would say people who would be really, really disturbed at the idea that they might have *not* existed, which I don’t really feel is ideal.
My personal feeling up to this point has been (originally derived from Greg Egan works, which are what introduced me to both transhumanism and post-scarcity economics) just that we should solve preferences of people who currently exist, and then allow them to bring other people into existence within certain guidelines to avoid negative-value lives… but I’m not sure what guidelines those should be and have generally been coming to feel that private parentage is mostly grandfathered-in to my morality.
Jacob Schmidt said:
Mine don't. My self-preservation objects very strongly, my sense of dignity even more strongly, but my moral intuition is basically like "A universe with the greatest possible pleasure density? Well, that's obviously the most optimal universe."
wfenza said:
That’s pretty much where I am. I have yet to see a coherent argument against this other than “moral intuitions” or equally faith-based reasoning.
heelbearcub said:
I think one issue with contemplating tiling is that it assumes that nothing is extant before tiling. And it assumes a flat and uniform distribution of energy in the universe. Neither of these is true of our universe.
If you simply contemplate tiling the only known biosphere, you can easily see that the amount of destruction required is immense and unfathomable. And the imperfect distribution of resources means that the tiles will necessarily have different utility outcomes. It’s not clear to me that the cost of tiling the existing biosphere (in utility) can be made up for by the utility gains of the tiles.
And then you need to also calculate the probabilistic value of utility loss in the event that you fail to successfully tile (for example, what are the odds you miss someone/thing when bulldozing the existing world, and this something results in a total loss of the tiles.)
Finally, tiling would not possibly maximize the typical existing human utility function, so it is no surprise that it seems intuitively incorrect.
Ghatanathoah said:
>”More thorny for me is the idea that it would be ethical to create beings with unsatisfiable preferences. You could create a being whose burning all-consuming passion is that light travel at 168,000 miles per second and pi be four and doom it to a life of dissatisfaction”
There's a fairly simple hack to fix this. Just add a lower limit to the principle, so it is okay to create creatures whose preferences are unsatisfiable, as long as enough of their preferences are satisfied that they consider their life worth living. Creatures with preferences so unsatisfiable that their lives would not be worth living, and they regret ever being born, are bad to create. But a creature with some unsatisfiable preferences, but that would also have enough satisfied preferences that it overall leads a good life, would be okay.
Overall, I like this principle. It matches a lot of my moral intuitions. For instance, in the traditional form of the Non-Identity Problem, I have fairly typical moral intuitions. But when you modify it so that one potential child's preferences are less satisfied because they are more ambitious than the others, rather than because the child is disabled, it doesn't seem obvious to me at all that it would be better to have one child than the other. I might prefer the more ambitious child, all other things being equal, as long as their life was worth living.
Another modification that might avoid some troublesome edge cases is to stipulate that the preferences have to be “human-like” within certain parameters, although diversity within those parameters may be desirable. In particular, I would consider the creation of what Nick Bostrom calls “Non-Eudaemonic Agents” to be a bad thing.
rageofthedogstar said:
“Unfortunately, preference utilitarianism leads one inevitably to the conclusion that the morally right action is to create beings whose preferences are as satisfied as possible.”
Is this actually true? If my goal is to satisfy as many people’s preferences as possible, this seems like it only counts for people and preferences that actually exist. I consider myself a preference utilitarian, but I don’t feel particularly compelled to create more beings out of a moral imperative to maximize the number of preferences I satisfy.
In practice, people do want to create more beings, and so this doesn’t really get us out of the dilemma. I just don’t think the conclusion inevitably follows from endorsing preference utilitarianism.
intangirble said:
Heh, just saw this after I posted. But yes!
People do want to create more beings. But for most people, this seems to cap at or around a single-digit level. I don’t personally know anyone who actually wants to tile the universe with beings (i.e. this makes them happy as an end goal, aside from feeling that it’s their moral duty to do so).
intangirble said:
Here from Tumblr, not a utilitarian, so might be asking really 101-level questions here, but – I’ve read a fair bit of LW, SSC and similar blogs, and I remain baffled by the idea that tiling is a good thing.
There seems to be this idea in utilitarianism that maximising pleasure per cubic inch of the universe is the goal – that it's more moral to pack as much pleasure into the space we have available as possible. I don't get this, for probably the same reason I don't get dust specks vs. torture.
As far as I can tell, pleasure is experienced on an individual level, not as an aggregate. Nobody experiences the sum total of pleasure in the universe, and increasing that sum total in terms of “raw numbers of beings who are happy” doesn’t add anything positive to each individual experience, thus it doesn’t add anything at all because individual experiences are all we have. To any one being, the difference between whether ten million or ten trillion other beings exist to experience maximum pleasure is surely only a tiny blip on its utility radar if it matters at all, and it’s the same for those other ten million/trillion beings. Having more beings experiencing maximum pleasure doesn’t really “increase” anything, because there is no objective “total amount of pleasure in the universe” variable that matters to anyone, unless you posit a deity who oversees all beings and cares about their pleasure in aggregate.
Surely what you want to optimise, for the practical purposes of “beings in the universe are happy and don’t suffer” (and if utilitarianism’s goal is anything more abstract than “trying to make actual living/future to-be-created beings happy and non-suffering”, then I don’t think it has much applicability to real human morality in the here and now and is more of a paperclip maximizer), is the happiness of those beings who exist or will come to exist. The number of them is completely irrelevant, except in the sense that I suspect that part of the maximum-happiness equation is coming up with the optimal number of beings to exist in a given space. I’d hazard a guess that this is more than 1, if only for redundancy’s sake, and significantly less than tiling the universe.
tl;dr my position leads me to think that 250 perfectly-happy beings is just as good as 250 billion perfectly-happy beings, and may be more so. The only reasons to want more are diversity of ideas (which presumably caps out at a certain level – could we find it?) and redundancy (i.e. the species doesn’t go extinct), and I’d imagine the latter wouldn’t be a problem in the type of scenario we’re talking about.
ozymandias said:
What you've reinvented is called average utilitarianism: instead of trying to maximize total happiness (happiness per person times the number of people), you're trying to maximize average happiness (total happiness divided by the number of people).
Average utilitarianism also gives a counterintuitive result: that if you have two hundred and fifty trillion happy people, and twenty-five people who have exactly identical lives to the two hundred and fifty trillion except also they got to see a mildly amusing cat video, the situation with the twenty-five people is better.
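A minimal sketch of that arithmetic, assuming made-up utility numbers (100 utilons per happy life and an arbitrary 0.001-utilon bump for the cat video), shows how the two aggregation rules come apart:

```python
# Toy comparison of total vs. average utilitarianism.
# All numbers are hypothetical; only the direction of the comparison matters.

def total_utility(n_people, utility_per_person):
    return n_people * utility_per_person

def average_utility(n_people, utility_per_person):
    # Everyone in each world is identical, so the average is just the per-person value.
    return utility_per_person

happy = 100.0        # assumed utility of one happy life
cat_bonus = 0.001    # assumed tiny bump from the mildly amusing cat video

big_world = (250 * 10**12, happy)        # 250 trillion identical happy people
small_world = (25, happy + cat_bonus)    # 25 people with the same lives plus the video

print(total_utility(*big_world) > total_utility(*small_world))      # True: total favors the big world
print(average_utility(*small_world) > average_utility(*big_world))  # True: average favors the 25
```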
intangirble said:
Good to know there’s a name for it!
Honestly, if we’re paring it down to minor fluctuations in utility beyond what’s necessary to feel that one’s life is generally enjoyable (i.e. I think a minor gain in utility that pushes a person’s life from unbearable into tolerable, or tolerable into pleasant, is “worth” a lot more than another utilon on top of an already ecstatic life), I would choose the twenty-five people and the cat video. I wouldn’t strongly favour it, but I would favour it, providing all other things really were equal.
In practice, I suspect that having only twenty-five living beings in the world would negatively impact utility in some way, so it wouldn’t pan out. But if everything else was truly equal, bring on the cat video.
sullyj3 said:
I feel like that’s only unintuitive if you’re already coming from a framework where greater net utility is desirable, like preference or hedonistic utilitarianism. The only problem I have with the 25 people is the lack of diversity.
tailcalled said:
“I think a minor gain in utility that pushes a person’s life from unbearable into tolerable, or tolerable into pleasant, is “worth” a lot more than another utilon on top of an already ecstatic life”
This is related to risk-averseness.
Imagine a simplified world where people are (kinda) money-maximizers. If their utility function is u(money) = money, you get the case where moving money from the rich to the poor has no effect. If their utility function is u(money) = log money, you get the case where moving money from the rich to the poor is extremely efficient (and giving just a single dollar to someone with no money yields an infinite utility gain).
However, this is not the only effect; in the first case, people are risk neutral: they don't care whether they get a 50% chance of 2 dollars or are just given a single dollar directly.
It seems somewhat elegant to me to implement the helping-poor-people via ordinary risk-averseness, but YMMV.
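A quick numerical sketch of the two utility functions above, with arbitrary dollar amounts and the gamble shifted to start from $1 of existing wealth (an assumption, so the log stays defined):

```python
import math

def u_linear(money):
    return money

def u_log(money):
    return math.log(money)

# Risk attitude: a guaranteed extra $1 vs. a 50% chance of an extra $2,
# starting from $1 already in hand (an assumption, so log(0) never appears).
print(u_linear(1 + 1), 0.5 * u_linear(1 + 2) + 0.5 * u_linear(1))  # 2.0 vs 2.0: risk neutral
print(u_log(1 + 1), 0.5 * u_log(1 + 2) + 0.5 * u_log(1))           # ~0.69 vs ~0.55: the sure thing wins

# Redistribution: moving $10 from someone with $1000 to someone with $10.
rich_loss = u_log(1000) - u_log(990)   # ~0.01
poor_gain = u_log(20) - u_log(10)      # ~0.69
print(poor_gain > rich_loss)           # True: the transfer raises total log-utility
```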
blacktrance said:
Tiling the universe with rats on heroin seems counterintuitive, but it’s the conclusion of hedonic utilitarianism, so if hedonic utilitarianism is correct, then this is the right thing to do even though it seems counterintuitive. But if to begin with we accept utilitarianism because it’s intuitive, and it turns out to produce counterintuitive conclusions, maybe there’s no way to reconcile them (there’s no guarantee that two intuitions can be accommodated), and that’s a mark against a theory accepted on intuitive grounds. There doesn’t have to be a satisfactory conclusion.
blacktrance said:
But within the framework of utilitarianism, one way to avoid counterintuitive tiling conclusions is to adopt average rather than total utilitarianism. It’s likely that average utility would be higher if the universe just had a few larger brains experiencing great pleasure, and that seems more intuitively correct to me.
Belobog said:
As I understand it, preference utilitarianism says you should satisfy people’s preferences, whatever they happen to be. It seems to me that once you step up a meta level and add a requirement that people should have the “right” preferences, whether to satisfy a concern for diversity or any other reason, you’re no longer a utilitarian at all; you’ve just reinvented virtue ethics. I find virtue ethics somewhat attractive, so I don’t think this is necessarily a bad thing. It just shifts the burden to defining and defending what makes a preference “right” to have.
ozymandias said:
Dataandphilosophy’s claim is that we should have terms for *both* preference satisfaction *and* diversity of minds in our utility function. Different things.
Marcel Müller said:
Not sure about this one, but perhaps the coherent answer is to embrace tiling / wireheading. We don't have much experience with true wireheading, since drug addicts suffer severe consequences to their health and ability to compete in our economy, which makes them miserable after a short while. Also, the effect of the drug wears off with repeated use.
My intuition says that if we had experience with heroin use without these side effects most (reasonable) people would want to repeat the experience.
Psycicle said:
Well… I don’t think preference utilitarianism naturally leads to the dead-end of “tile the universe with beings who prefer that the speed of light stay where it is”. Ultimately, it does end up with tiling the universe with beings whose preferences are as satisfied as possible.
However, there are multiple stable states, each corresponding to a fulfillable set of preferences that are satisfied.
The preference of “experience a wide variety of nice brainstates, and not by direct neural hacking, but by interacting with some sort of external world (a simulated upload world counts) and other people” seems pretty satisfiable to me, and I’d be pretty thrilled if something like that got tiled across the cosmos.
Also, I wouldn’t say it’s ethical to create a being with any set of fulfillable preferences, like masochistic Tay-Sachs. Existing beings are the ones that get to determine what sort of beings should be created. Possible beings don’t get that vote (and I’m not sure about past beings), so if the creation of a new creature doesn’t fulfill the preferences of existing beings, it shouldn’t be done. I (and many others) don’t prefer that masochistic Tay-Sachs exists, so the creation of someone like that in the first place actually goes against preference utilitarianism.
So, the “repugnant conclusion vs kill the unhappy people” total vs average problem can be solved by just not giving a shit about potential people, and only bringing one into existence when it makes the already existing beings better off. You can have a preference ordering that goes “X never exists > X exists > X is brought into existence and then killed”
Also, preference utilitarianism, when paired with “potential people don’t get a moral vote” and the complexity of value, leads to the fulfillment of the complex preferences of people (no moral obligation to turn into speed-of-light-valuers), lots of ethically squicky mind designs/preference sets just not being brought into existence in the first place (because it would not fulfill the preferences of the existing beings), and tiling with something that actually seems pretty worthwhile.
Vadim Kosoy said:
My moral philosophy is that ethical = what person X prefers. I simply don’t know any other relevant definition of “ethical”. Obviously it depends on choice of person X. This is not a bug, it’s just the way it is.
Human values are complex (http://wiki.lesswrong.com/wiki/Complexity_of_value). Relatively simple descriptions such as hedonic utilitarianism can only be crude approximations at best. We care about many things such as love, friendship, sex, humor, art etc. (no, we don’t only care about them because they cause pleasure: we prefer these things to having dopamine injections into our brains). Preference utilitarianism also captures just one aspect of what we care about: the satisfaction of other beings (at least if they are sufficiently humanlike).
> My friend dataandphilosophy has proposed a different solution: valuing diversity.
I completely agree that it is a component of our preferences.
> …First, it leads to really weird outcomes if the universe is infinite and therefore everyone exists an infinite number of times and therefore no one is morally relevant. Second, it suggests that if you are a hyperintelligent computer and you run identical simulations of the same person a million billion times, changing nothing, then that is exactly morally equivalent to running it once.
These problems go away if you do the math right. Your utility is a sum over all possible universes of an expression containing time and space discount. Some of these universes are parts of other universes (see some details at http://lesswrong.com/lw/lo7/anatomy_of_multiversal_utility_functions_tegmark/). You can think of the result as a series of terms corresponding to more and more non-local contributions. I think that an antiferromagnet in an external magnetic field can be considered a simple analogy: if you only had the external field (corresponding to the more local terms such as “pleasure”) then all the spins would align with the field (“tiling”) but because of the interaction term (“diversity”), the spins select different orientations.
All of this skips over another deficiency of utilitarianism, namely that utilitarianism gives equal weight to everyone whereas people usually consider their loved ones to be more important. If you take game-theoretic cooperation into account the asymmetry becomes weaker, but it doesn't disappear.
petealexharris said:
I think an infinite universe full of infinite copies of every variation of everybody can still leave every individual morally relevant *within their own light-cone*. This is not the usual meaning of moral relativism, but whatever. Everything else has to obey the speed of light limit, so ethics can too.
I’m not too bothered about ethical questions about things that aren’t actually possible. Yes they test the boundaries of our intuitions. But we already know our intuitions are hugely irrational. So the idea of making as many wireheading rats as possible, as an abstract ethical goal, just makes me go “huh, so what, we aren’t gonna.”
Even if the rats wanted us to, I notice it doesn’t get me anything. From this I deduce that abstract ethical goals are bullshit. You have to get there from here, and nowhere along the path from here to there does anyone benefit as much from setting up the opium-poppy-and-rat farms as they would from following their own bliss. Same applies to all tiling problems: any agent that isn’t one of the tiles will benefit more from dismantling them and enjoying some barbecued heroin-spiced rat for supper.
wfenza said:
I think the reason that rationalists think about this sort of thing all the time is that they are anticipating the need to program an AI in the not-too-distant future, and it’s important to define its utility function in a way that doesn’t produce horrible results.
petealexharris said:
I know, but still, I think we're screwed if we try and jump ahead to solving that kind of problem in a big-picture way while we have shown nearly no competence at solving it even on a small scale. Corporations and governments are already slightly-larger-than-human intelligences with inhuman goals, and we suck at getting them to not kill us 😦
Lambert said:
we’re even screweder if we make AI without having tried to solve ethical problems at all.
Vadim Kosoy said:
> More thorny for me is the idea that it would be ethical to create beings with unsatisfiable preferences.
Depends on what you mean by "preferences". If you just mean "utility function" then an "unsatisfiable preference" is just a meaningless constant term which doesn't affect anything. If the unsatisfiable preference is associated with an experience of suffering, then this suffering carries negative utility (for the "observer").
> Interesting question for atheists: is the God-shaped hole in the human heart an example of this sort of preference?
Are you sure it is unsatisfiable? What if we build a god? 🙂 It wouldn’t literally be omnipotent, but I’m not sure it’s a requirement.
John said:
Use a theory of personal identity that treats identical minds as the same "person" for the purposes of aggregating utility. Thus a universe tiled with pleasure brains isn't a million different brains for aggregation purposes, but the same one a million times.
[I have a more complicated information theory based consequentialism that this post is too small to contain]
Autolykos said:
I think the easiest way to solve tiling problems would be not counting exact copies (it actually *is* the same individual rat; a bazillion of them are exactly as morally relevant as a single one). It would also require discounting the moral value of slightly imperfect copies enough that their total value converges as their number goes towards infinity. In that case, it may be moral to, e.g., cover the entire surface of the Moon with (different) wireheaded rats, and after that start making other life forms with more complex preferences happy.
This approach has enough neat features (I've never before seen the value of natural diversity in environmentalism deduced from axioms) that I'll probably just adopt it for now.
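A small sketch of that convergence condition, assuming (arbitrarily) that the k-th near-copy of a mind contributes its value discounted by a geometric factor:

```python
def value_of_copies(n_copies, discount=0.5, first_value=1.0):
    # The k-th near-copy contributes first_value * discount**k; exact duplicates
    # would correspond to discount = 0 (only the first instance counts at all).
    return sum(first_value * discount**k for k in range(n_copies))

for n in (1, 10, 100, 1000):
    print(n, value_of_copies(n))
# The totals approach first_value / (1 - discount) = 2.0 instead of growing without
# bound, so a bazillion near-identical wireheaded rats can never outweigh everything else.
```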
Corwin said:
http://lesswrong.com/lw/l3/thou_art_godshatter/
“The blind idiot god’s single monomaniacal goal splintered into a thousand shards of desire. And this is well, I think, though I’m a human who says so. Or else what would we do with the future? What would we do with the billion galaxies in the night sky? Fill them with maximally efficient replicators? Should our descendants deliberately obsess about maximizing their inclusive genetic fitness, regarding all else only as a means to that end?”
It's the SAME THING for EVERY value. Maximizing one at the expense of all others would be flatly terrible no matter which it would be. Maybe a better utilitarianism would consist of reducing disutility and helping augment utility for all agents everywhere, but not MAXIMIZING something at the expense of everything else, or MINIMIZING something to zero no matter what else would be thrown away in the process.
Look : “The blind idiot god’s single monomaniacal goal splintered into a thousand shards of desire. And this is well, I think, though I’m a human who says so. Or else what would we do with the future? What would we do with the billion galaxies in the night sky? Fill them with maximally efficient [monomaniacal agents obsessing over ONE VALUE]? Should our descendants deliberately obsess about maximizing their [ONE VALUE], regarding all else only as a means to that end?”
That’s EXACTLY the problem with Clippy! Why are utilitarians even thinking about monomaniacal maximization? Don’t maximize, augment; don’t minimize, reduce. I don’t think the endgame should, or will, be “the universe tiled with maximizers”.
ozymandias said:
Yes, but I flatter myself that it is at least slightly novel to point out that “doing what people want” also has this problem.
The Smoke said:
The title of this post got me excited since I at first expected recreational math problems instead of pointless discussions of personal preferences of utility functions.
I would be disappointed if the speed of light weren't so incredibly awesome exactly the way it is.
Alex said:
I _really_ like the diversity approach to addressing tiling (I read it here and it went click). I like how it connects to Scott Alexander’s “Answer to Job” (http://slatestarcodex.com/2015/03/15/answer-to-job/) and to Scott Aaronson’s “The Ghost in the Quantum Turing Machine” (http://arxiv.org/abs/1306.0159).
If I assume that (relative) location (possibly in time) is relevant to experience, but that two identical experiences at two different locations are very similar and so valued less than two distinct but equally positive experiences, I also like how it captures diminishing returns on tiling without going full-on average utilitarian.
There's a weird inconsistency in my intuitions about tiling, though: if I imagine a universe tiled with happy rats, I view adding another happy rat as having a small positive value; if, however, I imagine a hellish universe tiled with rats suffering unbearably, I view adding a new suffering rat as a large negative. I don't have this intuition about suffering human simulations.
tailcalled said:
“Second, it suggests that if you are a hyperintelligent computer and you run identical simulations of the same person a million billion times, changing nothing, then that is exactly morally equivalent to running it once.”
I tend to assume that running the same person a million billion times is equivalent to running them once. There are lots of ways to justify this:
If Omega comes up to you and offers to make a copy of the entire universe for $10,000 (assuming you believe the universe is a good thing and that it is worth more than $10,000), would you do it?
Suppose it is discovered that because of an evolutionary artifact, men’s brains compute everything doubly. Would men then be twice as important as women?
Generally, I assume that the more similar two agents are, the less they count, morally speaking. This leads directly to valuing neurodiversity among positive-utility people.
It still has the problem with an infinite universe, but here we have to pick up some bigger cannons: timeless decision theory. We shouldn't decide not to do something because someone somewhere else is also doing it, because if we decide not to do it then we acausally change their decision so they don't do it either.
Cliff Pervocracy said:
I actually don’t have a problem with the heroin rats thing. If the rats are as happy as can be, and no one is around to feel sad that classical music and oil painting no longer exist… seems fine to me.
Sure, I tend to think that I’d rather have varied experiences and relationships than be a rat on heroin, but that’s just because I can’t really appreciate how damn happy that rat would be. I only have my ordinary experiences of happiness and unhappiness to go off of, and I enjoy learning and creativity and challenge, but that’s not because those are the best things in the world. It’s because I’m not lucky enough to be an idealized heroin rat.
Joe said:
Luckily my ethics don't run into any of these problems… I just believe that *my* preferences should be implemented 😉 And my preferences include helping other people achieve their preferences, most of the time, at least. I make exceptions for having babies with Tay-Sachs brought into existence, for instance. So there 😉
Joe said:
(I also have a preference for building a planet out of orgasmium because it seems like a cool idea)
lilietb said:
The heroin rat test has been retried with the rats actually having companionship, a playground, and generally other options to amuse themselves with than heroin. Turned out they didn't like heroin that much after all!
Which brings me to the logical fallacy behind the problem y’all are postulating here. It starts with the assumption that someone put you in charge of the world, and you are expected to be competent enough to instantly bring it to the best possible state.
Here’s the thing: nobody did, and you are not.
Putting a person through maximum possible uncomfortable and terrifying situations does not, in fact, make them stronger. It does not make them more able to deal with the world around them. Giving little kids horrible ethical dilemmas and insisting they present a definite solution does not, in fact, make them better at ethics.
You don’t have to make a decision for the hypothetical future when you are put in charge of everything right now. Your ethics is good enough without accounting for that exclusively theoretical possibility. Just because it can be projected into ridiculous extremes that make it conflict with common sense, doesn’t mean it should be.
Honestly, the idea that one person (you, me, anyone) is put in charge of managing the world as a whole with apparently godlike powers conflicts with my ethics a lot more than any of the ’tiling solutions’ you present here.
And the heroin rat experiment I started with is a great illustration why. We are all flailing blindly here, trying our best to account for others’ preferences and happiness without being telepaths. Which we are not. Which is why we shouldn’t be put in charge of world as a whole with godlike powers. Which is okay.
lilietb said:
And if you extend it to programming an AI, that changes exactly nothing. Nobody put that AI in charge either, and programming it to consider itself in charge is ethically wrong.
Pingback: Why consequentialists should read Aristotle – Some Turns of Thought
Pingback: Perfecting the motion – Natália Mendonça
Autism Candles said:
Reblogged this on Autism Candles.