One very common critique of hedonic utilitarianism is the wireheading objection. If you try to fill the universe with beings experiencing as much pleasure as possible, then the perfect world would consist of nothing but rats (a larger or more intelligent animal would use up resources better spent on new morally relevant beings) with a steady drip of heroin into their systems, and the infrastructure necessary to keep them alive and drugged. (If you don’t happen to think animals are morally relevant, feel free to replace “rats” with “humans”.) This seems, to put it lightly, counterintuitive.

That seems to be one of the major reasons that my friends have adopted preference utilitarianism instead. Unfortunately, preference utilitarianism leads one inevitably to the conclusion that the morally right action is to create beings whose preferences are as satisfied as possible. That suggests that the perfect world would consist entirely of immortal beings who don’t want anything except that the speed of light continue to be 186,000 miles per second.

That seems even worse than the hedonic utilitarian’s perfect world.

I think tiling problems are a problem for any utilitarianism that maximizes something that isn’t absurdly vague and handwavey, and the only reason the vague handwavey ones are exempt is that it’s hard to figure out what exactly they’d support tiling the universe with. For any brain state you’d choose to maximize, there is a being who has the maximum amount of that brain state, and the correct thing to do is to repeat that being as much as you can.
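To make the shape of the problem concrete, here is a toy sketch (the mind types, utility numbers, and resource costs are all invented for illustration): if total value is just a sum over beings and each being has a resource cost, a maximizer always spends the whole budget on copies of whichever mind scores best per unit of resources.

```python
# Toy model of the tiling argument: if total utility is just the sum of
# per-being utilities and each being costs resources, the only mind worth
# making is whichever one scores best per unit of resources.
# The minds and numbers below are invented purely for illustration.

MINDS = {
    "heroin_rat":        {"utility": 10, "cost": 1},
    "happy_human":       {"utility": 50, "cost": 20},
    "light_speed_lover": {"utility": 30, "cost": 5},
}

def best_population(minds, resources):
    # Pick the mind with the highest utility per unit cost, then spend the
    # entire budget on copies of it; a purely additive maximizer tiles.
    best = max(minds, key=lambda m: minds[m]["utility"] / minds[m]["cost"])
    return best, resources // minds[best]["cost"]

print(best_population(MINDS, resources=1000))
# ('heroin_rat', 1000)
```

Swap in any other numbers you like and the answer is still a monoculture of whatever comes out on top.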

One solution is to be a preference utilitarian and to say that it is ethical to create beings with any set of preferences, no matter how unsatisfied their preferences are likely to be. This is subtly different from saying that it’s okay to create any being: it would not be okay to create a child with Tay-Sachs, because children with Tay-Sachs have the same desires as anyone else; they are just also in tremendous pain, which makes it harder for them to satisfy their preferences, most notably the preference not to be in tremendous pain. However, it is perfectly ethical to create people who want to fall in love and are capable of doing so, even though they will probably want their love to be returned and be heartbroken when (as is inevitable) this doesn’t happen.

I have thought of a few perverse edge cases created by this solution. First, it would be ethical to create various beings whose situations we perceive as exploitative (e.g. cows that want to be eaten, house elves). Second, it would be ethical to create beings that want to die and then kill them immediately after birth, or children with a hypothetical Masochistic Tay-Sachs condition that makes them want to suffer horribly and die young. Honestly, I am inclined to accept both of those.

More thorny for me is the idea that it would be ethical to create beings with unsatisfiable preferences. You could create a being whose burning, all-consuming passion is that light travel at 168,000 miles per second and pi be four, dooming it to a life of dissatisfaction, and this formulation of preference utilitarianism wouldn’t say shit. (Interesting question for atheists: is the God-shaped hole in the human heart an example of this sort of preference? If so, would it be ethical to eliminate it?)

My friend dataandphilosophy has proposed a different solution: valuing diversity. He argues that very few people have an objection to the existence of rats on heroin or beings that want the speed of light to stay where it is or whatever; our moral intuitions merely object when the entire universe consists of them. Therefore, he argues, we ought to value diversity of minds in addition to satisfiability of utility functions.
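Here is a rough sketch of the proposal as I understand it (the utilities and the diversity weight are invented numbers, not anything he actually endorsed): total value is the sum of preference satisfaction plus a bonus for each distinct kind of mind, so a mixed population can beat a monoculture even when the monoculture has more raw satisfaction.

```python
# Rough sketch of the diversity proposal: total value is preference
# satisfaction plus a bonus per distinct kind of mind. The utilities and
# the diversity weight are invented numbers, purely for illustration.

UTILITY = {"heroin_rat": 10, "happy_human": 50, "light_speed_lover": 30}

def total_value(population, diversity_weight=100):
    satisfaction = sum(UTILITY[mind] for mind in population)
    diversity = len(set(population))  # duplicate copies add nothing here
    return satisfaction + diversity_weight * diversity

monoculture = ["heroin_rat"] * 3
mixed = ["heroin_rat", "happy_human", "light_speed_lover"]

print(total_value(monoculture))  # 3*10 + 100*1 = 130
print(total_value(mixed))        # (10 + 50 + 30) + 100*3 = 390
```

Notice that on this toy version a duplicate copy contributes nothing beyond the first, which is where some of the weirdness below comes from.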

I have a couple of problems with this one, too. First, it leads to really weird outcomes if the universe is infinite: everyone already exists an infinite number of times, so no individual adds anything to the diversity of minds and no one is morally relevant. Second, it suggests that if you are a hyperintelligent computer and you run identical simulations of the same person a million billion times, changing nothing, then that is exactly morally equivalent to running it once. To be honest, my intuitions don’t give me a good answer about that one, perhaps because the situation is so unfamiliar. Third, it has very weird implications about identical twins and people with a lot of family members.

So I don’t have a fully satisfying anti-tiling position, but I think it’s an interesting problem. Anyone else have thoughts?