
Thing of Things

~ The gradual supplanting of the natural by the just


Tag Archives: utilitarianism it works bitches

Hogwarts House Primaries

Monday, 20 Nov 2017

Posted by ozymandias in rationality

≈ 26 Comments

Tags: ozy blog post, sortinghatchats, utilitarianism it works bitches

[Edited after publication for clarity.]

I used to get into a lot of really frustrating arguments about normative ethics in which the person I was talking to and I were just talking past each other. When I found the Sorting Hat Chats system for classifying personality types, I originally thought of it as just another elaborate fiction-based classification system, a guilty pleasure of mine. Sorting Hat Chats divides people into primary and secondary houses. While the secondary houses are just a new gloss on the standard Hogwarts houses, the primary houses are an inexplicably good system for classifying people’s opinions about normative ethics and scrupulosity. Now, instead of going “okay, but that thing is wrong, why are you still arguing about it,” I go “ah, Ravenclaw primary” and move on. So I thought I would write up my understanding of the system somewhere more permanent than Tumblr.

(Note: Sorting Hat Chats house primaries only vaguely resemble the Hogwarts houses they are named after, and it is best to put aside your preconceptions about the houses when considering this system. Similarly, no knowledge of Harry Potter is required to understand the system. While this is my personal understanding of the system, I do not claim to know what the creators of the Sorting Hat Chats system intended and may very well be misunderstanding their original intent. I think it is fairly unlikely that this system covers literally all possible orientations towards normative ethics, but as-is it is useful enough that I think it’s worth sharing.)

Ravenclaw Primary. There is an infallible single-question test for identifying Ravenclaw primaries: “is normative ethics boring and/or completely disconnected from any actual moral reasoning you do in your everyday life?” If your answer is “yes”, you are not a Ravenclaw primary. If your answer is “no”, welcome to Team Ravenclaw.

Ravenclaw primaries believe that you should figure out what the right thing to do is through logic and reason. They often have a particular fondness for moral philosophy and ethical thought experiments. Ravenclaw primaries are particularly likely to identify as utilitarians, Kantians, and virtue ethicists. Other sorts of primaries only rarely identify as these categories unless they have to regularly talk to Ravenclaw primaries. For some reason, Ravenclaw primaries have a particular attraction to Catholicism and Judaism; I suspect I would know a lot of Ravenclaw primary Muslims if I knew more Muslims.

Please note that “Ravenclaw primary” is not the same thing as “moral realist.” Many Ravenclaw primaries are not moral realists, although they have a distinct tendency to fall into the “error theorist in metaethics class, utilitarian in normative ethics class” bucket. A Ravenclaw moral non-realist can be recognized by (a) the fact that they really really care about the difference between error theory and noncognitivism and (b) their insistence on trying to come up with some ethical system from first principles anyway.

It is commonly assumed that all effective altruists are Ravenclaw primaries. This is not actually true, although we do have a lot of them.

Gryffindor Primary. Like Ravenclaw primaries, Gryffindor primaries care about principles. Unlike Ravenclaw primaries, Gryffindor primaries tend to follow their hearts and their intuitive sense of what goodness is; they don’t view moral intuitions as raw material for a systematized moral system, but as justifications in themselves.

It is common for Gryffindor primaries to pursue certain values, such as justice or kindness or freedom or the flourishing of others or their family or their own happiness. It is also common for Gryffindor primaries to feel a strong intuitive sense that one should follow certain rules, such as letting everyone speak freely or avoiding blasphemy or being loyal to your friends.

Gryffindor primaries are perfectly capable of systematizing; a Gryffindor primary who intuitively values the greatest good for the greatest number will probably use a lot of numbers to figure out what the greatest good for the greatest number is. However, when it comes right down to it, when asked to justify their beliefs, a Gryffindor primary will go “because it’s WRONG”. When pressed, they will say “because it JUST IS. It’s OBVIOUS.” Occasionally they will engage in circular reasoning like “you should pursue beautiful things because they are beautiful!”

Not all Gryffindor primaries have an ethical system that involves caring about people. Oscar Wilde and Patti Smith are both excellent examples of Gryffindor primaries who are devoted to beauty and art.

I am a Gryffindor primary.

Hufflepuff Primary. Unlike Ravenclaws and Gryffindors, Hufflepuffs care about people. They believe in the inherent worth and dignity of individuals, and want to engage in moral behavior because they have empathy for others. (A Ravenclaw, on the other hand, would start wondering how you could measure worth and dignity, and a Gryffindor might decide they’re pursuing the principle of INHERENT WORTH AND DIGNITY FOR ALL HUMANKIND! without ever really caring about individual humans.)

Hufflepuffs are perhaps best modeled with the idea of circles of concern. Some Hufflepuffs have very small circles: perhaps they care about their family, or their friends, or themselves, or anyone who happens to be personally suffering in front of them at this moment. Some have larger circles: they care about a community, or people who have suffered the same thing they have suffered, or people who also practice their religion, or their country. Some Hufflepuff primaries’ circles embrace all of humankind, or all sentient beings, or ecosystems.

It is common for Hufflepuff primaries to have multiple circles and to care more about people in the inner circles than people in the outer circles. A Hufflepuff primary of my acquaintance occasionally comments that they care equally about their spouse and the entire continent of Africa.

In my experience, effective altruist Hufflepuff primaries often have a formative experience that gives them empathy for animals or for people in the developing world: for example, they may have visited a developing country, gone to a museum exhibit that included a display of the amount of rice a person in the developing world eats in a day, or watched a Mercy for Animals factory farm video.

Slytherin Primary. Of the house primaries, Slytherins are the most likely to be parsed as amoral. The Slytherin primary cares about individuals: they almost always care about themselves; they may also care about their friends, partners, coworkers, or family. (Interestingly, some Slytherin primaries generalize this and agree that everyone else should care about their families too, sometimes promoting this principle at some cost to themselves; my father, a Slytherin primary, threatened to quit his job if one of his employees was fired for missing work because his child was in the hospital.)

A rough guideline for distinguishing Slytherin primaries from Hufflepuff primaries is that Hufflepuff primaries naturally care about groups (“my family”) while Slytherin primaries naturally care about individuals (“my dad, my mom, my sister, my husband, my child”). Hufflepuff primaries also tend to be more other-centered (“I care about you because you’re suffering”), while Slytherin primaries tend to be more self-centered (“I care about you because you are one of the six people I have chosen to care about”).

Most Slytherin primaries are not particularly altruistic. They sometimes engage in activism or charity donation if it’s in their own interest or the interest of the individuals they care about: for example, a trans Slytherin primary may advocate for trans rights; a Slytherin primary whose partner died of cancer may raise money to fight cancer. A small number of Slytherin primaries may take up altruism for reasons expressed eloquently in the following quote from Terry Pratchett’s Wee Free Men:

All witches are selfish, the Queen had said. But Tiffany’s Third Thoughts said: Then turn selfishness into a weapon! Make all things yours! Make other lives and dreams and hopes yours! Protect them! Save them! Bring them into the sheepfold! Walk the gale for them! Keep away the wolf! My dreams! My brother! My family! My land! My world! How dare you try to take these things, because they are mine!

It can sometimes be hard to determine someone’s primary. A Ravenclaw primary may behave similarly to a Hufflepuff primary if they’ve been reasoned into it; a Gryffindor primary who believes in the principle of helping people close to you may be difficult to tell apart from a Slytherin. But in my experience, if you question why someone believes what they believe thoroughly, you can almost always classify them into a house primary.

Why is this useful? First, you will avoid frustrating arguments because you are aware that other primaries differ from you. Slytherin primaries can recognize that altruism is psychologically important to other people and, while they don’t have to understand it, they do have to accept it. Ravenclaw primaries can avoid patiently repeating thought-experiment-based arguments to people who respond with “huh, that’s confusing” and then keep doing what they were going to do anyway. Gryffindor primaries can stop having arguments that end in “YOU SHOULDN’T DO WRONG THINGS BECAUSE THEY ARE WRONG, WHY IS THIS SO HARD TO UNDERSTAND.” Hufflepuff primaries will stop explaining that, you see, these people are people and they suffer and you should have empathy for them.

It can also help you strategize about how to convince someone to adopt your values. In my experience, philosophical arguments tend to only move Ravenclaw primaries. Gryffindor primaries respond best to Secular-Solstice-style attempts to make doing the right thing seem grand and beautiful. Hufflepuff primaries respond best to things that trigger empathy, such as Give Directly Live. Slytherin primaries… look, if you can appeal to their self-interest, do, but most of the time you’d be better off locating a Ravenclaw and leaving the Slytherin to do their own thing.

I also think the primaries have very different kinds of scrupulosity, and tactics that work to address one primary’s scrupulosity issues are incoherent or useless with another primary. Sorting Hat Chats calls scrupulosity issues a “burned primary.” I’ve noticed miscommunication particularly with Gryffindor and Ravenclaw primaries, perhaps because they’re the only ones I’ve seen burning around me.

Burned Ravenclaws lose faith in their ability to find the truth at all. They may be troubled by moral nihilism, the inability to understand everything that’s going on in the world, or the fact that any action has many unknowable consequences and you’ll never be able to know for sure if you did the consequentially right thing. I’d add that Ravenclaws often have guilt issues if they adopt a moral system they can’t live up to; that’s relatively treatable through persuading the Ravenclaw to adopt a more livable system.

Burned Gryffindors are no longer able to trust their own internal compass to point them to what’s right. The burned Gryffindor sometimes develops a coping mechanism, such as relying on a person or a system to tell them what’s right; this can allow them to function, but often leaves them feeling depressed and soulless, and does not fail gracefully if the person or system abuses their power. Some worry that every moral claim anyone makes is actually correct, and spend hours worrying that perhaps they’re doing great evil by watching a movie that at least one person on the Internet disapproves of. In my experience as a recovering burned Gryffindor, the solution is not to come up with less demanding rules, or to force yourself by sheer will to stop listening to random people’s moral claims; instead, it is to get in touch with your own felt sense of morality, whatever that is, and fiercely defend your ability to make moral decisions for no other reason than that it is right.

I have not personally encountered a burned Hufflepuff or Slytherin. (The Slytherins do need to cut it out with the “have you considered becoming a Slytherin?” approach to scrupulosity issues though.) According to Sorting Hat Chats, a burned Slytherin feels it is too dangerous to have loved ones or value anyone but themselves, while a burned Hufflepuff aches to be allowed to have a community and care about more people but for whatever reason feels this is not possible. I’m interested in burned and formerly burned Slytherin/Hufflepuff opinions on how correct that is, as well as burned and formerly burned Ravenclaws and Gryffindors who want to add new experiences to my analysis.

Distinctions Between Natalism Positions

Thursday, 19 Oct 2017

Posted by ozymandias in effective altruism, utilitarianism

≈ 34 Comments

Tags: ozy blog post, parenting, utilitarianism it works bitches

I have noticed that several distinct positions tend to be collapsed into two positions, “pro-natalism” and “anti-natalism”. I think discussions about natalism would work better if people made more distinctions.

When I searched for information on the topic, I found that people had previously drawn a distinction between global and local anti-natalism: local anti-natalism holds that at least some people shouldn’t reproduce, while global anti-natalism holds that no one should reproduce. I don’t find this a very satisfactory division; for one thing, I’m not sure there is anyone who isn’t a local anti-natalist by this taxonomy.

So here are my proposed replacements:

Very strong anti-natalism. It is morally wrong to have children. The human race should slowly go extinct. For example: Human beings cause irreversible harm to the biosphere, which is intrinsically valuable. It is possible to harm a person by creating them but not to benefit them (nonexistent people are not harmed by being deprived of good things), so bringing people into existence is always a great harm to them.

Strong anti-natalism. In general, people should not have children; there are a very few exceptions. For example: Most human lives, even in the developed world, are not worth living and unless you have a strong reason to suspect your child’s life would be worth living, it is wrong to have a child.

Weak anti-natalism. People should err on the side of having children less than they currently do. For example: Raising children is a waste of resources that are better spent improving the lives of already existing people. Most people don’t enjoy interacting with children, and having children tends to worsen marriages and make people more stressed and unhappy (please note that while the first few chapters of this book are anti-natalist, overall the book comes to a pro-natalist conclusion). It would be easier to solve environmental problems if there were fewer humans.

Natalism neutrality. It is difficult to draw general conclusions about whether people should have children. Some people should err on the side of having more children than they currently do, while other people should err on the side of having fewer. For example: many traits are genetic, and only people with desirable traits should have children. Many people who would be good parents have few or no children, while many people who are really crappy parents have children anyway.

Weak pro-natalism. People should err on the side of having children more than they currently do. For example: the effort of parenting is upfront while the good parts are later in life, and many people parent in a way that makes them stressed and unhappy and thus have an inaccurate idea of how pleasant parenting can be. We need more people to support our aging population; the more people there are, the fewer taxes people have to pay to provide public goods such as scientific research, weather forecasting, and military defense (which do not increase in cost when the population increases), and all else equal the lower the per capita national debt.

Strong pro-natalism. In general, people should have children; it is morally wrong not to do so. For example: most people’s lives are happy, and creating happy people is a great good, one of the greatest benefits you can provide a person. The purpose of human life from an evolutionary perspective is to reproduce, and people should obey their evolutionary imperatives.

Very strong pro-natalism. It is morally wrong not to have children (except perhaps in a handful of extreme cases). For example: Some Quiverfull belief systems. A philosophy in which prospective people with net-positive lives are harmed by not being created and therefore we should create as many of them as possible. Some variants of total utilitarianism.

I am personally natalism-neutral, although weakly pro-natalist for people sufficiently similar to me and strongly anti-natalist for farmed animals. (I do not think there is sufficient evidence to be anything but agnostic on wild-animal natalism.)

Deontologist Envy

Saturday, 23 Sep 2017

Posted by ozymandias in feminism, meta sj

≈ 35 Comments

Tags: my issues with anti sj let me show you them, not like other ideologies, ozy blog post, utilitarianism it works bitches

Many consequentialists of my acquaintance appear to suffer from a tragic case of deontologist envy.

In consequentialism, one makes ethical decisions by choosing the actions that have the best consequences, whether that means maximizing your own happiness and flourishing (consequentialist ethical egoism), increasing pleasure and decreasing pain (hedonic utilitarianism), satisfying the most people’s preferences (preference utilitarianism) or increasing the number of pre-defined Good Things in the world (objective list consequentialism). Of course, it’s impossible to figure out all the consequences of your actions in advance, so many people follow particular sets of rules which they believe maximize utility overall; this is sometimes called “rule consequentialism” or “rule utilitarianism.”

In deontology, one makes ethical decisions by choosing the actions that follow some particular rule. For example, one might perform only the actions one could will that everyone perform, or actions that involve treating other people as ends rather than means, or actions that don’t violate the rights of other beings, or actions that don’t involve initiating aggression, or actions that are not sins according to the teachings of the Catholic Church. Deontologists are allowed to care about whether outcomes are better or worse (some deontologists I know call this their “axiology”), but only within the constraints of the rule system.

In spite of my sympathies for virtue ethics, I do think it is generally better to make decisions based on whether the outcomes are good as opposed to decisions based on whether they follow a particular set of rules or are the decisions a person with particular virtues would make. (I continue to find it weird that these are the Only Three Options For Decision-Making About Ethics, So Says Philosophy, but anyway.) So do most people I know.

I have some consequentialist beliefs about free speech. For instance, I support making fun of people who say sexist or racist things in public. I think it is fine to call someone a bigoted asshole if they are, in fact, saying bigoted asshole things. I appreciate Charles Murray refusing to speak at an event Milo Yiannopoulos is at because he is “a despicable asshole,” and I wish more people would follow his example. And when I express my consequentialist beliefs about free speech, a surprising number of my consequentialist friends respond with “but what if your political opponents did that?”

I did not realize we are all Kantians now.

I think there are three things that people sometimes mean by “but what if everyone did that?” The first is simple empathy: if it hurts you to be shamed, then you should consider the possibility that it hurts other people to be shamed too, no differently from how you are hurt. I agree that this is an important argument, and we could all stand to be a little bit more aware that people we disagree with are people with feelings. But even deontologists agree sometimes it’s necessary to hurt one person for the greater good: for example, even if you are very lonely and it hurts you not to get to talk to people, you don’t get to force people to interact with you against their will. So I don’t think that the mere fact that it hurts people implies that (say) public shaming should be off-limits.

The second is a rather touching faith in the ability of people’s virtuous behavior to influence their political opponents.

Now, if it happened that my actions had any influence whatsoever over the behavior of r/TumblrInAction, that would be great. Since I don’t screenshot random tumblr users and mock them in front of an audience of over three hundred thousand people, the entire subreddit would have to close down, which would be a great benefit to humanity. While we’re at it, there are many other ways the people who read r/TumblrInAction could follow my illustrious example. For instance, they could be tolerant of teenagers with dumb political beliefs, remembering how stupid their own teenage political beliefs were. They could stop making fun of deitykin, otherwise known as “psychotic people with delusions of grandeur,” because jesus fucking christ it is horrible to mock a mentally ill person for showing mental illness symptoms. They could stop with the “I identify as an attack helicopter” jokes; I mean, I don’t have any ethical argument against those jokes, it’s just that there is exactly one of them that was ever funny.

In general people rarely have their behavior influenced by their political enemies. Trans people take pains to use the correct pronouns; people who are overly concerned about trans women in bathrooms still misgender them. Anti-racists avoid the use of slurs; a distressing number of people who believe in human biodiversity appear to be incapable of constructing a sentence without one. Social justice people are conscientious about trigger warnings; we are subjected to many tedious articles about how mentally ill people should be in therapy instead of burdening the rest of the world with our existence.

Therefore, I suspect that if supporters of social justice universally became conscientious about representing their opponents’ views fairly, defaulting to kindness and using cruelty only as a last resort when it is necessary to reduce overall harm, and not getting people fired from their jobs, it would not have any effect on how often opponents of social justice represent opponents’ views fairly, behave kindly, and condemn campaigns to fire people. In fact, they might end up doing so more enthusiastically, because suddenly kindness and charity and not getting people fired are Social Justice Things, and you don’t want to support Social Justice Things, do you?

(I’m making this argument with the social justice side as the good side, but it works equally well for literally any two sides in the relevant positions.)

Third, there’s an argument I personally find very compelling. Nearly everyone who does wrong things, even evil things, thinks that they’re on the side of good. Therefore, the fact that you think you’re on the side of good doesn’t mean you actually are. (The traditional example is Nazis, but I think Stalinism is probably better, because in my experience most people agree that your average rank-and-file Stalinist supported an ideology that killed millions of people because they had a good goal but were horribly mistaken about how to bring it about.) So it’s important to take steps to reduce the harm of your actions if you’re actually doing evil.

Like I said, I find this argument compelling. But you can’t get an entire ethical system out of trying to avoid being a Stalinist. Lots of generally neutral or even good things are evil if a Stalinist happens to be doing them, such as trying to convince people of your point of view or going to political rallies or donating to causes you think will do the most good in the world. If you were a Stalinist, the maximally good action you could take, short of not being a Stalinist anymore, would be sitting on the couch watching Star Trek reruns. This moral system has some virtues (depressed people the world over can defend their actions by saying “well, actually, I’m one of the best people in the world by Not-Having-Even-The-Slightest-Chance-Of-Being-A-Stalinist-ianism”), but I think it is unsatisfying for most people.

(I can tell someone is about to say “you can donate to the Against Malaria Foundation, there’s no possible way that could be evil!” and honestly that just seems like a failure of imagination.)

That’s not to say that trying to avoid being a Stalinist should have no effect on your ethical system at all. Perhaps most important is never, ever, ever engaging in deliberate self-deception. Of almost equal importance is not hiding inconvenient facts. If you know damn well the Holodomor is happening, do not write a bunch of articles denouncing everyone who says the Holodomor is happening as a reactionary who hates poor people. On a less dramatic level, if there’s a study that doesn’t say what you want it to say, mention it anyway; if you can massage the evidence into saying something that it doesn’t really say, don’t; take care to mention the downsides and upsides of proposed policies as best you can. These are the most important, because they directly undermine the ability of truth to defeat falsehood.

And there are some things that I think it’s worth putting on the list of things you shouldn’t do even if you have a really really good reason, because it is far more likely that you are mistaken than that this is actually right this time. Violence against people who aren’t being violent against others, outside of war (and no rules-lawyering about how being mean is violence, either). Being a dick to people who are really weird but not hurting anyone (and no rules-lawyering about indirect harm to the social fabric, either). Firing people for reasons unrelated to their ability to perform their jobs. I’ve added “not listening to your kid and respecting their point of view when they try to tell you something important about themselves, even if you disagree,” but that’s a personal thing related to my own crappy relationship with my parents.

But that’s not a complete ethical system. At some point you have to do things. And that means, yes, that there’s a possibility you will do something wrong. Maybe you will be a participant in an ongoing moral catastrophe; maybe you will make the situation worse in a way you wouldn’t have if you had sat on your ass and watched Netflix. On the other hand, if you don’t do anything at all, you get to be the person sitting idly by while ongoing moral catastrophes happen, and those people don’t exactly get a good reputation in the history textbooks either. (“The only thing necessary for the triumph of evil is for good men to do nothing,” as the saying commonly attributed to Edmund Burke goes.)

The virtue of consequentialism is that it pays attention to consequences. It is consistent for me to say “feminist activism is good, because it has good consequences, and anti-feminist activism is bad, because it has bad consequences.” (Similarly, it is consistent to say that you should lie to axe murderers and homophobic parents, but not to more prosocial individuals.) This is compatible with my believing that if I had a different set of facts I would probably be engaged in anti-gay activism, as many loving, compassionate, and intelligent people of my acquaintance are or have been in the past. Moral luck exists; it is possible to do evil without meaning to. There would be worse consequences if everyone adopted the policy of never doing anything that might possibly be wrong.

There is a common criticism of consequentialism where people say “well if torture had good consequences then you’d support torture! CHECKMATE CONSEQUENTIALISTS.” Of course, in the real world torture always has bad consequences, which is why consequentialists oppose it. If stabbing people in the gut didn’t cause them pain or kill them, and in fact gave them sixteen orgasms and a chocolate cake, then stabbing people would be a good thing, but it is not irrelevant to consequentialism that stabbing does not do this.

Some people seem to want to be able to do consequentialism without ever making reference to a consequence. If you just find enough levels of meta and use the categorical imperative enough, then maybe you will be able to do consequentialism without all that scary “evidence” and “facts” stuff, and without the possibility that you could be mistaken. This seems like a perverse desire, and in my opinion is best dealt with by no longer envying deontology and instead just becoming a deontologist.

You Don’t Have To Be A Utilitarian To Be An EA

Tuesday, 13 Sep 2016

Posted by ozymandias in effective altruism, utilitarianism

≈ 16 Comments

Tags: ozy blog post, utilitarianism it works bitches

Effective altruism is a question.

The question is something along the lines of “how can I do the most good with the resources that are available to me?” Of course, that’s not precisely accurate, because that question elides certain assumptions that effective altruism makes about how you define ‘the most good’. Effective altruism does not permit religious arguments about what the Good is; effective altruism judges the goodness of an action by whether it reduces bad things or increases good things; and effective altruism does not care more about people in one’s home country, or people one is related to, than about the global poor.

And, of course, defining effective altruism as a question does not mean that all effective altruists approach effective altruism with a spirit of curiosity and non-attachment, ready to go where the winds of evidence blow them. Most humans are quite ideological. Effective altruism being a question is something that can only be approached as an ideal, not something that we can assume we’ve embodied.

But nevertheless effective altruism is, at its core, a question.

I see no reason that only utilitarians should be interested in the answer to this question.

I expect most effective altruists actually agree with me here. After all, according to the latest EA survey, 56% of EAs are utilitarians, which implies that 44% of EAs are not utilitarians. (I could probably just post that and this post would be done, but eh. I like hearing myself talk.) In my personal experience, it’s hard to spend much time as an effective altruist without noticing the many valuable contributions from people who aren’t utilitarians, some of whom may wish to out themselves in the comments. This post is primarily directed at people who are interested in effective altruism but feel reluctant to join in because they don’t agree with utilitarianism, as well as tiresome people who think that the demandingness objection to utilitarianism somehow means effective altruism is bad and terrible.

Of course, there’s a very obvious reason that effective altruists and utilitarians are conflated: the two groups are closely related. After all, most people who could be considered ‘founders’ of effective altruism are utilitarians, and the earliest person to articulate proto-effective-altruist ideas was Peter Singer, a utilitarian philosopher who wrote a famous paper arguing that it is morally required to devote all of one’s resources to helping the poor.

However, there is actually no requirement that effective altruists agree with Peter Singer about everything. Effective altruists may disagree with Peter Singer about many questions, such as “is it morally permitted to murder babies?”, “how many severely disabled people have lives worth living?”, “should we care about animals?” and “is AI a serious concern that may wipe out humanity within the next few hundred years?” I see no reason that we can’t include normative ethics on the list of things that effective altruists may be permitted to disagree with Peter Singer about.

It’s true that a lot of effective altruists argue for effective altruism from a utilitarian viewpoint. This is quite natural. A lot of effective altruists are utilitarians. And an intellectually honest utilitarian in the modern world pretty much has to be an effective altruist. But there is a distinction between “utilitarianism is commonly used to argue in favor of effective altruism” and “all effective altruists are utilitarians and effective altruism is an inherently utilitarian endeavor.” Christianity is commonly used to argue in favor of giving to charity, but that doesn’t mean that everyone who donates to charity is a Christian.

I have a personal interest in this topic. I myself can’t do universalizing morality. I don’t like it when beings suffer and I want them to suffer less, in much the same sense that I don’t like it when I wind up waiting in long lines at the Social Security Administration and I want to do that less. While I am close enough to being a utilitarian that I tend to round myself off to one, I tend to part from utilitarians when they start going on about moral obligations and drowning children and so on; I consider “I want to spend this much of my resources on altruism and no more” to be a perfectly good reason to spend that amount of resources on altruism. So I have a natural interest in the subject of being an effective altruist without fully buying into utilitarianism.

And I don’t think I’m the only one. Off the top of my head, here are some people who are not utilitarians and who might be interested in the question of effective altruism: A virtue ethicist cultivating the virtue of compassion. A deontologist doing supererogatory good deeds. An ethical egoist who knows that the warmfuzzies of truly helping someone is the best way to improve her own personal happiness. A Christian who knows that what we do unto the least of these we do unto him. A Jewish person who is performing tikkun olam. A Buddhist practicing loving-kindness. Someone who cares about fairness and doesn’t think it’s fair that they have so much when others have so little. A basically normal person who feels sad about how much suffering there is in the world and wants to help.

Even more people might be interested in bits and pieces of the effective altruist project, even if they aren’t interested in the whole thing. A purely self-interested person has an obvious reason to be concerned about existential risk; someone who cares primarily about freedom might be interested in the best ways to help animals in factory farms.

Now, some of the people I named might choke on some of the effective altruist assumptions I listed: a religious person might object to the secularism, while a virtue ethicist might feel she has a particular duty to those closest to her. Certainly, the particular assumptions that effective altruism has are probably related to it being founded by a bunch of utilitarians. It would have different assumptions if it were founded by a bunch of deontologists.

I agree that we should be wary of effective altruism changing its assumptions. If deontologists wish to have a movement about being the best deontologist you can be, they must start their own movement and not piggyback on ours. I would be concerned if more than, say, ten percent of the effective altruist movement were self-interested people who are concerned about existential risk, freedom-lovers who are worried about factory farms, people who feel they have a special duty to those close to them but who don’t care literally zero percent about Africans, and other people who are not on board with the effective altruist project as a whole. It’s important to balance the contributions from talented allies with the risk of values drift.

But I don’t think everyone who isn’t a utilitarian poses a risk of values drift. Lots of religious people are, in fact, fully capable of compartmentalizing and using secular reasoning in secular contexts. Most non-consequentialists agree that good things are better than bad things, and their non-consequentialism mostly comes up in contexts unrelated to effective altruism: after all, GiveDirectly very rarely involves either lying to Nazis or making decisions about whether a trolley should run over one person or five people.

To be clear: an effective altruist must be on board with the effective altruist project. I do not suggest outreach to people who think proximity is a morally important trait, the consequences of one’s actions are completely irrelevant, or we can find the optimal charity through clever use of the Bible Code. I just suggest that people who aren’t utilitarians can also be on board with the effective altruist project.

 

Against The Drowning Child Argument

05 Monday Sep 2016

Posted by ozymandias in utilitarianism

≈ 17 Comments

Tags

ozy blog post, utilitarianism it works bitches

There is an argument commonly known as the Drowning Child Argument, which goes something as follows:

Imagine that you’re walking across a shallow pond and you notice that a small child has fallen in, and is in danger of drowning…Of course, you think you must rush in to save the child. Then you remember that you’re wearing your favorite, quite expensive, pair of shoes and they’ll get ruined if you rush into the pond. Is that a reason for not saving the child? I’m sure you’ll say no it isn’t, you just can’t compare the life of a child to the cost of a pair of shoes, no matter how expensive.

The problem with this argument is that the current best estimate of the cost to save a life is about $3500. I do not think that most people own shoes that cost $3500. Even their favorite most expensive pair of shoes probably isn’t $3500; it might be a tenth of that. However, the problem with specifying the cost of the shoes in the analogy is that most people will assume the person is rich, and therefore can trivially afford to buy another pair of absurdly expensive shoes.

A more correct analogy is something like this:

Imagine we live in a fantasy world in which children are sometimes teleported away from their homes to drown in ponds. However, it is possible to go on a quest to save the children. The average person can complete one quest in about five weeks. However, some people are very skilled, and can complete a quest in a week or even a day; others are less capable, and may take months to finish the quest.

(Also some people argue that instead of focusing on the teleporting drowning children, one should help spread norms of not slaughtering orcs or work on preventing evil wizards from destroying the world.)

My intuitions for this world suggest that it is not actually mandatory to spend all of your time going on quests, unless you happen to be extraordinarily good at quests, and a person who completes one quest a year has probably discharged their duties with regards to the teleporting baby epidemic and can spend the rest of their time farming dirt or counting how many lice they have or whatever people do in medieval fantasy settings.

Moving that back into this world, the teleporting drowning child argument suggests that one should donate about ten percent of one’s income, unless one happens to be wealthy, in which case one should donate more.
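The implicit arithmetic here can be sketched in a few lines. The weekly income figure is an assumption chosen purely for illustration (it is not specified in the post); the $3500 figure is the cost-to-save-a-life estimate cited above:

```python
# Back-of-the-envelope arithmetic behind the quest analogy.
# The $700/week income is an illustrative assumption, not a claim from the post.

cost_to_save_life = 3500   # dollars, estimate cited above
weekly_income = 700        # dollars, assumed for illustration

# How long the "average person" works to fund one saved life:
weeks_per_quest = cost_to_save_life / weekly_income
print(weeks_per_quest)     # 5.0 -- "one quest in about five weeks"

# One quest per year, as a share of a 52-week working year:
share_of_income = weeks_per_quest / 52
print(round(share_of_income, 3))  # 0.096 -- roughly ten percent
```

Change the assumed income and the “five weeks” and “ten percent” figures move accordingly, which is why the analogy says more capable people finish quests faster.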

This is why I don’t trust thought experiments.

Ethicists Are Less Ethical

10 Wednesday Aug 2016

Posted by ozymandias in utilitarianism

≈ 14 Comments

Tags

ozy blog post, utilitarianism it works bitches

The research of philosopher Eric Schwitzgebel appears to show that ethicists are less ethical. Katja Grace argues that that’s exactly what you ought to expect. Since ethicists are supposed to change our understanding of ethics, we should expect ethicists to behave unethically according to our common-sense understanding of ethics. If not, why are we employing them?

However, I think her argument is flawed.

There are two minor flaws. First, ethicists tend to behave less ethically across a wide variety of different measures. While it might be true that it is morally obligatory to talk during American Philosophical Association presentations, in spite of the general consensus that people who talk during presentations are dickbags, it would be very strange if it were also equally obligatory to steal ethics books, slam doors, and leave your trash behind in conference rooms. Surely common sense morality has to be right about something. In addition, these issues are rarely addressed in ethical debate; as far as I am aware, ethicists do not generally work on the subject of whether it is morally obligatory to talk during presentations, and thus it would be very strange if they’d collectively decided that it was.

Second, Schwitzgebel also included peer ratings of the ethics of ethicists. Presumably, if ethicists were consistently behaving according to a morality that makes more sense than common-sense morality, they would be rated by their peers as more ethical, not about the same. (Unfortunately, Schwitzgebel does not include a breakdown of whether ethicists believe other ethicists are more ethical than non-ethicists do, so it is possible that non-ethicist philosophers simply haven’t gotten the memo.)

More importantly, Katja Grace’s argument depends on eliding the difference between unethical acts and ethically neutral acts. Most formulations of ethics– “everything not permitted is forbidden” utilitarianism aside– have a category for acts that ethics doesn’t care about much at all. Ethics does not have a strong opinion on whether I drink coffee, tea, milk, or nothing in the morning. Ethics research might very well say that an act believed to be unethical is actually ethical, but it might also very well say that an act believed to be ethically neutral is ethical. Indeed, several famous points of disagreement between ethicists and non-ethicists fall in the latter category– most notably charity donations and vegetarianism.

Most people see eating meat as a morally neutral action and donating large amounts of money to charity as, if not neutral, certainly not obligatory. Ethicists are more likely than the general public to believe that eating meat and not donating to charity are both wrong. Nevertheless, the evidence appears to suggest that ethicists are statistically indistinguishable from non-ethicists in their meat consumption and charity donation habits. This is frankly kind of embarrassing, because you’d think at least the Peter Singer fans would drive up the average.

In conclusion, I think it is still probably true that thinking about ethics doesn’t make you a better person.

Why I’m Skeptical Of Thought Experiments

02 Thursday Jun 2016

Posted by ozymandias in rationality

≈ 19 Comments

Tags

ozy blog post, rationality, utilitarianism it works bitches, world's worst vegan

Recently, I’ve grown extremely skeptical of thought experiments as a way of finding truth.

Consider the Chinese Room thought experiment. A man who does not speak Chinese receives a series of Chinese characters from the outside. He flips through a very big book which tells him which set of Chinese characters to send out as a response. Intuitively, the system could not be said to really understand Chinese. The conclusion is that mere symbol manipulation, such as that performed by a computer program, cannot be said to understand things.

Now consider an objection I’ve read somewhere but I tragically haven’t been able to find. [ETA: It’s made by Scott Aaronson!  Thanks to Anonymous Colin, AlexR, and embrodski.] A dictionary that could take any sentence of Chinese and come up with a coherent response would have to be huge— maybe the size of a planet, maybe larger. Even to look things up in that dictionary in a timely manner would be a huge endeavor, probably involving all manner of machines and robots and computer programs and other such things. If you imagine that, suddenly it gets a lot harder to say that the system doesn’t understand Chinese.

Or compare an objection from “Animal Rights: Legal, Philosophical, and Pragmatic Perspectives” (link goes to the book containing the essay), an essay by Richard Posner I read recently, which argued that utilitarian views on animal rights are wrong because they imply that one should help a stuck pig before a stuck human if the stuck pig is suffering more. Of course, that framing brings to mind (at least to me) the idea of both a human and a pig having a mild cut. If one instead says that the pig is suffering unimaginably brutal tortures while the human has a stubbed toe, obviously the pig should be helped first. But if you grant that– as Winston Churchill said to the beautiful woman– we already know what you are, and now we’re just haggling about the price.

My point here is not about the Chinese Room thought experiment or animal rights or Winston Churchill. Hopefully, even if you do not find my examples change your opinions on the thought experiment, you can understand how they would change it for other people. My point is that the intuitive response to thought experiments is based on small details of how they’re framed that we might not even recognize consciously– like the idea that the man is sitting in a relatively small box with a book in front of him on the desk, or that both the pig and the human are cut a little bit– and that if you change those details the intuitive response changes greatly.

I think a lot of this is because we might know the answer our intuitions got, but we don’t know how it got there. If you actually knew that the reasoning process was “both pig and human have a cut, pig seems to be in more distress, humans generally matter more than pigs, the pig isn’t in that much more distress, I will help the human”, then this would obviously not cause you to disagree with Peter Singer’s ideas about animal rights. But if all you have is “I will help the human”, you can imagine all sorts of things about how you got there– and unless you happen to think of the thought experiment that proves that your imagination is wrong, you won’t ever notice.

This means that thought experiments are not terribly reliable for establishing knowledge. You may think “yes! I have established the consistent ethical principle that pigs only matter to the extent that humans care about them!”, but in reality you have only established the principle that you care less about pigs that have been cut a little bit than about humans that have been cut a little bit. That is a good principle of great utility in veterinary triage, but not exactly the sort of thing you want to ground a philosophy of animal rights on.

For this reason, I have started to back away from using thought experiments. Whenever possible, I refer to specific details of factual situations that I am thinking about; when it is not possible, I try to keep my thought experiments narrowly tailored, and I keep an eye out for details that may cause the reader to have intuitive reactions for reasons they don’t necessarily endorse.

Sacred Values Are How Ethical Injunctions Feel From The Inside

21 Thursday Apr 2016

Posted by ozymandias in utilitarianism

≈ 29 Comments

Tags

ozy blog post, utilitarianism it works bitches

[content warning: torture used as a hypothetical without details]

Ethical injunctions are basically any rule one adopts of the form “don’t do X, even when it’s a really good idea to do X.” Don’t deceive yourself, even when it’s a really good idea to deceive yourself. Don’t torture people, even when it’s a really good idea to torture people. Don’t kill children, even when it’s a really good idea to kill children. Don’t break confidentiality, even when it’s a really good idea to break confidentiality.

A perfectly rational being, of course, would have no need for ethical injunctions. But we’re monkeys with pretensions. We’re self-interested and prone to rationalization. If we say “it’s okay to torture people in very extreme cases that are never going to happen”, then you can talk yourself into thinking that this is a very extreme case, even though the actual reason you want to torture the guy is that he’s a horrible person and you want to see him suffer, and next thing you know you’re the U.S. government.

An ethical injunction is a good thing to have in any case where it’s more likely that you’ve made a mistake about whether this thing is a good idea than that it’s an actually good idea. For instance, torture basically doesn’t work, so there’s really no practical reason to torture anyone; therefore, it’s off-limits.

The human brain implements a lot of strategies for thinking, a lot of cognitive algorithms. There are two ways these algorithms can be implemented. Sometimes, you know the algorithm and you are deliberately choosing to execute it: for instance, you might look at the problem 99 + 16 = ? and think “take one from the 16 and add it to the 99… that’s 100 and 15… the answer’s 115”, in which case you’re using an algorithm for how to do mental math.
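That mental-math algorithm can be written out explicitly. This is a minimal sketch of the rounding trick described above; the function name is made up for illustration:

```python
# A sketch of the mental-math algorithm: move just enough from one
# addend to round the other up to a multiple of ten, then add.

def add_by_rounding(a, b, base=10):
    """Add a and b by topping a up to the next multiple of `base`."""
    shift = (-a) % base        # amount needed to round a up (0 if already round)
    return (a + shift) + (b - shift)

print(add_by_rounding(99, 16))  # 115, computed as 100 + 15
```

The point is not that this is how neurons work, but that here you consciously know and choose the algorithm you are executing, unlike the intuitive algorithms discussed next.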

But not every algorithm the brain uses works that way. For instance, most people’s brains have an algorithm for choosing a partner; they probably evolved because in the past that particular algorithm maximized your inclusive genetic fitness. However, you don’t consciously think “hmmm, that person seems like a good person to maximize my inclusive genetic fitness with, following these rules I’ve figured out about how to maximize my inclusive genetic fitness.” You think “sexy!”

Thus we say: “this is what the algorithm from inclusive genetic fitness feels like from the inside.”

Morality, like choosing a partner, is often intuitive: for most people, the conscious reasoning process is subordinate to the instinctive feeling of “that’s wrong!” or “that’s hot!” So its algorithms are probably similarly unconscious.

Most people have something called sacred values— things they refuse to compromise on, no matter what. For instance, one person might hold life as a sacred value– refusing to take life even to prevent tremendous suffering. Another person might hold autonomy as a sacred value– refusing to violate another person’s bodily autonomy even to save their life or the lives of others. This is tremendously vexing to consequentialists. We are like “okay, but you have to admit that in theory it is possible that a million billion people could be saved by that person having the tiniest pinprick on their finger, and in that case would we be justified in violating their bodily autonomy?” And then that person is like “no” and in many cases accuses us of being in favor of violating people’s autonomy.

But a sacred value is how an ethical injunction feels from the inside. It doesn’t feel like a calm, level-headed endorsement of the statement “it is more likely that you made a mistake about torture being right than it is that torture is right.” It feels like torture being unthinkable. Unimaginable. Like getting morally outraged at the thought that you might torture someone.

If you think about the benefit you might get from torturing someone, then you might be tempted to torture them. So you feel repulsed at the idea of contemplating a situation in which it is beneficial. You might get angry because someone even brought it up. How dare they? Don’t they know torture is wrong?

Eudaimonia, Part Two

28 Monday Mar 2016

Posted by ozymandias in utilitarianism

≈ 14 Comments

Tags

ozy blog post, utilitarianism it works bitches

[Survey request: If you are reading this blog, you are in the target market for the LW Diaspora Survey! Please take the survey. If you dislike LW, there’s a question where you get to opine that it’s a cult and everything.]

I.

I got two big criticisms of my post on eudaimonia almost a year ago (how time flies when you have a blog post you’re idly poking at in your drafts), one of which is that I am basically a preference utilitarian, and one of which is that I am basically a virtue ethicist. I find these criticisms to be hilarious, mostly because someone should inform the virtue ethicists and the preference utilitarians that, by the transitive property, they are basically each other.

II.

My position is, in fact, influenced by virtue ethics. However, I think the subtle difference is that I’m a consequentialist. Virtue ethicists want the individual to cultivate virtue/arete; I want people to cultivate arete insofar as this increases the overall amount of arete in the world. See the classic work of moral philosophy, Serenity:

Capt. Malcolm Reynolds: Why? Do you even know why they sent you?
The Operative: It’s not my place to ask. I believe in something greater than myself. A better world. A world without sin.
Capt. Malcolm Reynolds: So me and mine gotta lay down and die… so you can live in your better world?
The Operative: I’m not going to live there. There’s no place for me there… any more than there is for you. Malcolm… I’m a monster. What I do is evil. I have no illusions about it, but it must be done.

As a utilitarian, I have to support the Operative’s general argument, although the specific better world in question is not, actually, better. (To his credit, he recognizes this by the end of the movie.) The Operative is doing evil, making himself a less virtuous person, in the service of a greater good. Conversely, I don’t think virtue ethics has a place for becoming less virtuous to increase the amount of virtue in the world; the Operative’s evil is simply evil.

The idea of doing evil to create good is a dangerous one for humans, who are– after all– rationalizing animals. It is all too easy for doing evil to simply be evil, and quite often people talk about the pressing moral dilemmas of whether they will choose to kill one to save five, ignoring that most of the time our actually existing pressing moral dilemma is whether we will bother to get off our ass and stop watching Netflix to save five. But with those caveats I do think that it is possible for me to individually become less virtuous in a way that increases the amount of virtue in the world, and thus I am not a virtue ethicist.

III.

[Content warning: I talk about physical fitness as part of eudaimonia.]
[First Disclaimer: Unfortunately, some humans are less capable of eudaimonia than other humans; very often, this is because those people are marginalized. I would like to make it very clear that having less capability to reach eudaimonia is not the same thing as being “less of a person” or having less moral worth.]
[Second Disclaimer: discussions of the good life are at high risk of being nothing but applause lights and of ignoring dark pains and dark joys. To ameliorate the former problem, I’ve made sure to choose specific examples; I’m not sure how to ameliorate the latter without having a giant shitstorm about my examples.]

I think that physical fitness is part of a eudaimoniac life for most humans. Of course, what physical fitness cashes out to is different for different people: for a yogi, it might be flexibility; for a weightlifter, being able to lift a whole lot of heavy things; for someone with a chronic illness, the ability to do a single jumping jack. If you asked me to justify this, it would probably involve a lot of references to mens sana in corpore sano and fulfilling your capabilities as best you can and the joy of physical movement.

I used to have a whole “nerds don’t exercise, that’s a jock thing!” going on. I think that in that case my preference was simply incorrect. I guess you can argue that my preferences about physical fitness were buried inside of me– I “really” wanted to exercise regularly, even though I consciously preferred not to exercise on every single meta-level– but I feel like that is stretching the definition of “want” to the breaking point. I think the actual difference between me and a lot of sophisticated preference utilitarians is how comfortable we are with the concept of preferences people do not recognize as preferences.

Prominent bioethicist Leon Kass also has some opinions about the eudaimoniac life. Take it away, Leon:

Worst of all from this point of view are those more uncivilized forms of eating, like licking an ice cream cone – a catlike activity that has been made acceptable in informal America but that still offends those who know eating in public is offensive.

I fear I may by this remark lose the sympathy of many readers, people who will condescendingly regard as quaint or even priggish the view that eating in the street is for dogs. Modern America’s rising tide of informality has already washed out many long-standing traditions — their reasons long before forgotten — that served well to regulate the boundary between public and private; and in many quarters complete shamelessness is treated as proof of genuine liberation from the allegedly arbitrary constraints of manners. To cite one small example: yawning with uncovered mouth. Not just the uneducated rustic but children of the cultural elite are now regularly seen yawning openly in public (not so much brazenly or forgetfully as indifferently and “naturally”), unaware that it is an embarrassment to human self-command to be caught in the grip of involuntary bodily movements (like sneezing, belching, and hiccuping and even the involuntary bodily display of embarrassment itself, blushing). But eating on the street — even when undertaken, say, because one is between appointments and has no other time to eat — displays in fact precisely such lack of self-control: It beckons enslavement to the belly. Hunger must be sated now; it cannot wait. Though the walking street eater still moves in the direction of his vision, he shows himself as a being led by his appetites. Lacking utensils for cutting and lifting to mouth, he will often be seen using his teeth for tearing off chewable portions, just like any animal. Eating on the run does not even allow the human way of enjoying one’s food, for it is more like simple fueling; it is hard to savor or even to know what one is eating when the main point is to hurriedly fill the belly, now running on empty. This doglike feeding, if one must engage in it, ought to be kept from public view, where, even if WE feel no shame, others are compelled to witness our shameful behavior.

I feel like from a preference utilitarian perspective one’s only response to Mr. Kass must be “okay, you don’t want to see people engage in animal-like, un-self-controlled, or otherwise undignified behavior. Unfortunately, other people’s desire to engage in that sort of behavior is much stronger than your own desire not to see it, so we can’t do much except advise you to steer clear of ice cream parlors.”

But to me Kass’s statements feel like something I can argue with. I can cite Lorde’s perspective on the erotic to argue with his disdain of the bodily. I can point out that his belief that animal-like behavior is shameful implies that sex is shameful, particularly procreative sex (humans were the only animals to invent birth control). I can point out the neuroticism of constant self-monitoring and advocate for frankness. I can say that many of the most beautiful experiences of human life are undignified, from joy to love. And I can say that base physical pleasure is important and all too often undervalued.

It might be putting it too strongly to say that there’s a fact of the matter. This debate seems to me to be similar to arguing about fiction. There is no way you can settle the argument about whether Rent is a good musical or not. But it seems facile to reduce the quality of a work of fiction to popularity. I don’t respond to “Rent is a bad musical” with “well, that’s your preference, and preferences can’t be wrong or right by definition”, I respond to it with “but what about the amazing songwriting? And the depth of characterization?” And it is possible that I can win the argument: I’ve certainly been brought around to particular authors by people pointing out all the neat things they’re doing that I missed the first time through.

IV.

Unfortunately, the existence of Leon Kass makes me ask the question: what if I am Leon Kass? What if my beliefs about physical fitness are as inaccurate as Kass’s beliefs about ice cream cones? How can I come up with moral rules that pass the Enemy Control Ray test?

This is where I get into preferences as a heuristic. I don’t think that people always do what’s right for them: after all, I did once believe that physical fitness was Just Not For Me. But I think, in general, most people do want to reach their personal eudaimonia, and they will take actions that they believe (rightly or not) will get them there. And I think most people have better information about what their eudaimonia is than other people do, because they know their feelings and desires from the inside, where other people have to go by that person’s self-report. So I think, in general, we should default to the assumption that when a person says “my eudaimonia would be maximized by having C-cup boobs,” that they are in fact reporting their preferences accurately.

The advantage of this default is that it guards against the typical mind fallacy, which people are often prone to. I believe that physical fitness is part of everyone’s eudaimonia because it’s part of mine; Leon Kass believes that not eating ice cream in public is part of everyone’s eudaimonia because it’s part of his. However, human minds are very different from each other. I see nothing wrong with eating ice cream in public, but that doesn’t mean it would be appropriate for Leon Kass to eat ice cream in public, and I should not assume that it is. He is a very different person from me! Perhaps he wishes to separate himself from his bodily nature while I wish to revel in it, and neither is worse nor better, any more than me having a gender identity is worse than someone else lacking one.

Can this default be overridden? Certainly. If the person has body dysmorphic disorder, they are probably mistaken about whether the C-cup breasts are optimal for them, and many plastic surgeons won’t operate on them because of it. But one should strive, in general, to do minimalist interventions. Sincere advice to a friend is better than coercion; nudges from the government are better than banning something outright. In that way, we minimize the harm in the case that we are mistaken about what other people’s eudaimonia truly is.

Utilitarianism With Game Metaphors

05 Friday Feb 2016

Posted by ozymandias in utilitarianism

≈ 17 Comments

Tags

ozy blog post, utilitarianism it works bitches

Imagine hedonic, preference, and eudaimoniac utilitarians as three players of an MMO who all decide that they want to improve the game for everyone playing.

A hedonic MMO utilitarian would decide that the best way to improve the game experience is to get everyone to the highest level, with all the best equipment and maxed out skills and attributes.

A preference MMO utilitarian would decide that the best way to improve the game experience is to satisfy all of the players’ preferences about the game, from graphics to metaplot, paying more attention to strong desires and desires that are about desires.

A eudaimoniac MMO utilitarian would decide that the best way to improve the game is to make it the most fun for everyone.

All three of the MMO utilitarians would agree on a lot (particularly if they can’t reprogram the game and are stuck acting as characters, which is similar to the position humans are actually in). Most of the time, if you want to make the game more fun for players, you should do what the players want, and if you want to do what the players want, you should make the game more fun. And even the actions that make someone higher level are often correlated with the actions that make the game more enjoyable or that satisfy player preferences.

However, the MMO utilitarians would also disagree on a lot. The hedonic MMO utilitarian, for instance, would unleash a virus which makes all the numbers as high as possible, while the other two would not. Unfortunately, having very high numbers actually isn’t a whole lot of fun, and most people would rather play the game than be given high numbers by fiat. Indeed, the hedonic MMO utilitarian’s desired end state would be viewed as a terrible game by the vast majority of players.

On the other hand, the eudaimoniac and preference MMO utilitarians don’t necessarily agree about everything either. For instance, the preference MMO utilitarian might notice that players seem to care a lot about graphics and not much about load times. The eudaimoniac MMO utilitarian, meanwhile, might notice that realistic graphics don’t really affect how much people enjoy the game, but long load times increase the amount of time they spend staring bored at the screen, which is not fun at all. In that case, the preference MMO utilitarian would support hyperrealistic graphics that take forever to load, while the eudaimoniac utilitarian would support minimalistic graphics that load far faster. While the eudaimoniac utilitarian is willing to listen to players’ preferences (after all, players are experts on their own happiness), she is willing to override them when she has evidence that they are simply mistaken about what makes for good game design.
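If you like, you can make the disagreement concrete with a toy sketch of the three scoring rules. Everything here (the GameState fields, the numbers, the function names) is invented for illustration; the metaphor itself doesn’t specify a formal model.

```python
from dataclasses import dataclass

# Toy model: two versions of the same game, scored by three utilitarians.
# All fields and numbers are made up purely to show how the rules diverge.

@dataclass
class GameState:
    level: int              # character progression ("the numbers")
    prefers_pretty: bool    # players *say* they want hyperrealistic graphics
    pretty_graphics: bool   # does the game actually have them?
    load_seconds: int       # time spent staring bored at a loading screen

def hedonic_score(s: GameState) -> int:
    # "Make the numbers as high as possible."
    return s.level

def preference_score(s: GameState) -> int:
    # Satisfy stated preferences: players say graphics matter, load times don't.
    return 10 if s.pretty_graphics == s.prefers_pretty else 0

def eudaimonic_score(s: GameState) -> int:
    # Fun: graphics barely matter, but boredom during loading is very real.
    return 10 - s.load_seconds

pretty = GameState(level=50, prefers_pretty=True, pretty_graphics=True, load_seconds=8)
minimal = GameState(level=50, prefers_pretty=True, pretty_graphics=False, load_seconds=1)

# The hedonic utilitarian is indifferent (same numbers either way);
# the preference utilitarian picks the pretty-but-slow game;
# the eudaimoniac utilitarian picks the minimal-but-fast one.
assert hedonic_score(pretty) == hedonic_score(minimal)
assert preference_score(pretty) > preference_score(minimal)
assert eudaimonic_score(minimal) > eudaimonic_score(pretty)
```

The point of the sketch is just that three perfectly coherent scoring rules can rank the same two game states differently, even though they agree on most ordinary cases.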

(Long load times here are a metaphor for commutes. Commutes are evil.)

 
