I think I’m probably making some incorrect ethical choices.
I know a lot of thoughtful, compassionate people who have come to different conclusions than me about ethics. Many of our disagreements are based on hard judgment calls about complicated, thorny issues– things like whether animals have moral weight, or whether we can have a meaningful impact on the far future. It seems pretty unlikely to me that I’ve come to the right conclusion on every one of those complicated, thorny issues. When I look at historical figures’ opinions on issues we now consider to be settled, such as slavery, patriarchy, or homosexuality, I find that essentially no one was correct about everything.
One thing I’ve tried to do, when I notice a persistent disagreement with people I respect, is moral hedging. I look for actions that are low-cost if I’m correct about morality and have a big benefit if I’m wrong.
One issue I’ve morally hedged on is abortion. I do not think that killing a fetus is as wrong as killing an adult human. However, I use the implant as my form of contraception. The implant is highly reliable: it results in less than one pregnancy per thousand person-years of use. By switching to a more reliable form of contraception, I have made it very unlikely that I will ever need an abortion, and thus very unlikely that I will ever commit what would, on the pro-life view, be a murder.
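For the quantitatively inclined, here is a minimal sketch of the expected-value logic behind a hedge like this. Every number in it is an illustrative assumption, not a credence I would actually defend, and it simplifies by treating the hedge as eliminating the potential harm entirely:

```python
# Minimal sketch of the expected-value logic of a moral hedge.
# All numbers are illustrative assumptions, not claims about the world.

def hedge_is_worth_it(p_rival_view: float,
                      harm_if_rival_right: float,
                      cost_of_hedge: float) -> bool:
    """Return True if paying for the hedge beats going without it.

    p_rival_view:        credence that the rival moral view is correct
    harm_if_rival_right: harm done (arbitrary units) if you skip the
                         hedge and the rival view turns out to be right
    cost_of_hedge:       what the hedge costs you if your own view is right
    """
    expected_harm_without_hedge = p_rival_view * harm_if_rival_right
    return cost_of_hedge < expected_harm_without_hedge

# Example: a 5% credence that the pro-life view is right, a wrongful
# killing valued at 1,000 units, and a hedge (switching to the implant)
# costing 1 unit of inconvenience.
print(hedge_is_worth_it(p_rival_view=0.05,
                        harm_if_rival_right=1000,
                        cost_of_hedge=1))  # True: 1 < 50
```

The point of the sketch is just that a hedge can dominate even at low credences, because the cost and the potential harm are on wildly different scales.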
Another issue I’ve hedged on is AI risk. I don’t think AI superintelligence is a near-term existential risk. However, a while ago, I asked myself what would be the most valuable thing I could do, assuming that AI superintelligence is ten years away. The answer is that I should continue to be critical of AI. Most critics of AI risk are uninformed and have a hard time getting through an entire essay without typing the phrase “rapture of the nerds.” If thinkers are not challenged by intelligent criticism, they tend to get sloppy and make avoidable mistakes. For various reasons– most notably that I don’t particularly enjoy being simultaneously dogpiled by r/slatestarcodex and r/sneerclub– I’ve tended to discuss my opinions about AI risk privately, but moral hedging is one of the reasons I’m considering discussing my beliefs more publicly.
There are two primary benefits from hedging. Most obviously, I reduce the harm I cause if I’m wrong. Even if killing a fetus is as bad as killing a person, I can rest assured that I have not personally committed any murders. But I also find that hedging tends to promote cooperation among compassionate, thoughtful people. When I tell someone that I hedge about their belief system, they tend to believe that I’m taking their concerns seriously. That opens up space for discussion and makes it easier to work together on the issues we agree about.
It can be difficult to figure out how best to hedge. It’s not an accident that I have both pro-life and AI-safety-supporting friends; I could talk to them about the best strategies for hedging. I have difficulty figuring out how to hedge if I don’t have any friends who support a particular belief system. I think it would make sense to make a list of ways to hedge for various sets of beliefs. (Effective animal advocates have made some small steps in this direction, such as by encouraging people to replace chicken with beef whenever possible.)
Do you practice moral hedging? Do you have advice for how people you disagree with can morally hedge based on your own beliefs?
Kinda focusing on an unimportant detail, but I’m not sure why you would get dogpiled by /r/sneerclub for being critical of AI risk? (Incidentally, IME /r/slatestarcodex and /r/sneerclub have similar views of AI risk.)
Well, probably I would say something vaguely positive about any AI safety work over the course of the post, and that makes me the second coming of Eliezer Yudkowsky, Paul Christiano, and Dario Amodei combined.
I think that’s unfair to /r/sneerclub posters’ ability to understand what you’re saying. *looks at what /r/sneerclub recently said about you* I retract that.
That first one is interesting. Ozy talked about the undesirability of having a eugenic society eliminate all low-IQ and otherwise imperfect people. Somehow you’d think that would be agreeable to those who sneer, yet they mock something that Ozy never said (the only claim was that this specific part is not dystopian, not that the entire book isn’t).
Yeah, /r/sneerclub is pretty awful, and this isn’t even the worst thing about them, between them shaming rationalists for having taste aversions (obvious content warning for ableism, less obvious content warning for ageism) and other shit like this (to pick the most recent one).
typo: replace beef with chicken (instead of vice versa)
From an animal cruelty perspective, replacing chicken with beef is an improvement. Beef cattle are not treated particularly well, but there’s nothing more wretched than a factory-farmed chicken, and whereas one cow will provide a large number of meals, one dead chicken provides a very small number of meals.
From an environmental perspective, however, replacing beef with chicken is an improvement. Beef production is very resource-intensive, and contributes to climate change much more than chicken production.
Replacing chicken or beef with tofu is an improvement from either perspective.
Interesting! I was also wondering why beef over chicken; I guess I have only heard the environmentalist side, not the animal cruelty side.
Brock is correct. From an animal welfare perspective, you should eat less chicken. The only reasonable argument in favor of replacing beef with chicken, from an animal-welfare perspective, is that some people believe cows are smarter. But the difference is actually not that large, and certainly doesn’t outweigh the effects of beef cows’ size (the average American eats a tenth of a cow per year) and the fact that beef cows generally do not experience nearly as much torture as chickens do.
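To put the size effect in numbers, here’s a rough back-of-the-envelope sketch; the per-animal meat yields in it are loose assumptions of mine for illustration, not sourced figures:

```python
# Back-of-the-envelope: animal deaths per kilogram of meat consumed.
# The per-animal yields are rough illustrative assumptions, not data.

MEAT_YIELD_KG = {
    "beef cow": 200.0,  # assumed boneless meat per beef cow
    "chicken": 1.5,     # assumed meat per broiler chicken
}

def deaths_per_kg(animal: str) -> float:
    """Animals killed per kilogram of meat from that species."""
    return 1.0 / MEAT_YIELD_KG[animal]

for animal in MEAT_YIELD_KG:
    print(f"{animal}: {deaths_per_kg(animal):.4f} deaths/kg")

# Under these assumptions, a kilogram of chicken involves over a
# hundred times as many deaths as a kilogram of beef -- before even
# counting the difference in how the animals are treated.
```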
This is really interesting. Can you link to where this comes from? I’m curious about (a) how much meat that cashes out to (to see if I can tell how I compare to the average American) (b) how much of other animals we tend to eat. (Of possible use in figuring out animal welfare offsets, though I’m not sure when I’ll get around to that.)
Re “I know a lot of thoughtful, compassionate people who have come to different conclusions than me about ethics.”
While I see why this might make you conclude that you are probably making incorrect ethical choices, an alternate conclusion you could draw is that there is no such thing as a correct or incorrect ethical choice.
This is more in line with the scientific evidence (no one’s observed physical manifestations of ethical right and wrong) and philosophical evidence (no one’s found a set of ethical axioms that are universally endorsed by high-functioning adults from different cultures that can be used to logically derive a comprehensive ethical theory).
A more plausible explanation for ethical concepts is a contractualist one, where ethical rules are agreed-on norms that allow communities / societies to function. This both explains the importance of ethics to people and also why there is widespread disagreement about ethical rules: different cultures have different norms, and there’s continual renegotiation within and between communities about what the rules should be, just like the formal legal system is in a constant state of flux.
I find this picture empirically compelling, but it’s also a good place to be mentally: it avoids having to worry about whether you’re making an ethical mistake, since the concept of being “mistaken” is incoherent, and instead directs your attention to understanding, following, enforcing, and improving the norms of the communities you belong to, which I think is a much more productive use of energy.
The problem with contractualism is that from a contractualist framework there’s no reason to care about animals or very severely disabled people, except that people who actually matter care about them, as neither animals nor very severely disabled people are capable of upholding their part of the bargain. I do care about both animals and severely disabled people. Your particular contractualist framework is also unsatisfying, because I disagree strongly with both my community of origin and my present community about various ethical issues. My community of origin holds that it is morally wrong to give ten percent of your income to the global poor instead of helping your family; my present community holds that AI safety is the most important ethical issue of our time.
If it really bothers you, you can pretend the entire post is about moralitee, the thing Ozy has that makes them want all beings to flourish, and which is very different from morality.
@ozy
Contracts can be way smarter than dumb quid pro quos. I would argue that people actually tend to base their societal contract partly on a dumb quid pro quo and partly on “from each according to his ability, to each according to his needs.”
So they demand less quid from mentally handicapped Bob for the same level of quo than they do from smart and pretty Jack. However, if Jack provides a lot of quid, they will likely be willing to provide more quo for him than for Bob.
Regardless of whether you consider it good grounds for ethics, I would argue that it is the model that most if not nearly all people naturally use. I would also argue that many, if not most, of the changes that many see as advances in ethical behavior were primarily caused by lowering costs. So people didn’t so much become disgusted at earlier behavior per se as at its poor cost-benefit ratio. Many then rationalize this as total rejection, rather than the conditional rejection it actually is.
Of course, costs and benefits differ by individual and by community, and are also perceived differently due to cultural beliefs.
That’s a great response, and you changed my mind a little bit. I agree contractualism doesn’t do a good job explaining compassion towards others who don’t have power.
(I do think it’s net beneficial to dissociate compassion from ethical obligation, and I think even within a contractualist framework you can say “hey, this community has shitty norms that make everyone in it unhappy” without appealing to moral absolutes. But still, point taken).
When you’re talking about “incorrect ethical choices”, is that more along the lines of “choices that the 2018 version of Ozy would find unethical if given more accurate data” or “choices that a hypothetical version of Ozy that grew up in the far future would find unethical”?
I agree very much that it’s important to hedge against the first type of choice, but I’m much less sure about the second. My suspicion is that, just as someone from the past might be horrified by our current attitudes to something like homosexuality, current Ozy might be horrified by some aspects of the consensus morality in 2200. Does that mean that your current positions are mistakes?
Interestingly, the deaths of both animals and fetuses are natural occurrences, so a total rejection of both is rather problematic, as it results in such beliefs as strict anti-natalism and/or wanting to kill all predators in nature.
I personally put a lot of weight on long-term well-being of the planet as a nice ecosystem and humans especially. So that means to not only focus on the well-being of individuals, but also to put a lot of weight on the well-being of larger entities, like species, societies, etc.
One can reject murder while accepting natural death. Hence abortion debates generally turn on whether fetuses have personhood, rather than on whether this particular instance of (arguendo) person-killing is acceptable.
Moral hedging (or harm reduction) is the basis for my libertarianism. The more people I force to conform to my ethical code (either individually or as part of a group) the more likely I am to be doing something really bad to someone else. Belief in a “higher calling” has been a prerequisite of all the large-scale horrors.
Isn’t that self-contradictory? Isn’t libertarianism a moral code too?
It can be, in the same way that militant atheism can be a religion. Or it can not be one, in the same way that IDGAF agnosticism is not a religion.
If you believe that “I don’t know the best way for total strangers to live their lives, and probably neither does anyone else,” or at least “If someone claims they DO know how strangers should live their lives, the probability of their belief being true is much lower than the probability of them being either mistaken or a power-hungry liar,” then you wind up with something that might not technically be “Libertarianism” according to Cato or the Mercatus Center, but is close enough for jazz.
You are making moral claims about what you should and shouldn’t do, which is a moral system you have to hedge.
No, I’m making epistemological claims.
“you shouldn’t force people to conform to your ethical code” is a moral claim
@Fisher
One problem with libertarianism is that people naturally tend to use rather mediocre decision-making heuristics to partly or fully base their decisions on, like:
– it feels good
– others do it
– an authority tells me it’s true
Nowadays, hacking attempts are made on us through ads, certain foods, software, etc., which make our heuristics happy but which are actually bad for us.
Our upbringing and education (which is mandatory to some extent) allow us to partly move past these simple heuristics and/or learn where they don’t work, but this only goes so far and depends heavily on our natural intelligence and upbringing.
So in other words, the dumber and/or less wise people are, the more they will tend to make bad choices when given freedom. So libertarianism is ultimately a rather elitist ideology, which benefits the smart & well-educated. Thus it is not a surprise that libertarians are overwhelmingly smart & well-educated.
As freedom to make choices has increased in the West, the life expectancy gap between those with low and high socioeconomic status has been increasing as well.
Ultimately, I believe that in the absence of a way to improve decision making by large segments of the population, a decent amount of paternalism by an elite is necessary to keep these kinds of gaps in check.
@Aapje,
You are begging at least three rather large questions:
1. That there is such a thing as a “bad” decision, and that such a thing is knowable.
2. That there exist other people who can make “good” decisions for other people, and that these people can be identified with relative certainty.
3. That the effects of the people identified in (2) imposing these “good” decisions on their subjects are “better” than the alternative of letting their subjects make their “bad” decisions.
(1) is plausible. (2) strikes me as self-evidently false, especially the second half of it. (3) seems true to me on the scale of a parent and child, but unlikely to be true on larger scales — even on something the size of a classroom.
@leftrationalist
I don’t remember actually stating that moral claim. I’m not using a mandatory/forbidden binary.
@Fisher
I think that there is fairly wide agreement that certain life outcomes are bad and that people with high g are better at avoiding those negative outcomes.
It is also generally accepted that people with very low ability need to be taken care of by others, for example the severely mentally handicapped or those with extremely low IQs. Our society already thinks itself so good at determining this that some people are given the legal status of guardian over another person. Of course, that is merely one form of paternalism, where we condense a spectrum into a binary and where the paternalism is quite heavy-handed.
If you don’t disagree with guardianships, I would say that you agree with me that paternalism is sometimes warranted, and all that is left is object-level decision-making about the costs and benefits of particular paternalistic interventions, which we can then accept or reject on their merits.
I would argue that certain paternalism, like requiring everyone to have health-care insurance, is highly beneficial on the whole.
PS. Note that the paternalism I advocate is rarely single individuals choosing for other single individuals, but much more often groups of experts making policy. Furthermore, society should of course have the right to accept or reject such paternalism through democratic means.
PS 2. Note that not having this kind of paternalism can be considered class warfare, as less paternalism generally benefits high g people at the expense of low g people.
@Aapje
Again, you’re begging (2). Explicitly so in your first postscript. These “groups of experts” that you postulate and would defer to do not actually exist in any demonstrable form other than by self-definition. Rule of populations by groups of experts has a poor track record when compared against decentralized self-rule. To make it more concrete: if economics were an actual science, and if Scandinavian bankers were good at identifying expert economists, then all a command economy would need to do to be successful is hire Nobel Prize winners, and the socialist utopia would be realized. That this hasn’t actually occurred indicates that at least one of the premises is false.
The other thing is that you are treating specific ideas as generalizable when they are not, or at least have not been demonstrated to be. Paternalism in the case of an actual parent may be completely appropriate without making paternalism a universally applicable idea. Just because it is justifiable to prevent harm to one’s child by covering all the electrical outlets in the house does not mean that it is justifiable to ban all privately owned vehicles to prevent even greater harm to more people.
Um … my ethical system holds that people skeptical of AI risk as an EA cause area should maximize their production of blog posts elaborating their critique so I have interesting things to read?
Do you practice moral hedging? Not really. I respect people who have come to different moral conclusions than I have (and can help them advance their own moral principles), but I don’t deliberately change my own actions based on the likelihood that I am wrong.
Do you have advice for how people you disagree with can morally hedge based on your own beliefs?
Abortion: Like you already suggested, contraceptive use. There are people who do think contraception is wrong, but I’m not aware of anyone who thinks contraceptives are worse than abortion.
Value of unskilled workers: At the very least, you can be consistently polite to service workers. If you need to yell at somebody, ask to speak with the manager.
Global warming: Be conscious of using energy needlessly. Don’t idle in vehicles, turn off lights, stop using incandescent light bulbs unless an incandescent is specifically needed, and pay attention to energy efficiency when replacing appliances. Not only do these activities lessen one’s own carbon footprint, but most of them save money as well.
Animal welfare: Don’t throw away non-rotten food. Put it in the fridge to have later, ask for a carryout box, etc.
Religion: Instead of focusing on what someone declares their beliefs to be, ask about how their beliefs impact their lives and behavior.
Universal love: Even if you don’t get to the point where you are loving everybody, at least minimize the number of people you hate. Hating someone almost always does you more harm than it does the object of the hate. (I’m only making this claim with respect to people; I am not making a claim about other types of hate here today.)
That’s all I can think of at the moment. Sorry.
Hedges I support:
:- Be nice. Nice isn’t the same thing as good – putting a murderer in prison isn’t nice, and I can believe that sometimes the most effective way to bring about positive social change may be through nastiness (although I think most people overestimate how often that’s true).
But I think that niceness has two great virtues: 1) it’s much easier to distinguish nice from nasty than good from evil, and 2) provided you’re nice, you’re very unlikely to do much harm even if you’re morally wrong – all the great atrocities and crimes are committed by people who are not just evil but also nasty.
:- Protect the legal rights of the severely mentally disabled. I think that probably the definition of personhood that makes the most sense is in terms of mind, not species, and that the same moral principle that says that abortion is OK and that we should treat sentient aliens as people with rights if we ever meet any probably implies that we don’t have to value the lives of the severely mentally handicapped as much as those of others. But I’m not confident of that, and if I’m wrong then the moral cost is obviously extremely high, whereas the cost of treating all humans as people is extremely low.
:- Conservation, of both animals and historical objects. If you don’t know how much future generations will value something irreplaceable, keep it around if you can.
Supporting federalism seems like a good way of morally hedging on a lot of issues. You can work on turning Berkeley CA into a paradise based on your morality, while letting other communities try different things, as long as there are strong exit rights.
I don’t think federalism is necessarily low-cost– for example, protecting indigenous groups’ traditional cultures sometimes involves letting people murder disabled children, which is a painful and upsetting moral dilemma.
Fair point; it probably varies issue by issue. The Bill of Rights post-14th Amendment seems like a good system, but we should probably consider amending it a little. The US amendment process is difficult, but I’m not sure I support any moral rule being enforced by a national government over a state if it couldn’t get that much support.
I like the idea of moral hedging a lot but haven’t consciously done much of it – I don’t really even have the energy to fully live my own values, let alone someone else’s.
I guess I could be said to be doing roughly the same thing as you on abortion, though (I mean, different mechanics, but same overall bent).
Perhaps in addition to case-by-case moral hedges specific to certain moral issues, we can think of broader rules or frameworks as motivated by moral hedging – for instance, a general heuristic against irreversible changes that might not be good, or things like democracy that mostly don’t let idiosyncratic individuals make wide-ranging policy.