Tags: my issues with anti sj let me show you them, not like other ideologies, ozy blog post, utilitarianism it works bitches
Many consequentialists of my acquaintance appear to suffer from a tragic case of deontologist envy.
In consequentialism, one makes ethical decisions by choosing the actions that have the best consequences, whether that means maximizing your own happiness and flourishing (consequentialist ethical egoism), increasing pleasure and decreasing pain (hedonic utilitarianism), satisfying the most people’s preferences (preference utilitarianism) or increasing the number of pre-defined Good Things in the world (objective list consequentialism). Of course, it’s impossible to figure out all the consequences of your actions in advance, so many people follow particular sets of rules which they believe maximize utility overall; this is sometimes called “rule consequentialism” or “rule utilitarianism.”
In deontology, one makes ethical decisions by choosing the actions that follow some particular rule. For example, you might do only the actions you’d will that everyone do, or actions that involve treating other people as ends rather than means, or actions that don’t violate the rights of other beings, or actions that don’t involve initiating aggression, or actions that are not sins according to the teachings of the Catholic Church. While you’re allowed to care about whether things are better or worse (some deontologists I know call it their “axiology”), you can only care about that within the constraints of the rule system.
In spite of my sympathies for virtue ethics, I do think it is generally better to make decisions based on whether the outcomes are good, as opposed to whether they follow a particular set of rules or are what a person with particular virtues would do. (I continue to find it weird that these are the Only Three Options For Decision-Making About Ethics, So Says Philosophy, but anyway.) So do most people I know.
I have some consequentialist beliefs about free speech. For instance, I support making fun of people who say sexist or racist things in public. I think it is fine to call someone a bigoted asshole if they are, in fact, saying bigoted asshole things. I appreciate Charles Murray refusing to speak at an event Milo Yiannopoulos is at because Yiannopoulos is “a despicable asshole,” and I wish more people would follow his example. And when I express my consequentialist beliefs about free speech, a surprising number of my consequentialist friends respond with “but what if your political opponents did that?”
I did not realize we are all Kantians now.
I think there are three things that people sometimes mean by “but what if everyone did that?” The first is simple empathy: if it hurts you to be shamed, then you should consider the possibility that it hurts other people to be shamed too, just as it hurts you. I agree that this is an important argument, and we could all stand to be a little more aware that people we disagree with are people with feelings. But even deontologists agree that it’s sometimes necessary to hurt one person for the greater good: for example, even if you are very lonely and it hurts you not to get to talk to people, you don’t get to force people to interact with you against their will. So I don’t think that the mere fact that it hurts people implies that (say) public shaming should be off-limits.
The second is a rather touching faith in the ability of people’s virtuous behavior to influence their political opponents.
Now, if it happened that my actions had any influence whatsoever over the behavior of r/TumblrInAction, that would be great. I don’t screenshot random tumblr users and mock them in front of an audience of over three hundred thousand people, so the entire subreddit would follow my example and close down, which would be a great benefit to humanity. While we’re at it, there are many other ways people who read r/TumblrInAction could follow my illustrious example. For instance, they could be tolerant of teenagers with dumb political beliefs, remembering how stupid their own teenage political beliefs were. They could stop making fun of deitykin, otherwise known as “psychotic people with delusions of grandeur,” because jesus fucking christ it is horrible to mock a mentally ill person for showing mental illness symptoms. They could stop with the “I identify as an attack helicopter” jokes; I mean, I don’t have any ethical argument against those jokes, it’s just that there is exactly one of them that was ever funny.
In general people rarely have their behavior influenced by their political enemies. Trans people take pains to use the correct pronouns; people who are overly concerned about trans women in bathrooms still misgender them. Anti-racists avoid the use of slurs; a distressing number of people who believe in human biodiversity appear to be incapable of constructing a sentence without one. Social justice people are conscientious about trigger warnings; we are subjected to many tedious articles about how mentally ill people should be in therapy instead of burdening the rest of the world with our existence.
Therefore, I suspect that if supporters of social justice universally became conscientious about representing their opponents’ views fairly, defaulting to kindness and using cruelty only as a last resort when it is necessary to reduce overall harm, and not getting people fired from their jobs, it would not have any effect on how often opponents of social justice represent their opponents’ views fairly, behave kindly, and condemn campaigns to fire people. In fact, they might end up misrepresenting, shaming, and firing even more enthusiastically, because suddenly kindness and charity and not getting people fired are Social Justice Things, and you don’t want to support Social Justice Things, do you?
(I’m making this argument with the social justice side as the good side, but it works equally well for literally any two sides in the relevant positions.)
Third, there’s an argument I personally find very compelling. Nearly everyone who does wrong things, even evil things, thinks that they’re on the side of good. Therefore, the fact that you think you’re on the side of good doesn’t mean you actually are. (The traditional example is Nazis, but I think Stalinism is probably better, because in my experience most people agree that your average rank-and-file Stalinist supported an ideology that killed millions of people because they had a good goal but were horribly mistaken about how to bring it about.) So it’s important to take steps to reduce the harm of your actions in case you’re actually doing evil.
Like I said, I find this argument compelling. But you can’t get an entire ethical system out of trying to avoid being a Stalinist. Lots of generally neutral or even good things are evil if a Stalinist happens to be doing them, such as trying to convince people of your point of view or going to political rallies or donating to causes you think will do the most good in the world. If you were a Stalinist, the maximally good action you could take, short of not being a Stalinist anymore, would be sitting on the couch watching Star Trek reruns. This moral system has some virtues (depressed people the world over can defend their actions by saying “well, actually, I’m one of the best people in the world by Not-Having-Even-The-Slightest-Chance-Of-Being-A-Stalinist-ianism”), but I think it is unsatisfying for most people.
(I can tell someone is about to say “you can donate to the Against Malaria Foundation, there’s no possible way that could be evil!” and honestly that just seems like a failure of imagination.)
That’s not to say that trying to avoid being a Stalinist should have no effects on your ethical system at all. Perhaps most important is never, ever, ever engaging in deliberate self-deception. Of almost equal importance is not hiding inconvenient facts. If you know damn well the Holodomor is happening, do not write a bunch of articles denouncing everyone who says the Holodomor is happening as a reactionary who hates poor people. On a less dramatic level, if there’s a study that doesn’t say what you want it to say, mention it anyway; if you can massage the evidence into saying something that it doesn’t really say, don’t; take care to mention the downsides and upsides of proposed policies as best you can. These rules are the most important, because violating them directly harms the ability of truth to defeat falsehood.
And there are some things that I think it’s worth putting on the list of things you shouldn’t do even if you have a really really good reason, because it is far more likely that you are mistaken than that this is actually right this time. Violence against people who aren’t being violent against others, outside of war (and no rules-lawyering about how being mean is violence, either). Being a dick to people who are really weird but not hurting anyone (and no rules-lawyering about indirect harm to the social fabric, either). Firing people for reasons unrelated to their ability to perform their jobs. I’ve added “failing to listen to your kid and respect their point of view when they try to tell you something important about themselves, even if you disagree,” but that’s a personal thing related to my own crappy relationship with my parents.
But that’s not a complete ethical system. At some point you have to do things. And that means, yes, that there’s a possibility you will do something wrong. Maybe you will be a participant in an ongoing moral catastrophe; maybe you will make the situation worse in a way you wouldn’t have if you sat on your ass and watched Netflix. On the other hand, if you don’t do anything at all, you get to be the person sitting idly by while ongoing moral catastrophes happen, and those people don’t exactly get a good reputation in the history textbooks either. (“The only thing necessary for the triumph of evil is for good men to do nothing,” quoth Edmund Burke.)
The virtue of consequentialism is that it pays attention to consequences. It is consistent for me to say “feminist activism is good, because it has good consequences, and anti-feminist activism is bad, because it has bad consequences.” (Similarly, it is consistent to say that you should lie to axe murderers and homophobic parents, but not to more prosocial individuals.) This is compatible with me believing that if I had a different set of facts I would probably be engaged in anti-gay activism, as many loving, compassionate, and intelligent people of my acquaintance are or have been. Moral luck exists; it is possible to do evil without meaning to. There would be worse consequences if everyone adopted the policy of never doing anything that might possibly be wrong.
There is a common criticism of consequentialism where people say “well if torture had good consequences then you’d support torture! CHECKMATE CONSEQUENTIALISTS.” Of course, in the real world torture always has bad consequences, which is why consequentialists oppose it. If stabbing people in the gut didn’t cause them pain or kill them, and in fact gave them sixteen orgasms and a chocolate cake, then stabbing people would be a good thing, but it is not irrelevant to consequentialism that stabbing does not do this.
Some people seem to want to be able to do consequentialism without ever making reference to a consequence. If you just find enough levels of meta and use the categorical imperative enough, then maybe you will be able to do consequentialism without all that scary “evidence” and “facts” stuff, and without the possibility that you could be mistaken. This seems like a perverse desire, and in my opinion is best dealt with by no longer envying deontology and instead just becoming a deontologist.
blacktrance said:
Being virtuous is unlikely to cause your opponents to become virtuous on a large scale, but that’s just a specific case of your behavior likely not having much effect on a large scale in general. But it has an effect locally – people you interact with relatively frequently respond to how you treat them. If the A-ists in your community start treating the B-ists poorly, they should expect the possibility of the B-ists responding in kind. If you get a friend of a friend fired from their job, don’t be surprised if they retaliate if given the opportunity.
Also, the goal is not only to be virtuous yourself but also to promote virtue in others. You can say that we have this great norm where we usually don’t fire people for their views, and it’d be terrible to erode that regardless of what your more object-level views are – because the norm has very good consequences.
Viliam said:
The whole part about “but if we abstained from obviously evil actions, people wouldn’t oppose us any less, so why bother at all” is horribly mindkilled.
Basically, it is based on the assumption that people who disagree with “Social Justice” do it because they simply hate the label, regardless of its content, as their terminal value. If you hate “Social Justice” and SJWs start tweeting about ice cream, you will avoid ice cream for the rest of your life, because that’s simply how non-SJWs are. And if you see a prominent SJW kick a puppy, you will start caring about puppies, but not because the puppies mean anything to you, emotionally; it’s just to spite the SJWs. Your whole life is reduced to opposing anything related to “Social Justice”; you have no other values, no other goals in life.
Well, the world is large and contains many kinds of people, so perhaps there is a person or two out there who actually fit the description. But there are more people who instead start with the dislike of certain kinds of behavior (such as bullying), repeatedly notice people doing that behavior in the name of X, and as a consequence develop a strong dislike of people who talk about X too much. (This can work regardless of how much they agree or disagree with any dictionary definition of X.) Abstaining from activities these people find repulsive would reduce the number of those who end up opposing X.
tl;dr — “my behavior is completely irrelevant for my reputation” is wrong
ozymandias said:
Your argument seems to ignore the possibility that people could disagree on facts. When people, say, comment on r/gendercritical, this is often because they believe that transition causes material harm to both trans people and cis women. On the other hand, people who object to the toxicity of much trans discourse but agree that transition benefits trans people and causes little harm to women become me or Julia Serano or Zinnia Jones or any of the other innumerable trans activists who disagree tactically with other trans activists. My argument does not require that people hate social justice for no reason; it merely requires that they disagree with the factual claims of social justice advocates, which many people do.
Sophia Kovaleva said:
Being opposed to “Western values” and the “American lifestyle” is one of the core and explicitly stated guiding principles of Russian state ideology, so it is in fact the case that there are plenty of people whose “whole life is reduced to opposing anything related to ‘Social Justice’; [who] have no other values, no other goals in life.”
Holly said:
Ozy, do you happen to know of any good deontologist bloggers?
liskantope said:
I’m tempted to round this thesis off (possibly over-simplifying as I do so) as saying that while it’s good to separate the meta level from the object level, some people take this in the wrong direction where they privilege the meta level by completely ignoring any aspect of how the object level might affect it.
I’m reminded of how the classic SSC post “The Slate Star Codex Political Spectrum Quiz” has often been interpreted as implying that the correct way to consider an issue is only on the meta level, and yet I remember Rob Bensinger opining once that that post is an eloquent argument against viewing only the meta level while ignoring the object level.
LeeEsq said:
The consequentialist and deontologist split in ethics always seemed strange to me. In real life, a person striving to be ethical is going to have to split between consequentialist and deontologist ethics depending on the circumstances. Generally, consequentialism will work fine in most cases, but its negative side is that it can be used to justify some very self-centered behavior that can cause other people a great deal of emotional pain or worse on occasion, especially in its consequentialist ethical egoism or hedonic utilitarianism forms, as you put it. In those circumstances, a person is going to have to exercise some deontological discipline and act from a general principle that X is always moral or immoral in order to find the most ethical course of action.
Machine Interface said:
As a moral antirealist reading rationalists debating the best moral system, I more and more understand how atheists must feel when they read theists having complex theological debates about the nature of God. It generally feels unparsimonious, unnecessary, and based on strongly questionable premises.
It seems that at least among rationalists, most people are in agreement that morality cannot be grounded, but still believe that morality is necessary for enforcing cooperation. This seems absurd: all species, all biological systems cooperate just fine without morality — why would humans be an exception?
Mostly, people seem to cooperate best when there are subjectively unpleasant consequences (to them) for not cooperating, but the jump from this to consequentialism as a moral system seems unwarranted: negative-stimulus avoidance is a causal mechanism that we find, again, in nature. A cell in a multicellular organism doesn’t need a moral system not to turn into a cancer; those are strictly mechanical things based on incentives and anti-defection devices.
The assumption that humans are somehow different only works if you inject the religious concept of free will into the mix, if you posit that people can somehow *freely choose*, without any influence from their life history or experience, to behave in a certain way or in another one, thus implying that someone can choose to do “evil”, to disobey their inner moral values even as they know these values are good, because they are an evil person. This has “typical mind fallacy” written in giant glowing letters all over it.
Aapje said:
@Machine Interface
I would argue that the self-destruction devices in a cell are part of a moral system. It’s self-sacrifice for the greater good. It’s not derived from reasoning, but I would argue that a moral system is defined primarily by its actions, not by how the result is achieved.
So I would argue that all species have morality, and scientists have shown that those with higher cognitive functions make moral choices more similar to ours (like punishing defectors even at their own expense).
Humans can go even further, with complex rationalizations to act in ways they feel benefit the greater good, for better or worse. Of course, you can ultimately always call it selfish, by making arguments such as “the person had a selfish desire to leave a world with less suffering”; but I think that is silly. I think it is more useful to accept that humans are capable of self-sacrifice for others (just like cells and many/most/all animals).
I don’t see how free will matters here, unless you want to argue that talking about ethics is useless because people will act deterministically. However, even if humans have no free will, they are still clearly impacted by external stimuli, such as debates about morals, so does it matter?
Machine Interface said:
That seems to stretch the concept of morality beyond the point of usefulness or meaningfulness. This is like how I’ve seen some people apply the term “creationist” to people who believe the universe was initially created by a god but anything after the initial creation proceeded as described by scientists. That may be an etymologically correct usage of the term “creationist” but that’s clearly not what most people mean when they apply it to someone.
Likewise, when people talk about morality, in the narrow sense it implies the conception of an absolute and objective scale of good and evil, the idea that some things and some states are *inherently* and *universally* desirable; and in a broader sense, if we reject the qualifiers “absolute”, “objective”, and “universal”, morality at least implies the notion of *ought*: discussions of morality are not just discussions about how humans behave, but about how they *should* behave.
This is why morality is an entirely human phenomenon: because, as far as we know, only humans can conceptualize the notion of “ought”, only humans can see a situation that does not affect them or anyone they know and think of it as “unfair”, as an “injustice” that “should” be rectified.
But you broaden the concept even further by calling morality merely what *is*. That seems to completely dilute the concept. What does morality even mean if you can describe everything from a superorganism to non-living replicating systems like viruses or prions as behaving morally? If all the incentive mechanisms present in nature are “moral”, then what point is there even in discussing the comparative value of different moral systems? Surely whatever humans end up doing will be moral anyway, since it will be the product of human nature!
If you go that way, then the complex rationalizations of humans are indeed just that; we debate for millennia how we ought to behave, without it having any significant influence on the way we actually do behave, which doesn’t change much over time. Modern humans are less violent, but that seems to have more to do with rising IQ, and thus less impulsivity, more awareness of consequences, and longer time preferences overall in the population; that is, human nature hasn’t actually changed, only the distribution of humans within a trait has.
We then rationalize this as the effect of whatever moral theory we are currently subscribing to, even though there’s little evidence that deontologists behave significantly differently from consequentialists (or indeed from people who haven’t bothered to define their moral orientation). Or, conversely, when we see things we don’t like and rightly or wrongly perceive them to be widespread, we rationalize *those* as the product of a lack of morality or of bad morality.
tl;dr: morality is usually understood as “what ought to be”; if we define it merely as “what is”, then every possible human action is moral and discussing which moral system is the best is a meaningless endeavour.
Aapje said:
I am not arguing that debating morality has no effect on a person’s behavior! However, believing in a moral system doesn’t necessarily make a person behave that way because humans aren’t purely rational.
Let’s transpose the discussion to physics. Imagine a water bottle on my desk. I desire for the water to enter my stomach, so I’m not content with the current physical state of the world. I now direct my arm and hand to put the bottle to my lips and drink. Through my knowledge of physics, I was able to manipulate physical reality to achieve an outcome.
Now imagine that I believe in telekinesis. I can believe in this 100% and/or believe that this is how the world should work, but the water will still not telekinetically move into my stomach. The rules of physics are what governs what is possible, not my mental model. My mental model can only tell me what is possible within the boundaries of reality and must be consistent with it to be useful. However, it can be consistent in many different ways (I could also choose to pour the water over my desk, for example).
To bring it back to morality: I would argue that humans have certain hardware, which means that they cannot execute just any morality. If a moral system places demands on people that they cannot meet, then the moral system is like telekinesis: something that cannot happen in reality regardless of how much some people want it to be true. So my argument is that morality must be subservient to what people can actually do. You can still argue that Moral System 1 is better than Moral System 2, but not based on which gives the better outcome if people follow the moral system perfectly, but rather on how people actually behave if you make them believe in the moral system*.
* This is why I favor social-democratic capitalism over communism, for example, because while communism has a better outcome if people act as the theory demands, social-democratic capitalism has the better real outcome.
Machine Interface said:
If I understand your point correctly, you define morality as instructions for how to behave in order to reach certain goals. In that case, yes, I would agree that we can compare moral systems, even if the goals are arbitrary and relative.
However, I do feel that this still fails to capture at least part of what people usually mean when they talk about morality. If we discuss morality only in terms of *how* goals should best be pursued and of which morality will make its believers act in a way that we desire, this neglects the discussion of the *desirability* of the goals themselves. Presumably “morality” isn’t just about how best to reach the goals; it’s about having good goals as well.
Because if we exclude that aspect of the question and concentrate solely on the adequacy of the means to the end, we are left with concluding that a moral system that allows you to (say) fulfill your desire to kill all the Jews is a better one than one that fails to fulfill your desire to lower antisemitism in the world. I have no problem with embracing that conclusion *if truly morality is purely a question of the adequacy of the means to the end*, but I suspect this would leave most people deeply unsatisfied (at least those who are not moral relativists).
Aapje said:
I’m not arguing that the goals are irrelevant, but rather that the goals must be achievable for them to be worth taking seriously.
To bring it back to physics: If a person told you that their dream is to make people’s lives better through telekinesis, then my response would not be to debate the merits of a world with telekinesis vs a world without it, but rather to ask them for evidence that telekinesis is possible. It’s quite irrelevant how much better people’s lives would be with telekinesis when there is no prospect of making this into reality.
We know far less about human functioning than about physics, so it’s harder to predict what will work, but it should be evident that there are strong limits on people’s ability to act in accordance with moral systems.
What I see is that a lot of people lose themselves in Utopian thinking, where they believe that just because an outcome is desirable, the best outcome results from pressuring people into it. Yet when you demand that people act in ways that they can’t, the end result is often that people pretend to follow the norm while subverting it in covert ways. So the outcome can then be (a lot) worse compared to working within people’s actual abilities as much as possible.
For example, the actual outcome of a harsh anti-drugs stance is not a world without drugs, but a world with drug mafias, polluted drugs, and people switching to different drugs (like alcohol). As such, the anti-drugs moral system should be evaluated based on these effects, not based on a simplistic evaluation of how nice the world would be if no one used drugs.
Pingback: Rational Feed – deluks917
alwhite56 said:
I’d say that there’s a very powerful consequentialist argument for not publicly shaming people. It doesn’t work.
I would think the goal would be to get people like Milo to change their minds, and the process of getting people to change their minds starts with not shaming them. The path where you don’t get them to change their minds seems a lot more violent.
https://www.psychologytoday.com/blog/longing-nostalgia/201705/why-shaming-doesnt-work
https://www.psychologytoday.com/blog/creative-synthesis/201501/shame-and-motivation-change
ozymandias said:
There is essentially zero chance that Milo will ever read my thoughts on him, much less be in a position to change his mind based on them. I think “this behavior is horrible and evil” can often have quite positive effects when directed solely at an audience that won’t feel personally targeted.
alwhite56 said:
I think you just made my point though. “This behavior is horrible and evil” is perfectly fine and good for discourse. “This person is horrible and evil” tends to turn people off and serves to create more division.
ozymandias said:
I think people also feel shamed when informed that their behavior is horrible and evil.
Sigivald said:
(I continue to find it weird that these are the Only Three Options For Decision-Making About Ethics, So Says Philosophy, but anyway.)
Well, if you can find a coherent fourth that doesn’t just devolve into one of those three, you’ll be incredibly famous and at least in principle get a Nobel Prize In Philosophy.
(In principle, because there is no Nobel for philosophy.)
andrewflicker said:
Technically, philosophy is considered a subfield for the Nobel in Literature; there have been philosophy wins in the past, though not many.
Protagoras said:
Three is actually too many. There’s only consequentialism, with deontology and virtue ethics being alternative stories about which consequences matter (naive attempts to translate them fail, and people tend to foolishly conclude on that basis that it can’t be done, but more sophisticated approaches show otherwise).
Eric L said:
I think the categories work because they pretty neatly divide the universe of conceivable ethical systems on the axis of time. When determining the morality of an action, do you look forward in time to what will result from the action (consequentialism), do you look backward to the motivations, emotional state, and personal qualities causing you to take that action (virtue ethics), or do you stick to the present and look at the action itself (deontology)? So it seems plausible to me that this categorization is exhaustive. The one hole is that virtue ethics is not necessarily exhaustive of backward-looking ethical systems, because it only covers looking backward within the individual taking the action; so maybe the Nobel awaits whoever discovers another backward-looking system.
vamair said:
The idea is probably that I’m (thinking of myself as trying to be) more loyal to “good” and “true” than to “my tribe’s views”, and I don’t want the tribe to attack me when my views and theirs start to differ. I’d prefer them to try and discuss it charitably instead. Same for the other dissenters; see the Asch conformity experiment. And being charitable to enemies means that the tribe is still going to be charitable to me and others like me even if they end up labeling me as the enemy, and it is an internal signal that dissent is not going to be punished harshly. This may also be a signal of me not being completely loyal, though.
You can say you’re not going to influence the behavior of your tribe, but you still do – at least by selection.
ozymandias said:
Obviously my behavior has an influence on other social justice people and I never claimed it didn’t. It does not, however, have much influence on my ideological enemies.
vamair said:
That was a general “you”. I should have probably said “someone” instead.
Walter said:
I definitely agree with this post. I talk with consequentialists from time to time, and while there are a very few who seem to be ‘for realsies’, most of them just want to stick with the Ten Commandments and make up non-embarrassing reasons to behave that way.
I dunno where I read it (maybe this blog), but there was a discussion of ‘fake consequentialism’, which was pretty great. The idea is that if only consequentialism can be used to determine whether something is good or evil in the public eye, then people with an interest in judging stuff will invent consequences.
I think Haidt’s “The Righteous Mind” has a chapter or so about this. You are definitely right that a lot of fake consequentialists should just admit that they are deontologists.
ADifferentAnonymous said:
I think there *is* a significant effect of one side’s behavior on the other.
For a specific example, I think the Eich affair materially increased the probability of a conservative company firing someone for being liberal, e.g. Hobby Lobby firing an executive revealed to have given to Planned Parenthood. Perhaps this is a double crux for us on the matter?
ADifferentAnonymous said:
Expanding my thoughts: I believe that repeated violations of an actual shared norm *will* undermine that shared norm. This matters if a) there actually is a bipartisan shared norm*, and b) it’s a good norm.
a) gets confusing because it’s not always clear what exactly the shared norms are. Take free speech: it’s pretty clear no one has ever really accepted the straw version where all speech should be free from criticism, and almost everyone in the US accepts the restrictive version where publishing the wrong opinion shouldn’t subject you to state or vigilante violence. Things like firings and boycotts are a grey area in this respect.
A lot of appeals to the meta-level seem to skip b), but it’s entirely possible for a shared norm to systematically favor one side over the other, or possibly even be bad for both sides.
That said, I think that after these two caveats, the Kantian imperative is a pretty good heuristic for consequentialists.
*New norms can come about, but counting on it happening through unilateral action is probably not a good idea.
Murphy said:
Two sides in conflict often seem to adopt norms.
Hell, look at warfare. Sure, you can shoot hundreds of thousands of each other, but don’t even think about using nuclear/chemical/biological weapons.
Even when one side sometimes violates the norm a bit, it can be a bad step for the other to start violating it even harder.
You choosing to *not* pretend to surrender in order to ambush your enemies probably has little to no effect on what those SS troopers are doing a few countries over, but once the norm is eroded to nothing, it’s gone.
Norms quite often favor one side. In the case of two countries in conflict where one has superweapons, a norm against using them may heavily favor the country without superweapons. You still probably don’t want to erode that norm.
Aapje said:
Norms don’t just have value because they make it harder for the opponent to do the same thing (or worse) back to you, but also because they provide legitimacy. ‘We are just and they are unjust’ is a typical claim. It’s a lot stronger if you can point to sacrifices that you make for justice and/or if you portray the other side as doing unjust things.
As for nukes, a major factor there is that both the US and Russia have nukes, and it’s important from a game-theory perspective to have a strong image that you won’t do first strikes but will retaliate hard. Furthermore, from an accident perspective, we want Russia and the US to assume that detected attacks are probably sensor malfunctions rather than actual strikes. These assumptions have already saved us from nuclear war a few times. Nuking a third-party country makes it more likely that future observations are regarded as a real attack.
pansnarrans said:
I say things like “But what if your political opponents did that?” all the time. And it’s not because I believe that being reasonable will magically make my opponents reasonable. It’s not even that I think that there’s some kind of strategic value to be had out of it. It’s because inconsistency in moral arguments drives me insane. Regardless of who’s arguing with who, anyone who’s openly hypocritical will piss me off.
I recognise that this is because my brain turns ‘consistency’ into a sacred value, and that it’s probably not a thing to be proud of. But I don’t think I can get rid of it.
sniffnoy said:
I feel like this post is basically just looking at the weakest arguments against its position and essentially calling things harmless that really aren’t. I had intended to write a comment actually arguing for this, but I keep putting it off and I don’t think I’ll ever get around to it. So, uh, sorry for just contradicting rather than arguing. But I still wanted to register my objection.
Pingback: Free Speech as Legal Right Versus Ethical Value | Thing of Things
enyeword said:
Against Niceness, Community, and Civilization