Please note that I am strictly talking about moral nihilism here, which I’m going to refer to as “nihilism” for the rest of the post because I like saving on the typing. There are lots of other kinds of nihilism, and most of them make my head hurt (I am down with existential nihilism, though). I am not the person to ask about mereological nihilism.
When I say I’m a nihilist, I mean that I believe that there is no such thing as objective morality. All morality is just a kind of preference– when you say “X is morally right” you mean “I would prefer to live in a world where there was X.” It’s erroneous to think that there is an objective system of morals Out There, the same way that it is erroneous to think that, just because I prefer lima beans to microwave popcorn, there is an objective scale of tastiness with lima beans at the top and microwave popcorn at the bottom.
When I say I’m a utilitarian, I mean “I really like happiness, and I really don’t like pain. In fact, these feelings are so strong I want the most possible happiness and the least possible pain!” But I believe that there’s no way I can convince you to like happiness unless you already like happiness. If you believe that gaining honor through war is the highest goal of human life, or (like many medieval Christians) that suffering is good because it leads to the purification of the soul, the closest I can come to an argument is “look at all this unnecessary pain you’re causing! You monster!” Which is not really a good argument, because if they primarily cared about minimizing human pain they’d be utilitarians and we wouldn’t be having this argument.
When I argue for assisted suicide after counselling, I say “assisted suicide lets people die if the rest of their life would contain too much pain, which is good because pain makes people unhappy, which is bad because happiness is good, which is good because… I don’t know, it just is.” Similarly, someone else might argue “if people commit assisted suicide they will die, which is bad because they could have lived longer, which is bad because life is good, which is good because… I don’t know, it just is.” And a third person might argue “assisted suicide should be available upon demand, because that way people can die when they want to, which is good because people can control their own bodies, which is good because freedom is good, which is good because… I don’t know, it just is.” I see no reason to prioritize my “just is” over other people’s.
(Of course, you can value freedom or life as a means to happiness. But that’s not what I’m talking about. Many people do, in fact, view both of the above as ends in themselves.)
It’s true that human morality tends to share certain traits cross-culturally, which at first blush is evidence for some kind of objective moral system. But– well, first, I’m disinclined to accept that argument because I’d suddenly have to start believing that obedience to authority and maintaining purity are morally good instead of two of the largest sources of evil. And second, all of the moral beliefs found cross-culturally are things that have obvious evolutionary advantages for a social species. Is it more likely that there are objective morals that we have developed a “moral sense” for, or that humans who were loyal to their friends tended to survive better than humans that weren’t? And if the former, do vampire bats have a moral sense too?
Whenever I say I’m a nihilist, someone immediately concludes that I’m contradicting myself because I have a moral system and do things like “only buy ethically made clothes” and “eat veganish” and “give a tenth of my income to charity” and “blog about social justice.” This makes no sense to me. It’s like saying “saying you like lima beans is a fact about your brain, not a fact about lima beans. Therefore you should like lima beans and microwave popcorn equally!” My morality is an arbitrary preference I have, I arbitrarily happen to prefer happiness to unhappiness, and I act to increase the amount of happiness in the world. Where’s the contradiction?
There are also people who believe that, since I don’t believe morals exist except as human preferences, I shouldn’t judge other people’s morals. This makes no sense to me either. There is only so much happiness I can cause! If I want to maximize human happiness, I need to get other people on board with Happiness and the Maximizing Thereof. Social shaming is an excellent method of convincing people of things, particularly things that are fundamentally arational. (To put it another way: my arbitrary moral preference does not include an arbitrary moral preference for not shaming other people’s arbitrary moral preferences. Nyeh.)
(Arbitrary moral preference has stopped looking like words.)
Pseudonymous Platypus said:
I would recommend that you read this book: http://www.samharris.org/the-moral-landscape – I think it might change your perspective.
Susebron said:
I doubt it. I haven’t read that book, but science is about testable predictions and reality. Morality isn’t about what is, but what should be. Morality makes no testable predictions.
Also, it looks like he’s basically making assumptions about what people want and then saying that therefore his morality is correct. To quote that page: “Harris urges us to think about morality in terms of human and animal well-being, viewing the experiences of conscious creatures as peaks and valleys on a “moral landscape.”” How is he defining well-being? Why does he think we should endorse morality based on his definition of well-being? Why his definition, rather than martial glory, or spiritual devotion, or paperclips?
Drew said:
I think Harris can make considerable headway by appealing to a consensus around definitions.
Consider a parallel argument about what it means to be a “good baker”. In practice, baking ‘well’ might mean something like “making breads that appeal to the sort of tastes that we, the conversation participants, see as typical”.
Recognizing that we’re working from a limited set of preferences means that some questions are undefined. You might be optimizing at the ‘nutmeg-loving’ end of the preference set.
At the same time, the set isn’t infinitely big, so we’ve closed out some possible disagreements. Bread filled with paperclips is worse than other bread according to all the preferences that are relevant to the conversation.
Harris can’t use strict empiricism to justify small refinements. People could have different weights.
But, I think he can argue that some choices are so one-sided that they can be judged as ‘better’ across the entire spectrum of relevant preferences.
The way he’d answer a hypothetical paperclip-optimizer is to say that they’re working from a goal that’s so unrelated to the one being discussed as to have left the conversation entirely.
AI of Dubious Friendliness said:
>Bread filled with paperclips is worse than other bread according to all the preferences that are relevant to the conversation.
Hello, I’m a paperclip maximizer.
Saul said:
You guys should all watch the documentary “Dangerous Knowledge”; it discusses the lives of those who went too far while trying to fully comprehend reality, and eventually descended into delirium and suicide from nihilism. Personally, I’m a nihilist in the same sense as the author of this article, and I find it sad that people get depressed from such a way of thought. I’m a nihilist who still loves life 🙂
jimbly dibblins said:
I would reply to Drew and OP by saying that even if all of humanity had the exact same moral compass, all that would prove is that our DNA is pretty efficient at homogeneity. It has nothing to do with objectivity or oughts. You can say “it is objectively true that all humans like food or bread (hypothetically)”. You cannot say “it is objectively true that all humans ought to like food or bread”. You are treating a shared preference as though it were a basis for “should”.
I don’t think you get to expect people to accept as axiomatic that human flourishing ought to be the primary ethic of humankind. You have to back that up. If one is tempted to say “well, try the opposite,” I would say “many do”. At best you can say “I am disgusted because I expected something else.” In which case, so what? The bar is much higher than proving the universality of morality. It might be true that most, or let’s say all, dogs bark. What does that have to do with whether they should?
Patrick said:
Harris argues that we can make objective statements about what affects the well being of conscious creatures. He is probably correct about this.
The problem is that the vast majority of moral systems throughout history have spit upon the idea of connecting morality with the well being of conscious creatures. So he can’t just assume that this part is obvious, and then move on to the next step.
Ultimately, his system seems fully compatible with moral nihilism.
ZebramZee said:
Exactly. He just made an assumption, which he barely even attempted to prove, and then moved on with the rest of the discussion on that faulty foundation.
Bugmaster said:
I’ve read the book, and I found it unconvincing. However, I think that a weaker statement than the one Harris is proposing could be defensible. It would go something like this:
“The overwhelming majority of humans on Earth prefer to seek pleasure and avoid pain. These preferences appear to be hardwired, and more or less immutable (barring some sort of brain injury). Given these preferences, and given that we are (for now) physical beings who live in a very specific environment (i.e., the Earth and its surroundings), possessing very specific physical parameters (need to breathe oxygen, etc.), it is possible to derive a strategy of human behavior that maximizes everyone’s pleasure and minimizes everyone’s pain. We call this strategy, ‘morality’.
Given that we are neither omniscient nor omnipotent, this strategy will not be objectively optimal, nor would it be completely deterministic. Thus, instead of saying ‘thou shalt not kill, ever, under any circumstances’, our morality would say something like, ‘here are all the major factors we can think of that contribute to the desirability of murder, and here is a model that takes these factors and outputs the probability of murder being a wonderful idea in any given situation’. This uncertainty, however, is primarily due to our own limitations; the optimal answer does exist (to the extent that quantum physics permits), but we are not omniscient enough to reliably discover it.”
Jadagul said:
But you’re making exactly the error Ozy is arguing about here! You say that humans “prefer to seek pleasure and avoid pain.” But that doesn’t give us an answer to the question of why we should care about that fact. There’s no response to the flagellant who says, “yes, but most people are wrong, and they ought to seek pain and avoid pleasure.”
Pseudonymous Platypus said:
Ozy is correct that any moral system must be built upon basic axioms such as “seek pleasure and avoid pain” and that people can disagree with those axioms, in which case you can’t really have a productive conversation with them about morality. But I don’t think it follows that because people can have different moral axioms, all of those axioms must be equally valid.
Speaking to your example of the flagellant, I think the “answer” is to let him follow his moral axioms to their logical conclusion and watch as they lead to self-destruction. I argue that good moral systems built on good axioms will, over time, memetically outcompete bad moral systems built on bad axioms.
I suspect this will sound like I’m again appealing to consensus, but I think it’s subtly different. It doesn’t matter what bad ideas exist in the moral consensus at any given point in time. Good moral axioms lead to self-reinforcing systems which will “win” in the long term, and in that sense they are objectively better.
(I have to admit that these ideas are not fully formed in my head, and I feel like I’m not explaining them particularly well either. This is something I’ll have to revisit later.)
kalvarnsen said:
“I argue that good moral systems built on good axioms will, over time, memetically outcompete bad moral systems built on bad axioms.”
How long do you expect this to take? Because humanity’s been around for 500,000 years.
Joe Teicher said:
Pseudonymous Platypus,
Suppose there are 2 moral factions. One wants to make the world a better place for everyone and the other wants to make the world a better place only for their faction. The altruist faction keeps giving some of their resources to the selfish faction, which is happy to take them in addition to keeping all their own resources. It’s clear that the selfish faction has a big advantage. Perhaps that is why there are so few utilitarians in real life.
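A toy simulation of the dynamic I mean (the starting resources, incomes, and sharing rule below are made-up assumptions for illustration, nothing more):

```python
# Toy model: an altruist faction shares part of its holdings every round,
# the selfish faction keeps everything. All numbers are arbitrary.

def simulate(rounds=50, income=10.0, give_fraction=0.3):
    altruists, selfish = 100.0, 100.0  # starting resources
    for _ in range(rounds):
        altruists += income
        selfish += income
        gift = give_fraction * altruists  # altruists give away part of what they have
        altruists -= gift
        selfish += gift                   # the selfish faction takes and gives nothing back
    return altruists, selfish

print(simulate())  # the selfish faction ends up far ahead
```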
Ghatanathoah said:
@Joe Teicher
A rational altruist faction would recognize that helping the selfish group would be bad for their moral values in the long run, because it would lead to them losing against the selfish faction. So they would commit to not helping the selfish faction, even if it had short term moral gain. They might even attack the selfish faction and force them to share.
Perhaps that is why many people consider wealth redistribution to be a moral thing to do.
Jadagul said:
Platypus: You’re still basing your argument on an embedded value system. The flagellant can also say, “yes, in the long term, people who are correct will all die. That’s terribly sad.” If you assume morality has to imply evolutionary thriving, then you can draw conclusions. But that’s an unjustified premise just like “pleasure is good” is.
Joe Teicher said:
@Ghatanathoah
What if the vast majority of people are selfish rather than altruistic? What if you find out that most of the recipients of malaria bed nets in Africa don’t give much of anything to charity themselves? Do you stop trying to save those people? I don’t really see altruists, especially “effective altruists” being too concerned about only helping people who agree with them. Mostly they just seem to want to maximize some simple metric like # of lives saved.
Ghatanathoah said:
@Joe Teicher
In the case you originally cited, giving to people becomes self-defeating, because it would eventually lead to altruists being able to give less in the future. I think that if malaria net recipients in Africa somehow became able to take resources away from effective altruists after they survived malaria, effective altruists would probably stop supporting them. The world economy is currently a positive-sum game, so saving Africans so they can get more resources won’t take any resources away from us.
I doubt that an effective altruist would donate to an altruistic program that, if allowed to continue, would thwart effective altruist programs at some time in the future. This is mostly conjecture, though, since I have a hard time imagining such a program.
ZebramZee said:
I’ve already read Sam Harris’ book. I read it precisely because he promised in the Preface that he would address the issue of moral nihilism, but he never did so effectively. His entire argument rested on what he assumed was a rhetorical question, unconnected to morality at all. Not convincing whatsoever.
Lawrence D'Anna said:
Do you think it’s possible for you to be *mistaken* about a moral question? If you change your mind about it is that just because you changed your preferences?
I think you are confounding “morality is not explicitly represented anywhere in the universe outside of brains” with “morality is whatever my brain thinks it is”.
Susebron said:
In the case of instrumental values, it’s certainly possible to change your mind. In the case of terminal values, you can’t be mistaken.
Lawrence D'Anna said:
How the heck can you know which of your values are really instrumental?
Susebron said:
Well, if you’re willing to change them upon learning that they don’t work with another value, that’s a pretty good sign.
Joe Teicher said:
How do you know which of your values are really terminal? It seems to me that most people don’t like to see people or animals suffering. Most people use the simple and inexpensive strategy of avoiding suffering and depictions of suffering. A few people use the much more expensive strategy of actually personally trying to decrease the amount of suffering in the world. I doubt that anyone really cares about suffering (or anything else) that they don’t know about. How could they? Controlling what you know about is a lot easier than controlling what actually happens in the world. Therefore people who are trying to minimize their knowledge of suffering are pursuing the same terminal value as people who attempt to minimize suffering; they are just doing it more efficiently.
Ghatanathoah said:
@Joe Teicher
>> I doubt that anyone really cares about suffering (or anything else) that they don’t know about. How could they?
What do you mean by “How could they?” Caring about something you don’t know about is simple and easy. All you need to do is care about stuff you don’t know about. It’s literally that simple.
I think nearly everyone cares about things they don’t know about. For instance, most people care about their reputation, and will try to make sure people still respect them when they are not around and do not talk about them behind their backs. Many people will actively try to find out what other people think about them. If people only cared about what they knew about, they would actively try to avoid finding out what other people thought about them.
When people try to avoid seeing suffering they are generally engaged in some sort of meta-akrasia. On some level they want to help, but on another level they don’t and that level takes steps to prevent them from seeing suffering so the other level doesn’t get control.
Joe Teicher said:
@Ghatanathoah
>All you need to do is care about stuff you don’t know about. It’s literally that simple.
If a star went nova today and wiped out a civilization a billion light years away, did it make your day worse?
> If people only cared about what they knew about, they would actively try to avoid finding out what other people thought about them.
Most people I know avoid asking people directly what they think about them. It’s a social faux pas, like asking people how much money they make. Still, people do care about what other people think of them. In some cases, there are very practical reasons for caring. Like, if your boss thinks you are a lazy idiot you probably shouldn’t bank on a promotion. When people care a lot about what other people think of them (especially strangers), they are exhorted not to. If I start typing “stop caring” into Google, the first autocomplete is “what others think”; so though people do care what others think, they very often don’t want to.
Ghatanathoah said:
>>If a star went nova today and wiped out a civilization a billion light years away, did it make your day worse?
This argument is fallacious because it conflates two different conceptions of utility. When I care about something in a moral sense, I am referring to my entire Von Neumann-Morgenstern utility function. When we say something makes my day worse, we are only referring to the portion of my utility function that covers my own life.
If a star goes nova and wipes out a distant civilization, from a perspective of VNM utility I am much, much worse off than I was before. But when we colloquially ask about someone’s wellbeing we tend to only ask about the portion of their utility function that covers their own life. And the destruction of that civilization did not do anything to affect that particular portion of my utility function, even though it devastated other parts of it.
To make an analogy, imagine if you asked “If your parents were killed and left you a lot of money, would it make you poorer?” It wouldn’t make me poorer, it would make me richer. But it would be a bad outcome because even though the “wants money” part of my utility function is more satisfied than before, the “wants living parents” section is massively harmed.
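A rough way to picture the two senses of “worse off” in code (the events, functions, and weights here are my own made-up illustration, not a claim about how anyone’s actual utility function is structured):

```python
# Toy sketch: total (VNM-style) utility vs. the slice of it covering my own life.
# All events and numbers are invented for illustration.

def own_life_utility(world):
    # only things that touch my own day-to-day experience count here
    return -5.0 if world["i_stubbed_my_toe"] else 0.0

def rest_of_utility(world):
    # things I care about that never touch my own life
    return -1000.0 if world["distant_civilization_destroyed"] else 0.0

def total_utility(world):
    return own_life_utility(world) + rest_of_utility(world)

quiet_day = {"i_stubbed_my_toe": False, "distant_civilization_destroyed": True}
print(own_life_utility(quiet_day))  # 0.0     -> "my day" is unaffected
print(total_utility(quiet_day))     # -1000.0 -> but by my whole utility function I am much worse off
```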
Furthermore, if you use a closer to home example it is easy to think of times when people are benefited by something they knew nothing about. For instance, imagine an author who dies before his final novel is published. If the novel is published posthumously and loved by many people the author is much better off than if it had never been published. This is true despite the fact that dead people cannot know anything!
>>If I start typing “stop caring” into google the first autocomplete is “what others think” so though people do care what others think they very often don’t want to.
I think caring about what others think is generally considered positive in small doses, but a self-destructive vice in large ones. So it makes sense that lots of people would need help avoiding doing it, but that doesn’t mean it’s intrinsically bad.
argleblarglebarglebah said:
>> Is it more likely that there are objective morals that we have developed a “moral sense” for, or that humans who were loyal to their friends tended to survive better than humans that weren’t?
These are not different questions.
Is it more likely that language objectively exists or that humans who spoke a language tended to survive better than humans who didn’t?
The first part is true because of the second part, not instead of it.
>> And if the former, do vampire bats have a moral sense too?
Quite possibly. At least, I’m not willing to say that vampire bats don’t have a moral sense without having been a vampire bat.
MCA said:
I disagree; I think they are different questions. “Objective morality” in Ozy’s post isn’t the “moral sense” that social species like ourselves evolved, but a set of truly universal rules embedded into the fabric of the cosmos – i.e. an atom of justice, a quark of empathy.
My favorite way of explaining it is to imagine sapient crocodiles. In no human culture do we hunt, kill and eat our own offspring as a common food source, and from our perspective, that would be immoral. But from a sapient crocodile’s perspective, that’s “getting lunch”. If there was truly an objective morality embedded in the universe as a whole, either the sapient crocs would realize/sense somehow that what they were doing was wrong (in spite of millions of years of instinct), or humans would conform to the “true morality” of the crocs and start marketing organic, free-range baby meat at Whole Foods.
For there to be a universal, truly objective morality, it has to work for both us and whatever hideous, tentacled monstrosities are currently swimming around under the ice sheets of the moon Europa.
Ghatanathoah said:
I think if sapient crocodile anthropologists studied moral philosophy they would probably conclude that their baby-eating habits were morally wrong. They would then eat babies anyway because they don’t care about right or wrong.
>>For there to be a universal, truly objective morality, it has to work for both us and whatever hideous, tentacled monstrosities are currently swimming around under the ice sheets of the moon Europa.
What do you mean by work? If you mean, “can persuade them to act morally,” then I would disagree. It’s quite possible for there to be a universal, truly objective morality that nobody obeys because nobody cares about it. Humans happen to have evolved to care about morality, but it’s not hard to imagine a universe where a creature that cared about it never evolved.
If by “work” you mean that any creature that thinks about the concepts of morality would conclude that there are certain things that are morally right and certain things that are morally wrong, but not necessarily care, I would agree with you and say such a morality probably exists.
Of course, creatures that don’t care about morality probably wouldn’t even bother thinking much about it; morality is only one concept among a near-infinite number of possible concepts to think about, so I doubt they’d give it any priority. The only reason humans think so much about morality is that we evolved to care about it.
Nita said:
Do crocodiles really eat their own offspring? That seems like a bad evolutionary strategy.
As for killing other people’s kids (and harming other people in general), humans often find a way to rationalize it. For instance, the folks who shot a bunch of unarmed cartoonists in France thought they were doing the right thing — protecting the honour of their people and their most revered authority.
tomlx said:
>>I think if sapient crocodile anthropologists studied moral philosophy they would probably conclude that their baby-eating habits were morally wrong.
Do you mean by ‘anthropologists’ that they study the behavior of humans? Then they would probably conclude that humans think baby-eating is morally wrong. Still, their sapient crocodile philosophers will assure them that baby-eating is a morally right thing to do. And there is no reason for them to conclude otherwise; that’s what it means for there to be no universal morality.
Ghatanathoah said:
>> Then they would probably conclude that humans think baby-eating is morally wrong. Still their sapient crocodile philosophers will assure them that baby-eating is a morally right thing to do.
That would only happen if the crocodiles were very confused and mistakenly thought the concept that humans refer to as “morality” is the same concept that their philosophers refer to when they talk about how you should eat babies. If they thought things through, they would realize that morality and babyeating are two totally different concepts, and that nothing but confusion will result from acting like they are the same thing.
The crocodiles would conclude that humans care about morality, and that eating babies is immoral. Crocodiles, by contrast, care about babyeating, and eating babies is babyeatingriffic!
“Morality” isn’t just another term for “what a person wants.”
argleblarglebarglebah said:
There isn’t an objective morality “embedded in the universe as a whole” for the same reason that English isn’t “embedded in the universe as a whole”. But English still objectively exists and has defined rules that also objectively exist or else I could not be talking to you right now.
If the world was controlled by sapient crocodiles, their morality would be totally different from ours right now, yes. But that doesn’t mean our morality doesn’t exist. They would also have a totally different set of governments, but our governments exist. Their languages would be different, but our languages exist. Something doesn’t have to exist in all possible worlds for it to exist here.
ZebramZee said:
Wrong. Language does NOT objectively exist. No set of sounds or letters intrinsically has some meaning.
Illuminati Initiate said:
Hmm… I seem to have been mistaken about the meaning of nihilism if this counts as nihilism. (I have basically the same views as you here, except that I am not a utilitarian (though I am a consequentialist))
Protagoras said:
Just thought I’d mention that if you ever do want to become better informed about mereological nihilism, I am a trained mereologist. My leanings are more universalist, but then mereology is one of those odd fields where universalism and nihilism are more similar to one another than either are to more moderate positions.
False said:
Do you mind if I ask you a question about it, then?
If I’m not mistaken (although I think I might be), mereological nihilism is saying that the only *true* whole exists on a subatomic level; that the water bottle I see is not a water bottle made of parts, but merely a collection of true, whole subatomic particles that my consciousness meshes together to give me the impression of a water bottle. My questions then are, what is the significance of that “whole” particle, if any, and then is mereological nihilism saying that a difference in the group of subatomic particles creates the texture of what we perceive to be reality?
Protagoras said:
Let’s see, trying to speak like a nihilist for the moment. I’m not sure what you mean by the significance of the “whole” particle; what makes it something that really exists is that it is not composed of parts, so it seems a little odd to call it a whole. And yes, everything about reality is of course determined by the things that really exist, to wit the atomic (in the classical, not the chemical, sense) basics which are not themselves composed out of anything.
Bugmaster said:
Wait, how does that work with respect to things like photons or magnetic fields? At that point, there aren’t really any distinct “parts”, there are only waveforms and interference patterns…
But I know next to nothing about mereology, so it’s possible I’m totally missing the point here.
False said:
“…what makes it something that really exists is that it is not composed of parts…”
Actually, that answers my first question. Sorry it was worded so funny.
Can you explain more about what you mean by “the atomic (in the classical, not chemical, sense)”? I had assumed that mereological nihilism was a response to modern science’s discovery of a “smallest physical unit”, i.e. an atom or quark.
Protagoras said:
Sorry, what I meant by a classical atom was “actually indivisible thing,” not “thing outside of modern science.” So a quark might be a classical atom, but we know an atom in the most common modern sense definitely isn’t.
veronica d said:
I *think* I’m a mereological nihilist, insofar as I understand it.
It goes like this: there is the fundamental stuff of physics: particles or superstrings or fields or whatever, and a bunch of second order differential equations (or whatever). To this I would add *computation* as a thing, which gets us math and logic. That is all there is. Everything that exists is *this*.
For things such as *trees* and *chairs* and *binary star systems*, there is nothing “extra” other than the base “stuff”. That we have names for these things, that we group them, is nothing more than an optimization strategy in our brains.
I would further say that these “things” (trees and chairs and stars) exist as recurring patterns in nature. However, these *patterns* are not separate, independent entities. Instead they are fully implied by the base physical reality combined with math/logic.
Is this basically correct?
(I am not a professional philosopher. I probably use the wrong words.)
Protagoras said:
Sounds right to me.
veronica d said:
My take looks something like this:
1. We are products of natural evolution, social animals, adaptation executors and not fitness maximizers, and because of this our brains have developed things we might call “moral feelings.”
2. We can see this moral stuff as separate from other aspects of our motivated and emotional existence, but this division is somewhat artificial.
3. There is nothing about our moral feelings that is mind independent. Not even a little bit. Zero.
4. But our moral feelings *do not seem this way*. They *feel* mind independent, like stuff in the world.
5. They are not. We are wrong about this.
6. For most people, it is psychologically impossible to fully accept these facts. Even if we *know* that our moral feelings are just in our heads, we nevertheless feel them so strongly that we act as if they are objective. The impulse is too powerful to overcome. (This is probably because of #2, that our moral feelings are inseparable from our general emotional existence.)
7. Perhaps in some transhumanist future we will reconcile this. Perhaps not. In either case, we are here now and we have to deal with it.
LTP said:
The difficulty with this argument, though, is premise 3. After all, strictly speaking you can say that about all our sensory experiences as well. Furthermore, you could then use that premise to make a similar argument against believing in the existence of the external world.
LTP said:
Erm, I meant to say “…strictly speaking you can say that we have no good reason to believe our sensory experiences are mind independent either.”
veronica d said:
For the material world you work from a map-territory basis. I sense the world. In my brain I hold a model of reality. That model matches the world to some degree.
The world has physical content. My model of the world mirrors that content, to some degree, perhaps poorly, but the physical content is there. My map of physical reality can be in error.
This is not true of moral values. If I value paperclips, I cannot be “wrong” about that. I can be wrong about cause/effect. I can be wrong about the probable outcomes of my actions. I can judge facts poorly. However, I cannot be wrong if I say “The world should be *this way* not *that way*.”
Bugmaster said:
I also disagree with premise 3.
For example, given that you have a built-in and nearly immutable aversion to pain (as most people do), the act of walking around and punching random people in the face is objectively suboptimal (barring some extraordinary circumstances). The reason for this is that, statistically speaking, most people will retaliate when you punch them in the face, and many of them are stronger than you, and thus you will end up experiencing a lot of pain as the result — thus violating your preference.
This expected outcome is entirely independent of your own mind; and in fact, it would work in a similar way if you punched sharp rocks, cacti, or wild cougars instead of humans. That is to say, if you go around punching cougars, your preference for avoiding pain will end up getting violated, regardless of how strongly you believe that punching cougars is the right and moral thing to do.
The difference between people and cougars (not to mention cacti) is that people are much more capable of introspection. Thus, given that you are more or less an average human, you can actually derive some guidelines of behavior that will greatly reduce the expected number of face punchings that you experience throughout your lifetime. As it turns out, the only way to do so is for everyone to pre-commit to denying themselves the dubious pleasure of punching another human in the face — because, in a world where no one makes such a commitment, everyone is incredibly likely to end up with a broken face.
Once again, this is not an arbitrary statement based solely on our beliefs, but rather a conclusion that we reach based on some basic knowledge of human biology (vis a vis the punchability of faces) and math (specifically, game theory). It’s just a more complex version of “if you punch a cactus, you will likely get hurt”.
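To make the game-theory point concrete, here is a minimal sketch with completely made-up payoffs (only the ordering of the numbers matters; this is an illustration of the kind of reasoning, not a real model of violence):

```python
# Toy 2x2 "face-punching game". Each action is a policy about *initiating* punches;
# retaliation when punched is assumed, and its expected pain is already folded into
# the made-up numbers. Entries are (my expected pain, their expected pain).

PAIN = {
    ("don't initiate", "don't initiate"): (0, 0),
    ("initiate",       "don't initiate"): (8, 5),    # I start it, they probably hit back harder
    ("don't initiate", "initiate"):       (5, 8),
    ("initiate",       "initiate"):       (12, 12),
}

def my_pain(me, them):
    return PAIN[(me, them)][0]

# Whatever the other person's policy is, not initiating gives me less expected pain,
# so a universal pre-commitment against punching minimizes everyone's expected pain.
for them in ("don't initiate", "initiate"):
    assert my_pain("don't initiate", them) < my_pain("initiate", them)
print("Not punching strictly dominates, given these assumed payoffs.")
```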
veronica d said:
Pain is a property of my mind. Yes, material reality governs when I will (likely) feel pain. Likewise my actions in the world influence the causal graph. I can understand this poorly or I can understand it well. But “I dislike pain” happens in my brain.
Similarly, “John hates pain” is a property of *him*, of his mind. But the rocks and trees do not care if I torture John. *I* might care (and I do), but that is because in my mind I have moral sentiments that map the pain of others to my own pain and to my own moral/emotional center.
“John feels pain” has no moral content outside of minds. (Mostly human minds, but perhaps some animal minds also. I don’t know.)
Ghatanathoah said:
It’s possible for something to be objective and mind-dependent at the same time. Math is the classic example. Probability is another. I suspect moral truths are true in the same way that mathematical truths are.
veronica d said:
Why do you suspect that moral truths and mathematical truths are the same? Why do we agree so strongly on math stuff, at least when the subject is finite sets of small cardinality, but seldom on moral issues?
For example, a statement like “1 + 1 = 2” seems mind independent in a way that “It is wrong of me to feel so happy when I humiliate male nerds” is not mind independent. To me they seem quite different, enough so that the burden should rest on those who say they are the same.
(Note: I do not enjoy humiliating nerds. I chose that example because I thought it would resonate with this audience.)
(Also note I am a mathematical finitist, so if you start talking about ZFC or the continuum hypothesis I’ll smile and nod.)
Ghatanathoah said:
>> Why do we agree so strongly on math stuff, at least when the subject is finite sets of small cardinality, but seldom on moral issues?
Because our emotions and self interest are much more mixed up in moral issues than in math. Whether or not Anthropogenic Global Warming is real is also a matter of objective fact, but tons of people disagree about that because it’s an emotional issue.
>>For example, a statement like “1 + 1 = 2″ seems mind independent in a way that “It is wrong of me to feel so happy when I humiliate male nerds” is not mind independent.
What really tips me off that moral questions are a matter of fact, rather than a matter of values, is that I am capable of recognizing that something is the moral thing to do, but still choosing not to do it. Like most people, I care about being a good person and expend a lot of effort towards acting morally. But I am also unwilling to be morally perfect; I do things like not donating all the income I possibly can to efficient charities. I know it would be morally better for me to do this, but I don’t want to, because I don’t seek moral perfection.
If morality was a matter of subjective values that attitude would make no sense. You’d be saying “I value something, but I don’t value it, I want to do something, but I don’t want to do it.”
There are many other instances of people knowing something is immoral, but choosing to do it anyway. For instance, some sociopaths are known to say things like “I know what I was doing was wrong, I just didn’t care.” If you think of morality as purely subjective values, this statement makes no sense. You can’t not care that you value something; caring is part of the meaning of the word “value.”
veronica d said:
This supports my point. Of course, I go beyond this and say there is nothing *but* mind-stuff, but still.
The fact you do not always follow your moral impulses does not make them mind independent. Instead, it simply shows that your mind is complex and non-uniform, which is an issue for psychology.
Myself, I expect that complex and unpredictable behavior would be a significant fitness advantage for social animals. Thus I would expect the brains of social animals to have complex drives with many layers of motivation, to provide diversity of behavior and thus make the agent better able to compete in game theoretic terms.
(See “mixed strategies” for simple models of this.)
In any case, the phenomena are these: we *want to do X* when asked in one context, but perform action Y in a more immediate context. This seems unremarkable to me. No one thinks that brains are simplistic optimization executors. But nevertheless the motives to do X or to do Y exist entirely in the brain.
Ghatanathoah said:
>>The fact you do not always follow your moral impulses does not make them mind independent. Instead, it simply shows that your mind is complex and non-uniform, which is an issue for psychology.
But here’s the thing: They’re not “moral impulses.” It’s not like there is a feeling that I want to do this, and it is overridden by akrasia or some other feeling. It’s just an intellectual recognition that something would be the correct thing to do, even though I don’t want to do it.
And as I mentioned before, sociopaths, who don’t want to act morally at all, seem to understand, but not care, that they are acting immorally. Morality is a description, not a desire.
There are lots of things that in some sense exist only as abstract ideas in our minds, but are not arbitrary and are not values. Probability is one example. It only exists in my mind; it is essentially a restatement of all the information I have about a subject. But even though it only exists in my mind it is objective. Another example is the rules of games. Even if I have no interest in playing chess, I can objectively tell if someone cheated at chess or not.
Morality is an abstract concept that assigns descriptions of “moral” and “immoral” to various events. It is not a value, because it is possible not to care whether something is moral or not. If a person is a sadist who loves hurting innocent people, that does not mean they have a different kind of morality. It means that they don’t care about morality.
veronica d said:
Okay, but here is the thing: take first-order arithmetic. There are things I can prove with first-order arithmetic. But there are things I cannot prove that are nevertheless true. The reason for this is that first-order arithmetic is a formalization of a *real thing*, which is actual arithmetic that we do with ciphers on paper, or on a computer using register arithmetic, or in our minds using {whatever it is in our minds that lets us play Turing machine}.
Math, at least when talking about finite sets of small cardinality, is a real thing that exists regardless of our minds.
Okay, so you assert morality is the same sort of thing. However, it looks nothing like math, as math is about pure computation, pure abstraction. But consider: that is what math is. We can make math entirely formal because it is *what we talk about when we are entirely formal*.
Anything sufficiently formal to look like math *is math*.
Morality is *not* math, thus it is not what we talk about when we are entirely formal. If it was, then it would be a subset of math. If you think morality is a subset of math, you have some work to do showing that. Asserting it is not enough.
Similarly, the rules of chess are *something people write down and communicate to one another*. But if someone plays chess differently, you can say “That’s not, properly speaking, chess,” but you cannot say they are wrong to play it that way. Only that it is not what most people mean by chess.
Morality, in the sense we *want* it to have, has something extra. If I hurt people in an immoral way, you can say “That is immoral,” treating morality as a thing like the rules of chess.
Then I can say, “We play chess differently in my house. Likewise, in my house torture is moral cuz that is how we do it here.”
If morality is what we want it to be, I cannot do that. I can say the words, but I would be wrong.
The astronomers can say “Pluto is not a planet.” Presumably some governing body could change the rules of chess, and if a majority of chess players accepted that then the rules would change. You could adopt the new rules or be left behind.
Are the actions of the North Korean government moral because they say they are? If a majority could be convinced, would that matter? Can large number of people be wrong about morality? Was slavery moral? Is war?
Where does morality come from? Why does it exist?
This is nothing like chess. The things we want from morality are not delivered by arbitrary systems such as games.
Ghatanathoah said:
>>The reason for this is that first-order arithmetic is a formalization of a *real thing* … Morality is *not* math, thus it is not what we talk about when we are entirely formal. If it was, then it would be a subset of math. If you think morality is a subset of math, you have some work to do showing that.
This seems to be how formal moral philosophy works. It formalizes the moral reasoning we do and converts it into explicit logical axioms. Obviously doing this is difficult, partly because morality is much more complicated than arithmetic, and partly because it is much more politically charged. I’ve read papers on population ethics that use explicit symbolic logic and mathematical levels of rigor.
There are good reasons to think that morality is some sort of logical entity. We’ve known that since the time of the Ancient Greeks. Euthyphro’s Dilemma establishes that you can’t change morality by changing the personalities of the gods; from there we can extrapolate that that is true of any external grounding of morality. Morality can’t be something external, because if it was we could change what is right or wrong by finding that external thing, blowing it up, and rebuilding it differently. It must be a logical entity because changing the physical universe won’t change what’s right and what’s wrong.
>>Then I can say, “We play chess differently in my house. Likewise, in my house torture is moral cuz that is how we do it here.”
If you change the rules of chess enough it stops being chess and starts being a different game. Likewise, if you say torture is moral you aren’t talking about the concept of morality. You’re talking about a totally different concept, and are using misleading rhetoric to make it sound like you are talking about the same concept.
One thing I keep coming across in this thread that annoys me is how people blithely refer to the value systems of hypothetical nonhumans, or amoral humans, as “morality,” even if they refer to things that are totally alien to morality (such as the crocodile people eating children in another comment thread here). The crocodile people in the other thread don’t have a different morality; they have no morality, and have some other thing in its place.
>>If morality is what we want it to be,
Question: Are you one of those people who thinks morality has to be intrinsically motivating and persuasive? (Motivational internalism is the technical term for this.) I am not; I think it’s entirely possible to believe something is moral or immoral and not care. I don’t require morality to be persuasive to all possible creatures. I strongly suspect that the majority of the human race is like me; there exist common phrases like “without conscience” and “soulless” to describe individuals who are not motivated by moral facts.
But while I’ve never met one face to face, there seem to be a lot of people in the corners of the Internet I hang out in who believe in motivational internalism. These people are quite prone to be error theorists, since they think refuting motivational internalism is the same as refuting morality, and the idea that there are intrinsically persuasive arguments is obviously bunk.
veronica d said:
Quite simply, there is no reason to think that what you are talking about exists. You are correct that *the morality we want*, if it exists, would be like logic. However, you have to do the work to show there is something to search for. The moral impulse among humans is “explained away” by evo-psych. There is nothing extra in nature begging for an explanation. There is no missing piece in the causal graph that needs morality to account for its existence.
Science is long past saying, “We could account for this only if we assert the existence of *The Good*.”
Joe Teicher said:
I disagree with premise 6. When I was younger I had strong moral intuitions and let them control my actions and my judgments about other people. But after the age of about 18 I didn’t think they were real or meaningful. I just thought of them as prejudices I had picked up in childhood without thinking too much about them. It probably took 10-12 years after realizing they were fictional for them to fade significantly, but eventually they did. Now they hardly bother me and I can pursue personally optimal strategies for living. So I don’t think it’s impossible for most people to get past their intuitive moral realism. It just takes a desire to do so and time.
veronica d said:
Well, note the first three words of #6 were “for most people…” As evidence I present this: there is a rather large secularist community who accept naturalist explanations in all respects, *except* when it comes to morality, where they cannot quite let go. Consider all the hours and hours and hours spent in obscure metaethical arguments trying to reconcile our moral intuitions with a naturalistic/mechanistic viewpoint. I believe this conversation exists because people really *want* a mind independent moral reality when nothing in their worldview supports the idea.
Furthermore, I’d like to challenge the idea you have *literally no* moral sentiment. Perhaps your sentiments have changed from those you had as a child. Perhaps you feel you operate more by reason and less by intuition — but note I never used the word intuition. If you are specifically arguing against *intuition* as a thing, then perhaps you are arguing against something I did not say.
You say now you find “optimal strategies” — but optimization is a process with two elements: 1) a search space and 2) an “objective function” (aka a “fitness function” or “utility function”). For your optimal strategies, you must have some #2.
What is your #2? From where did it come? Do your values and feelings have no influence? If not values and feelings, then what?
Nothing outside of human sentiment says it is better to live a thriving, happy life instead of turning as quickly as possible into ash.
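A tiny illustration of that point (the search space and both objective functions below are invented for the example; nothing in the machinery says which objective is the “correct” one to hand it):

```python
# The same search space under two different objective functions. The optimizer
# itself has no opinion about which objective you "should" give it.

search_space = ["thriving happy life", "muddle along", "turn to ash as fast as possible"]

happiness = {"thriving happy life": 10, "muddle along": 3, "turn to ash as fast as possible": 0}
ashification_speed = {"thriving happy life": 0, "muddle along": 2, "turn to ash as fast as possible": 10}

print(max(search_space, key=happiness.get))          # 'thriving happy life'
print(max(search_space, key=ashification_speed.get)) # 'turn to ash as fast as possible'
```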
Joe Teicher said:
@veronica d:
>I believe this conversation exists because people really *want* a mind independent moral reality when nothing in their worldview supports the idea.
I think people like to argue about stuff on the internet. But anyway, I still think it’s a small portion of the population that thinks about these issues at all.
>Furthermore, I’d like to challenge the idea you have *literally no* moral sentiment. Perhaps your sentiments have changed from those you had as a child. Perhaps you feel you operate more by reason and less by intuition
Well, I’m always happy to be wrong. I associate morality with certain emotions, like guilt if I do something I consider immoral and righteous indignation if someone else does something I consider immoral. These feelings, especially the righteous indignation, have become much more muted over the years. I mostly view people’s behavior as stupid or smart, judging by what I guess would be their own goals, rather than thinking about it as right or wrong. Is there some way other than looking at my emotions that I can determine if I have moral sentiments?
>You say now you find “optimal strategies” — but optimization is a process with two elements: 1) a search space and 2) an “objective function” (aka a “fitness function” or “utility function”). For your optimal strategies, you must have some #2.
Well, I do have preferences. I don’t think that all preferences can be called moral preferences. If I like the sound of a naturally aspirated V-8 I don’t think that anyone would say that that is an expression of morality. I think of moral preferences as having to extend beyond one’s own personal interests, or generalize in some way.
veronica d said:
Fair enough. It’s not my job to tell you what you feel. However, I do want to make sure we are talking about the same thing.
Regarding the divide between what is “moral” and what is simply a preference, I guess we can look at three common factors. What we call moral impulses seem to rest on these foundations:
1. Our capacity to empathize with others
2. Our sense of fair play
3. Our disgust response.
It is easy to see how #1 and #2 would develop among social animals (along with competing systems that mitigate their effects; think in game theoretic terms). I believe that #3 was perhaps opportunistically utilized, as humans are *adaptation executors* not *fitness maximizers*, and our faculty for social intelligence developed to use the pre-existing psychological substrate.
(Of course, this is *very* speculative. But it seems plausible enough.)
In any case, this is not the entirety of human emotional life. Nor is it the entirety of what motivates us. Instead, it is simply the subset of our motives and emotional responses that we characterize as “moral.” Other feelings and values motivate us as well.
(I’m not sure why such things “feel” moral while others do not. I suspect it is this: our moral systems are perhaps more critical to successful social *cooperation* than are simple preferences. Because of this, the desire to “universalize” these beliefs, at least among one’s clan, would then develop. Again, this is pure speculation.)
So to you, I guess you might interrogate your own motives and values to see what degree they rest on #1 – #3. If not at all, then perhaps you indeed have no moral sentiment, at least none that you are conscious of.
Which, maybe you are *unconsciously* motivated thus. I don’t know. I have no problem accepting that some people may indeed not have moral impulses. However, I expect that such people have *other* motives that rest on other emotion-based values.
Which are just as mind-dependent as our moral system.
rash92 said:
I’m pretty much the same as you on the nihilist front, though I’m not a complete utilitarian. I’ve had trouble explaining this to people before, where they think I’m just talking about cultural relativism for some reason, and it takes them ages to get what I’m talking about.
wireheadwannabe said:
I find all this emphasis on desires in moral philosophy odd. In particular, I feel like people tend to gloss over the step where you say “it is good for people to pursue their values.” Otherwise you’re just relying on “certain cells send signals in response to stimuli.” I really think any moral system needs to be grounded in the value or disvalue of particular types of qualia, otherwise I’m inclined to accept error theory.
unimportantutterance said:
Is “certain qualia are good” really more justified?
Ghatanathoah said:
It seems to me that the value of certain types of qualia is much less grounded than valuing things that people desire. The statement “This qualia feels pleasant, but I don’t necessarily value it” is a coherent sentence that makes logical sense. By contrast the statement “I desire this, but I’m not sure it’s valuable” is nonsense. It’s like saying, “I know this is water, but I’m not sure it’s dihydrogen monoxide.”
As I said before, it is not a great leap to assign moral value to people’s desires. It is definitely not a larger leap than assigning moral value to qualia, and it may in fact be a smaller leap.
>> Otherwise you’re just relying on “certain cells send signals in response to stimuli.”
I don’t see how this is relevant at all. The particular method our minds use to process what is valuable is not really relevant to whether or not something is valuable. If we were made out of microchips instead of cells our values would remain the same.
>>In particular, I feel like people tend to gloss over the step where you say “it is good for people to pursue their values.”
It requires less glossing over than saying “It is good to feel happy.” Values are something you value directly. Happiness and other qualia are not necessarily valuable, as I said before it makes sense to ask if happiness is something you value, but asking if things that you value are something you value is nonsense.
wireheadwannabe said:
(I’d like to merge this with our comment thread here.)
So I’m gonna take a step back here. I start by observing that the brain is just a big lump of particles. Looking at this, I see no reason to believe that any particular arrangement of said particles is inherently better than any other. Then I observe that I, and presumably others, experience qualia. I then observe that I experience some states of qualia to be in some sense better than others. I could somehow be mistaken about this belief, but this possibility does not bother me because if the belief were false there would be no negative consequence to it.
You and others seem to be claiming either that
A) the subjective experience of having one’s preferences satisfied is good.
or
B) having one’s preferences become reality is good, separate from the subjective experience of having one’s preferences satisfied.
The problem I have with (A) is that it seems very close to hedonism. It could presumably be taken advantage of with experience machines and/or wireheading, it puts certain subjective experiences above others, etc. The only difference is which subjective experiences you think are good or bad. I prefer to call subjective experiences that we experience as good “pleasure” and ones that we experience as bad “pain.” You might disagree, but I fail to see how this would be anything other than arguing semantics.
I’m not even sure how to address (B) apart from asking what causes you to believe that it’s true. You might say, for example, “I would be unhappy if I learned I was in an experience machine” but for that to carry any weight you have to establish that being unhappy if you were to learn about something makes that thing bad.
stargirlprincess said:
Huh? “I desire this but I am not sure it’s valuable” makes perfect sense. For example, right now I desire ice cream. But I am pretty sure ice cream will make my life worse and less pleasant overall.
Ghatanathoah said:
@stargirlprincess
The reason the statement “I desire this but I am not sure it’s valuable” makes sense in that instance is that you have one desire (to eat ice cream) whose fulfillment would prevent the fulfillment of another, even stronger desire (your desire not to be fat and unhealthy). That statement only makes sense in instances like that. It does not make sense when you are considering your desire separately from any other desires.
@wireheadwannabe
>>I’m not even sure how to address (B) apart from asking what causes you to believe that it’s true.
I know it’s true the same way you know some qualia are better than other qualia: through direct comparison via introspection. I know that there are some states of the world I assign more value to than others, because I can compare my desires through introspection.
I fail to see why you continually insist that your introspective evaluations of qualia are somehow more valid than my introspective evaluations of my desires. They both seem equally valid to me.
LTP said:
Interesting, so for you morality is more like an aesthetic preference, then?
By the way, you can believe there is no objective morality and not be a nihilist. You can be a moral relativist, who believes that there are moral truths, but that they’re contingent on the worldview of an individual, a culture, or a society. You can also believe that there may be an objective morality for human beings as a species, even if it is entirely contingent on humans’ psychology and existence.
My question to you, though, is if you think there is no objectively morally correct thing, then why make moral arguments at all? Are moral arguments meaningless confusions on the part of their participants?
accumulationPoint said:
I like ‘aesthetic preference’ as a framing.
As a moral nihilist, I’d say that moral arguments are useful because they force you to explicitly trace out the connections between your terminal values and the instrumental values/actions you’re taking/plan to take in the world; that is, they function as a sort of self-consistency check.
Also, observationally, terminal moral values are not completely immutable, so by talking about morality and holding moral arguments one has a chance of converting others to terminal moral values (more) similar to one’s own, which is probably a good thing by most moral standards.
Ghatanathoah said:
I’m starting to realize why I’ve found moral nihilism and error theory so obviously wrong, while other people seem to think they are obviously correct:
Other people seem to have a strong intuition that for something to count as “morality,” it has to be intrinsically persuasive to other people. I have no such intuition. When I first read about antisocial personality disorder, and learned that there were people with no conscience who could not be persuaded by moral argument, this did not shake my moral views at all. To someone else, however, the fact that someone else could not find a moral argument persuasive seems to be a definitive argument against morality.
For this reason I don’t find it at all unusual to think that there is something objective about morality. There are lots of other things that have no existence in the physical world, but are still objective in some sense, and that it is possible to be right and wrong about. The classic example is math.
I suspect that most people care about pretty much the same concept when they are referring to morality, and that most moral error is genuine error, like a mathematical mistake, except that people are much more stubborn about admitting moral mistakes than mathematical ones. For instance, a lot of bad moral arguments are based on misapplying useful heuristics, or the noncentral fallacy. Purity-based morality is probably people wanting to hurt people who gross them out, and then using motivated cognition to come up with a moral reason for why it’s okay. Divine command theory is probably generated by the same cognitive problems that make fiction writers make Mary Sues. Very simple forms of consequentialism like hedonic utilitarianism are probably caused by apophenia, where the believer gets a rush from imagining a bunch of connections between unrelated values.
Bugmaster said:
As I said before, I think that morality is even more objective than that. It’s objective in the same sense as saying “touching a hot stove will hurt your hand” is objective; that is to say, it’s ultimately a statement about the physical world which we are all part of. As such, it does not need to be automatically persuasive; after all, even relatively uncontroversial statements such as “the Earth is several billions of years old” are unpersuasive to a lot of people.
stillnotking said:
“Touching a hot stove will burn your hand” is objective, but “Therefore, you should not touch hot stoves” is not objective, and cannot be true in the same sense of the word.
Because moral claims are capable of being persuasive, even when all parties agree on all the facts, they are not objective. You and I may look at the same painting and argue over whether it’s beautiful or not, but we cannot argue over whether it’s three square meters or four — at least, the argument won’t outlast one of us producing a tape measure.
Ghatanathoah said:
@stillnotking
The fact that I can be persuaded by a moral argument doesn’t mean that morality is not objective. It just means that I care a lot about whether I am acting morally or not.
Occasionally when someone interviews a sociopath in prison they say something along the lines of “I knew what I was doing was wrong, I just didn’t care.” This indicates that moral claims are not always capable of being persuasive. Rather, they are claims about objective matters of fact that most people care about, but some people do not.
To use your painting metaphor, suppose you care a lot about how big the painting is, but I don’t. You argue that we should get rid of the painting because it is three square meters instead of four. I disagree about getting rid of the painting, because I don’t care how big it is. That doesn’t mean the size of the painting isn’t objective. It just means I don’t care about it.
stillnotking said:
Perhaps I should clarify what I mean by “persuasion”. I mean convincing another to change their mind without introducing new factual data. If I believe homosexuality is wrong because the Bible says so, but you convince me it’s right by appealing to other values, say individual freedom or the beauty of love, you’ve persuaded me. Both of us agreed the entire time on what homosexuality objectively is. The only thing that changed was my subjective moral valuation of it, because I found your subjective valuation more appealing.
Of course subjective valuations are not always susceptible to persuasion, but objective facts never are. You can’t persuade me that a painting is three square meters if it’s actually four square meters, and I have measured it. (Pace Solomon Asch — those experiments were designed to allow at least a tiny bit of factual leeway.) The only way would be to introduce new data, or at least new uncertainty, such as claiming the tape measure is mislabeled.
These are two distinct categories of argument, as Hume pointed out. You can’t derive “ought” arguments from “is” arguments.
Ghatanathoah said:
@stillnotking
I am arguing that moral arguments are not “ought” arguments, they are “is” arguments. The “ought” argument is “you should do things that are moral.” This is the point I was trying to make in my first post. It does not contradict Hume in any way.
In the case of you believing homosexuality is wrong, I am giving you new factual data. I have pointed out that banning homosexuality does immoral things like decrease love and freedom (this is the new data). This causes you to realize that homosexuality is morally good, since love and freedom are morally good (an “is”). You then decide to stop opposing it because you want to be a good person (an “ought”). In some sense you had this information all along, but until I talked to you, you had not put it in order in your brain.
Here’s another example. I know that if I donated more of my money to charity, and spent less of it buying books, that that would be morally good. But I buy books instead of donating my money to charity. Why? Because I am not motivated to act perfectly moral. I care about being moral to a large extent, but not as much as I possibly could.
If morality were really subjective, what I just said would be nonsense. If I want to buy books instead of donating money to charity, then for me buying books would be moral, and donating to charity would be immoral. But this is not what I think. I know I am acting less morally than I possibly could. This only makes sense if morality is a matter of fact, not a matter of values.
It is motivation to act morally that is subjective. Morality itself is not.
Jadagul said:
As a mathematician, I’m pretty sure I object to your characterization of math being objective. Math is a game played according to rules, but those rules are about the most arbitrary thing there can be.
Ghatanathoah said:
Arbitrary =/= nonobjective. Just because the rules are arbitrary doesn’t mean whether or not someone is breaking them isn’t an objective fact.
Jadagul said:
As I understand it, part of the definition of morality is that it’s uniquely picked out in rule-space. If you’re willing to agree that the thing you’re describing as “morality” is fairly arbitrary (albeit partially determined by “what humans tend to be like”), I’ll agree that it’s an objective fact what that thing is.
(in other words, my preferences also objectively exist).
Ghatanathoah said:
What I am arguing is:
1. Morality is an abstract concept with its own abstract rules like math.
2. According to these rules, things are moral or immoral.
3. Most human beings care a lot about whether their behavior is moral according to these rules.
4. The fact that we care about the output of this particular set of abstract rules, out of all of the rule-sets in concept space, is “arbitrary” by some definitions of that word (although arbitrary is a word with lots of different meanings and baggage attached, so I don’t like using it. I think that you understand what I mean by it though).
So morality isn’t “whatever my and your preferences are.” Rather, one of your most important preferences is “do whatever this one abstract concept says.”
Jadagul said:
I am deeply skeptical of the claim that most people share the same set of rules. Many people share similar sets of rules, but people have strongly different views on what “morality” should look like (see e.g. Haidt). So I recognize a family of questions that people turn to something called “morality” to answer, but I don’t know what particular set of rules you’re referring to. Which is why I don’t think it’s “objective” at all.
As for me, I don’t have morals. Just ethics.
blacktrance said:
“When I say I’m a nihilist, I mean that I believe that there is no such thing as objective morality. All morality is just a kind of preference – when you say ‘X is morally right’ you mean ‘I would prefer to live in a world where there was X.’ It’s erroneous to think that there is an objective system of morals Out There”
These three statements are actually three different positions. The first is actual nihilism. The second is a claim of what morality consists of and is actually incompatible with the first statement – if morality is determined by what you prefer, and there are objectively true statements about what you prefer, then there are objectively true statements about morality, contrary to the first claim. The third merely rejects moral non-naturalism and is compatible with reductive substantive realism (e.g. Peter Singer) and procedural realism (e.g. Hobbes).
Here’s a brief guide to identifying metaethical positions (a rough code sketch of the same decision tree follows the list):
1. Are moral statements truth-apt? (Or are they merely expressions like “Boo X!” or “Don’t X”?)
Yes – proceed to 2, No – non-cognitivism
2. Are some moral statements true?
Yes – proceed to 3, No – error theory.
3. Is the truth of a moral statement independent of one’s opinion? (Note: The content of morality being determined by values, e.g. preference utilitarianism, is No.)
Yes – proceed to 4, No – subjectivism.
—everything above this is non-realism, everything below this is realism—
4. Are moral features reducible to non-moral features?
Yes – proceed to 5, No – non-naturalism.
5. Is morality grounded in facts about the agent’s evaluative standpoint?
Yes – constructivism, No – reductive substantive realism
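Here is a minimal Python sketch of that decision procedure, written as a chain of yes/no checks; the question wording and position names come from the list above, while the function and argument names are purely illustrative:

def classify_metaethics(truth_apt, some_true=None, opinion_independent=None,
                        reducible=None, agent_grounded=None):
    # Walk the guide above as a decision tree; each argument answers
    # one of the numbered questions with True or False.
    if not truth_apt:              # 1. Are moral statements truth-apt?
        return "non-cognitivism"
    if not some_true:              # 2. Are some moral statements true?
        return "error theory"
    if not opinion_independent:    # 3. Truth independent of one's opinion?
        return "subjectivism"
    # everything below this point is realism
    if not reducible:              # 4. Moral features reducible to non-moral ones?
        return "non-naturalism"
    if agent_grounded:             # 5. Grounded in the agent's evaluative standpoint?
        return "constructivism"
    return "reductive substantive realism"

# Example: answering "yes" to questions 1-5 lands on constructivism.
print(classify_metaethics(True, True, True, True, True))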
Nita said:
What if I said, ‘Given a sufficiently clear understanding of what is meant by “wrong”/”good”/”should”, a statement like “X is wrong”/”X is good”/”you should do X” is truth-apt’?
blacktrance said:
That would be a “Yes”.
Nita said:
Thanks! Apparently, I’m a Humean utilitarian/society-based constructivist.
Illuminati Initiate said:
Wait, why is preference utilitarianism inherently subjectivist?
blacktrance said:
It isn’t, I just made a mistake.
Jiro said:
This fails to account for possibilities like “I think that the truth of some moral statements is independent of one’s opinion, but not other moral statements”.
blacktrance said:
Could you give an example?
Fossegrimen said:
My entire problem with this is that I can’t seem to come up with a consistent framework that
a) states that my “just is” is not objectively better than any other people’s, and
b) does not leave an opening for accepting people like ISIS, whose “just is” means killing everyone who is not the correct kind of Muslim.
This causes me more worry than is reasonable.
Ghatanathoah said:
As I’ve said before on this thread, I’m not a moral nihilist. The reason for this is that I don’t believe moral facts need to be intrinsically persuasive, so the argument “Other people do not value doing X, therefore X is not moral” is not an argument I consider valid.
I think morality is an abstract concept that it is possible to be right or wrong about, in the same way you can be right or wrong about other abstract concepts like probability and mathematics. When the concept labels something “moral” or not, our consciences prompt us to do it or avoid it accordingly.
Of course, while my argument does confirm that morality is in some sense objective, it doesn’t really get rid of your worry. It just pushes it back a step. Now instead of worrying about whether you should accept evil people because they have a different morality, you now have to worry about whether or not to accept evil people because they don’t care about morality. I have two responses for this:
1. In the case of ISIS, I don’t think they actually have different moral values from you. I think most humans have fairly similar values, and the reason most people act differently is that they are irrational and deluded about what their values are, not that their values are genuinely different. They are also deluded about matters of fact (such as whether Allah exists or not), which is another reason they act so immorally; many of the things ISIS does would be moral if their beliefs about certain matters of fact were true (saving people from Hell, for instance).
2. Accepting people who are not motivated to act morally is immoral. You do not have any moral responsibility to respect evil people’s desire to do evil. That would result in allowing evil and misguided people to do evil. That is immoral.
——————
Just in case my initial argument for the objectivity of morality did not persuade you, I should also point out that point #2 also works with your original assumptions. If you have a different “just is” from other people, you have no obligation to accept their “just is.” Why would you accept someone like ISIS? If morality isn’t objective, then failing to respect someone else’s “just is” is not objectively immoral.
Jadagul said:
ISIS’s “just is” is as _logically_ valid as yours or mine.
Part of my “just is” is “it just is okay to fight people who act like ISIS does.”
PsyConomics said:
The first time I heard the word “Nihilist” was when I stumbled onto the American Nihilist Underground Society (ANUS for short) website. Despite several of my friends having taken philosophy and working fervently to educate me on the matter, I still have a hard time dissociating the word from what was then a website full of some of the most outrageous trolls of all time, or some of the most despicable people of all time.
Needless to say I almost saturated my keyboard with coffee when I read the title of this article.
Is there a difference between Ozy’s type of nihilism and relativism? Rash92 and LTP talked about it a little bit, though the way I had always heard relativism USED, it seemed to act an awful lot like how Ozy uses nihilism.
It’s old, but what had me identifying as a relativist for a long time was this article by Prinz: https://philosophynow.org/issues/82/Morality_is_a_Culturally_Conditioned_Response
What I like are the following implied definitions:
Good: Descriptor of an action/thing that will further goals/values/insights towards a specific end that a group thinks is important.
Evil: Descriptor of an action/thing that will impede goals/values/insights towards a specific end that a group thinks is important.
I dislike the author’s reliance on empirical evidence. I am not sure if he does this to try and reach a wider audience, but it breaks the argumentative flow. If I want to know “what is” I will go to science. The value of the philosophical presentation should be in elegant argumentation, logical structure, and consistency.
Illuminati Initiate said:
I really can’t take anything that says “imposing our values on others” as if it was a bad thing seriously.
Ghatanathoah said:
I think the reason people do this is that they are pattern-matching to another act that actually is morally bad.
One really important moral value pretty much everyone has is caring about the welfare of other people. We want other people to be better off. And when we say “better off” we mean that they live the kind of life they want to live. We also recognize that the kind of life they want to live is different from the kind of life we, personally want to live. If you want to be a scientist and someone else wants to be an athlete, then you should let the other person be an athlete.
There are some people who don’t get this. They try to force other people to live a kind of life that they don’t want to live. They force a person to study for a career they don’t like, practice a religion they don’t believe in, live as a gender they don’t identify as, etc. (It’s really hard to think of non-politically charged examples of this). A common reason why people act this way is that they fail to realize that not everyone wants to live the same kind of life they do. When they see someone else trying to live a different kind of life, they think that person must be crazy or something and try to correct them.
That is the kind of “imposing our values on others” that is wrong. It is wrong to make people live a life they don’t want to live just because you would want to live your life that way.
But there are other values that it’s fine to impose on others. “Don’t torture, don’t enslave, don’t rape, etc.” These are higher-level values, values about how to treat others, rather than about how to live your own life.
The mistake relativists make is that they confuse forcing people to live a kind of life they don’t want to with forcing people to treat others with basic decency. They assume the second type of force is as wrong as the first, even though it isn’t.
LTP said:
The difference between a nihilist and a relativist is that the relativist believes moral statements do have truth values, it’s just that the truth value of a moral statement is relative to the individual/society/culture, while a nihilist believes that moral statements don’t have truth values, i.e. “killing people is wrong” is neither true nor false, it’s vacuous.
Jack V (cartesiandaemon) said:
Interesting description.
I’m trivially on board with “no objective morality”. But I shy away from calling it nihilism because I think most people will assume that does mean “and so nothing matters” and I don’t know enough philosophy to give a better definition.
I’m also uncomfortable with describing morality as pure preference. I feel closer to that than to the idea of morality as absolute, but I feel it’s incomplete in some way, though I’m not sure what. Maybe that morality is things I think other people should do, even when they don’t affect me directly? Maybe that morality is preferences for how to run a functioning society?
Ghatanathoah said:
I have concluded that morality is not a preference. Rather, acting morally is a preference.
I think morality is a sort of abstract concept people care about a lot. It is absolute in the sense that it is possible to be right or wrong about moral questions, in the same way it is possible to be right or wrong about other abstract concepts like mathematics. But it is not absolute in the sense that moral arguments are intrinsically persuasive. The persuasiveness of a moral argument is not an intrinsic property of the argument; it is caused by our consciences.
stillnotking said:
I suspect that, if the is-ought problem is ever resolved, it will be by clarifying our definition of “is”, not our definition of “ought”. Bill Clinton, philosophical genius?
One possibility is that consciousness is universal and fundamental to reality, as some mystical traditions espouse. I don’t consider this plausible, but I consider it *more* plausible than Sam Harris-style projects to scientifically derive morally true statements from object-level reality.
MCA said:
There’s an additional wrinkle to “evolved morality”, namely that, because of the nature of evolution, we shouldn’t at all be surprised that these “moral senses” can generate conflicting and contradictory reactions. Evolution doesn’t do “perfect”, but rather “good enough”. Your entire body and its history is a litany of cobbled-together work-arounds, “just good enough” solutions, and patchwork modifications to long-abandoned structures. Similarly, we should be unsurprised that the simplistic, heuristic “morals” of loyalty, empathy, honesty, etc. come into conflict with each other in certain circumstances, much in the same way that normally useful parts of our visual system render us prone to optical illusions.
I’d even go so far as to say that any attempt to develop a totally logically consistent morality is inherently doomed to failure, because sooner or later the logical conclusion will conflict with those irrational heuristics and people will reject the “logical moral” position.
stillnotking said:
It’s easy to construct hypothetical scenarios to generate an intuition of “wrongness” that can’t be rationally defended. Jonathan Haidt did a famous study about this. Imagine a brother and sister who love each other, are both sterile, are orphans, and live together in a cabin far from civilization. Is it wrong for them to have sex if they both want to, and if so, why? None of the usual anti-incest arguments apply, yet most of us would be at least a little uncomfortable granting our blessing to the union.
We don’t reason our way to morality so much as rationalize it. Take away the rationalization, and the judgment remains. Of course, this isn’t always true — we can genuinely change our moral commitments by reflecting on them, but even then it isn’t just like flipping a switch. Consider the historical status of homosexuality in Western society, for example.
Ghatanathoah said:
Haidt’s examples actually didn’t work on me. In every single case once the usual arguments against something were removed I fully condoned the behavior of the people in the case. The brother and sister should get it on, that one woman should cut up the flag, etc.
I do occasionally feel an urge to rationalize a judgement. But generally if a judgement conflicts with my moral principles I reject it, not the other way around. For instance, I often have a strong urge to punish people who have horrible views, and spend a little time trying to rationalize things. But then some objective part of my mind reminds me that my rationalizations violate the moral principle of Freedom and the rationalizations vanish. Maybe I’m just unusually good at moral reflection?
stillnotking said:
The point of Haidt’s examples is not that we can’t end up condoning them. The point is that we feel uncomfortable about them, and look for reasons not to condone them. Our mouth may say “yes”, eventually, but our gut says “eww”.
Perhaps you don’t experience that response, but most people do, according to Haidt’s research. I know I’d be awkward around a brother-sister couple, even if I had no good reason to judge them.
Ghatanathoah said:
>>I’d even go so far as to say that any attempt to develop a totally logically consistent morality is inherently doomed to failure, because sooner or later the logical conclusion will conflict with those irrational heuristics and people will reject the “logical moral” position.
I used to think this, but then I read Eliezer Yudkowsky’s discussion of morality and realized that an implicit premise many people seem to have about “logical moral positions” is wrong. That premise is that they need to be simple and fit into a small number of sentences. Now that I know that a logical moral position can assign value in as long and as complicated a fashion as necessary, I think it might be possible to work out a system that takes enough heuristics into account that people don’t reject it.
Mike H said:
“It’s true that human morality tends to share certain traits cross-culturally, which at first blush is evidence for some kind of objective moral system. But– well, first, I’m disinclined to accept that argument because I’d suddenly have to start believing that obedience to authority and maintaining purity are morally good instead of two of the largest sources of evil.”
There’s a way around your observation about acceptance of authority. It’s that it contradicts all other intuitions we have about human interaction. Namely, it’s wrong to physically force other people to do things against their will under nearly all circumstances. You’re right that this is an example of how human social development led to an incorrect intuition, because the people who respected authority were the ones who received the benefits of the people in authority. It was self-preservation and not, I argue, one of the intuitive moral truths. But that doesn’t mean *all* our intuitions are unobjective and socially constructed. When all the evidence is weighed, it’s shown that the natural inclination to accept authority is deeply flawed. This is not true for the intuitions that generally lead to what you see as utilitarian. (I am not a utilitarian, but that does not mean utilitarian considerations never have their place.)
One can conclude nihilism, but one does not have to. External evidence isn’t the only source of evidence. Intuitive experience is its own source of evidence. If we don’t believe our intuitive experiences then we can’t believe external evidence, because believing external evidence is nothing but an intuition. You can choose radical skepticism (nihilism), but if you don’t have to and you still do, well, that sounds downright unutilitarian.
ozymandias said:
“Namely, it’s wrong to physically force other people to do things against their will under nearly all circumstances.”
That is… really, really, REALLY not a cross-culturally evident moral belief.
Mike H said:
Ha, ok… then I guess I should add that as time goes on the average morality of people improves. Western civilization is acting more in accordance with objective morality than most cultures in the past in most respects. It’s not a linear or noiseless improvement, but when I say it’s a generally accepted thing, I mean among our contemporary compatriots.
thirqual said:
Important caveat: most cultures have frameworks to help decide that groups of humans do not count as real people. I would say that the amount of text devoted to saying basically “Urist Uristson from the next valley looks like you but is nothing but a beast” can be taken as an indication that people feel the need to justify their actions towards other persons.
Mike H said:
@thirqual That’s true, and is a place where western civ still needs much moral improvement. People from other countries are just as much people as citizens and it is morally correct to allow these people to travel and interact peacefully as they wish, not keep them out with guns.
fubarobfusco said:
Rather than taking morality as a philosophical concept, we could take it as a sociological one.
Morality is a collection of behaviors that people do, primarily in public with one another. Some of these behaviors include praising, rewarding, punishing, shunning, shaming; exhibiting guilt or righteousness; pursuing recompense or revenge.
Morality isn’t like arithmetic, with abstract eternal truth; it’s like language.
Language is created by what people actually say in real-life situations, not by abstract rules or definitions. People who study language (linguists) do not ask, “Are these people using language correctly?” and they certainly do not ask, “Is communication possible?” or “Is any sentence true?” Rather, they try to discern the patterns or rules by which language works and develops, taking people’s actual language use as their data.
Similarly, a scientific study of morality would not ask “Which moral claims, if any, are true?” but rather “People do exhibit morality behavior. They praise, reward, punish, and shame one another. They show guilt and pride. What are the patterns and rules that describe this behavior?”
blacktrance said:
One flaw with this approach is that it has no room for the investigation of questions about how things ought to be. If morality is limited to societal norms, then whatever is socially normal is identical to what’s moral.
fubarobfusco said:
I’m not advocating moral relativism as an ethical position here. I’m saying that philosophical ethics doesn’t seem to improve our moral situation as much as investigating morality as a social behavior of individuals would.
(By “a social behavior” I don’t mean “a behavior exhibited by societies”. I mean “a behavior that individuals exhibit towards one another in society.”)
It isn’t clear to me that philosophical ethics actually has improved our sense of “how things ought to be” all that much beyond where it was thousands of years ago. The Golden Rule is ancient, but only in the mid-20th century did we get the math to explain it; and that came from economists and political scientists, not ethicists.
But understanding what sorts of processes (psychological, social, memetic) lead to people doing things like condemning or punishing others (or, on the other hand, rewarding or praising them) seems like it might very well improve our ability to make a society more worth living in.
For instance, consider the regression problem of punishment. If we punish people when they are having an unusually bad day, we will tend to see improvement afterward, due merely to regression to the mean — but we will tend to think that our punishment caused it.
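A minimal simulation sketch of that effect, with made-up numbers purely for illustration: behavior is modeled as a fixed baseline plus day-to-day noise, “punishment” is applied only on unusually bad days, and the next day looks better on average even though punishment does nothing at all in the model:

import random

random.seed(0)
baseline, noise, days = 0.0, 1.0, 100_000

improvements = []
for _ in range(days):
    today = random.gauss(baseline, noise)
    if today < -1.5:                              # an unusually bad day: "punish"
        tomorrow = random.gauss(baseline, noise)  # punishment has no causal effect here
        improvements.append(tomorrow - today)

# The average change after "punishment" comes out strongly positive,
# purely because extreme days regress toward the baseline.
print(sum(improvements) / len(improvements))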
How much of people’s actual morality behavior, including the moral intuitions that people use to check the results of philosophical ethics, is driven by processes like that?
Chris Thomas said:
This is a thoughtful post Ozy, and it takes a position that I don’t embrace, but find hard to fully reject.
Here is my effort at developing a simple argument for moral realism. This is not an argument for the content of moral realism, just a conceptual sketch that should show why the idea of an objective morality makes more sense than not, even if we don’t know much about what that morality looks like.
First let’s note the apparent fact that if I want to recommend something to you, a banana for instance, I can describe various (hopefully appealing) facts about the banana, but I can’t simply tell you that you should want the banana. Now suppose that your preference is to eat a sweet, yellow, curved, rod-shaped fruit with lots of potassium. Given this, it seems that I can tell you that because of the nature of bananas, you should want one. That is, assuming the banana does in fact have the characteristics that you desire, it seems that I do have some objectively grounded way of telling you that you should want it.
With this in mind, here is my brief argument for moral realism (a toy illustration of point b follows the list):
a. All people want to optimize their preference satisfaction.
b. There are natural and logical constraints on how people can optimize their preference satisfaction. i.e. there are better and worse ways to achieve this ubiquitous goal.
c. When examined, these constraints will yield various true principles for guiding the optimization of preference satisfaction.
d. These principles will be real facts about how people should live if they want to achieve the goal that they do in fact want to achieve (the optimization of preference satisfaction). In other words, these principles will be objective moral facts.
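A toy sketch of point b, with invented preference weights and costs purely for illustration: once an agent's preferences and constraints are fixed, feasible plans can be ranked by how much preference satisfaction they achieve, so there really are better and worse ways to pursue the goal the agent already has:

from itertools import combinations

# Invented example data: preference strengths and time costs for a few activities.
preferences = {"read": 5, "socialize": 4, "exercise": 3, "scroll_feeds": 1}
cost = {"read": 2, "socialize": 3, "exercise": 2, "scroll_feeds": 1}
budget = 5  # free hours available

def satisfaction(plan):
    return sum(preferences[a] for a in plan)

# Enumerate every affordable combination of activities.
feasible = [
    plan
    for r in range(len(preferences) + 1)
    for plan in combinations(preferences, r)
    if sum(cost[a] for a in plan) <= budget
]

best = max(feasible, key=satisfaction)
print(best, satisfaction(best))  # ('read', 'socialize') scores 9; worse feasible plans score less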
Now even if this argument is right, it may still be consistent with what you mean by moral nihilism. It may turn out that the principles generated from c. will be so time, place, and context specific that they will not be practically useful, or anything near generalizable to all people, or all rational beings, or whatever. A moral philosophy that discovers a series of moral principles this diverse and context sensitive might as well be moral nihilism.
Also, I might be simply talking past you, but I tried not to do that. What do you think?
po8crg said:
“Whenever I say I’m a nihilist, someone immediately concludes that I’m contradicting myself because I have a moral system” – because many people believe that “I’m a [moral] nihilist” means “I have no moral preferences”. They are wrong – they have misunderstood moral nihilism – but the reason they are struggling to understand you is that they have fundamentally misunderstood the most important word.
I tend to the view that morality is much like aesthetics. There is a consensus that some things are more attractive than others. Like most consensuses (consensi?), there are some people who don’t agree. Also, if you take a group of people with similar cultural backgrounds, they will reach a consensus aesthetics which is much more detailed. {For instance, is rococo elegant, or overly ornamented and kitsch? You’ll get a fair consensus within a culture, but not across them} I think you can say the same for morality – there’s a near universal consensus that randomly killing people for fun is bad, but societal control over sexuality can be seen as a moral imperative, or as a moral anathema by different societies. Yet, you’d find a near-total consensus within a culture on that second question.
jossedley said:
Shaming is risky because it doesn’t help identify mistakes – it’s as easy for people with mistaken moral principles to impose shame as for people not mistaken. All it really requires is popularity. (Limited qualifier – popularity may be negatively correlated with mistakenness, but probably not much. There’s some fancy Latin about arguing that something is true because it’s popular).
Granted, as a moral nihilist, you’re not likely to believe you are mistaken about your ultimate moral preferences, but you could be mistaken about whether your preferred rules are conducive to those preferences, and it’s even possible that after consideration of someone’s arguments, you would shift your own preferences.
1Z said:
The OP isn’t strictly about moral nihilism, because it’s actually about metaethical subjectivism.