[blog note: My friend Sara Luterman is kickstarting NOS Magazine, a magazine by and for neurodivergent people. Potential contributors include a lot of awesome disability rights people and me. Consider backing it! If you pledge $45 you get a stim toy!]
[content note: this essay contains justification of people’s right to commit suicide.]
There is a lot to like about Transhumanism As Simplified Humanism. For one thing, it bashes bioethicists, and bioethicists are pretty much universally worthy of bashing once they leave the time-honored “being a Nazi is bad, don’t be a Nazi” territory. However, its fundamental argument makes me cringe.
Suppose a boy of 9 years, who has tested at IQ 120 on the Wechsler-Bellevue, is threatened by a lead-heavy environment or a brain disease which will, if unchecked, gradually reduce his IQ to 110. I reply that it is a good thing to save him from this threat. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle saying that intelligence is precious. Now the boy’s sister, as it happens, currently has an IQ of 110. If the technology were available to gradually raise her IQ to 120, without negative side effects, would you judge it good to do so?
(Unfortunately, age makes this a bit complicated, because we don’t grant nine-year-olds the same autonomy we do adults, and so we have to make decisions based on what we hope the nine-year-old’s adult self would want. If most people prefer an IQ of 120 over one of 110, then we should cure his brain disease and raise his sister’s IQ. I am going to solve this problem by pretending Eliezer said “eighteen-year-old” instead.)
I would much prefer to have an IQ of 120 rather than an IQ of 110. Eliezer Yudkowsky would prefer to have an IQ of 120 over one of 110. Perhaps most people in the world would prefer to have an IQ of 120 over one of 110, although that needs to be shown and not simply assumed based on my and Eliezer Yudkowsky’s preferences. However, it does not seem true to me that everyone in the world prefers an IQ of 120 over one of 110. People want a lot of different things! Humanity contains death metal fans and Leon Kass and people who willingly consume zucchini chocolate cake with tofu frosting. Are you honestly expecting me to believe that in all of humanity there’s no one who will say “actually, I prefer to be only one standard deviation above average in IQ, thank you”?
Lots of people have an impairment they would prefer to keep. The obvious example, of course, is some people who have glasses and haven’t gotten LASIK, as well as people who write articles in the New York Times about how we are medicating away childhood/great art/whatever. But I am pretty sure most people reading this can think of an impairment they’d like to keep, or that they agree reasonable humans would want to keep: an IQ that is lower than superintelligence? a sexuality that is not capable of enjoying every conceivable sex act with every conceivable person, with a dimmer switch for when you need to concentrate on something else? a face that doesn’t look like one of those averaged-out Most Attractive Person faces?
I admit this may seem a bit nitpicky. After all, it may very well be that most people would prefer to have an IQ of 120 to one of 110. However, the issue of the right to be impaired is a live issue for disabilities from Deafness to autism; the only reason it isn’t an issue for trans people is that we’re going “LA LA LA LA LA LA LA WE AREN’T A NEURODIVERGENCE IT IS TOTALLY TRANSPHOBIC TO CALL US A NEURODIVERGENCE” in defiance of all logical classification schemes. I don’t think that anyone would deduce from first principles that loss of hearing is a valuable part of many people’s lives that they want to pass on to their children, and paraplegia isn’t.
Eliezer also discusses death:
If a young child falls on the train tracks, it is good to save them, and if a 45-year-old suffers from a debilitating disease, it is good to cure them. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle which says “Life is good, death is bad; health is good, sickness is bad.” If so – and here we enter into controversial territory – we can follow this general principle to a surprising new conclusion: If a 95-year-old is threatened by death from old age, it would be good to drag them from those train tracks, if possible. And if a 120-year-old is starting to feel slightly sickly, it would be good to restore them to full vigor, if possible. With current technology it is not possible. But if the technology became available in some future year – given sufficiently advanced medical nanotechnology, or such other contrivances as future minds may devise – would you judge it a good thing, to save that life, and stay that debility?
But what if the forty-five-year-old, or the ninety-year-old, or the hundred-and-twenty-year-old, wants to die?
I think this concedes one of the strongest points anti-deathists have against deathists. Are you afraid that an immortal life would become boring, or take away your urgency to accomplish things, or prevent you from enjoying an eternity of bliss with your deity? Great! You can die. We aren’t going to stop you. But there is no call to go around imposing your values on people who would like to stick around until the sun goes out.
I’m inclined, if anything, to reverse this argument: if a 120-year-old can, after careful thought and consideration, decide that they’ve had quite enough of the world and they’re done now, then so can a twenty-year-old. (Obviously, many people have distorted beliefs about whether they should kill themselves, and I have no ethical problem with waiting periods or screening to make sure that the person isn’t being pressured.) Instead of leaving one’s time of death up to chance, or requiring people to live out endlessly increasing amounts of time, we can trust individuals to decide for themselves whether they want to.
I don’t accept “life is good, death is bad; health is good, sickness is bad.” But my transhumanism is simplified humanism too. Is not autonomy a fundamental humanist value? My understanding of autonomy doesn’t come with a hidden rider of “…as long as you don’t make any choices we disapprove of too heavily.” You don’t have your options limited to choices that are “natural” or don’t squick out bioethicists. And you can make a choice about the most fundamental decision facing human beings – the decision to live or to die – instead of leaving it to chance.
blacktrance said:
Is it possible for people to be mistaken about what’s good for them? Is it possible for us to generally know better in some categories? Answering both questions with “yes” (as I do) doesn’t commit you to forcing people to have these enhancements done on them. But there is definitely a tension between transhumanism and the “disability acceptance” part of disability rights.
ozymandias said:
Yes, but [insert standard Millsian/Hayekian argument about autonomy here, I’m pretty sure you know it as well as I do].
AlexanderRM said:
Personally I don’t see the tension at all- my version of transhumanism is believing that people who *want* to be smarter should be able to be smarter, nothing to do with the inherent value of intelligence.
Actually I honestly was not aware there were a significant number of transhumanists who opposed voluntary suicide; I just assumed (I think maybe after being introduced to it by Greg Egan) that it’s basically required in order to be a transhumanist. I suppose I wouldn’t be surprised if there are transhumanists who believe in a mandatory 100-year trial run before you can decide on death, though (which is really just different from the waiting period in degree, not kind).
Matthew said:
Minor nitpick:
The obvious example, of course, is anyone who has glasses and hasn’t gotten LASIK,
Um, no. LASIK is a huge upfront cost not covered by most health insurance. Many people who would like it (like — pulling someone out of the hat at random here — me) will end up spending more over time on contact lenses because we don’t have any choice in the matter.
ninecarpals said:
Didn’t want to add an aside to my own main branch comment, but as a guy with glasses who really wishes he didn’t need them because they’re inconvenient and detrimental to his quality of life…yeah, what Matthew said.
ozymandias said:
Fine, added “some”. 😛
ninecarpals said:
I’m a LARPer, a fencer, and a guy who liked to doze off watching TV before the myopia nation attacked. They’re petty reasons to want a cure for a disability (and glasses aren’t even close to the medical problem I have that I’d most want cured), but dammit, I want to wear masks again without risking an eye infection from contact lenses!
ninecarpals said:
I understand your aversion to bioethicists even less than I understand your aversion to eggplants, because in my experience bioethicists believe a wide variety of things for a wide variety of reasons, and sometimes both the things and the reasons line up with what you believe. Unless your objection is to something so intrinsic to the field that it can’t be decoupled from its practitioners, I really can’t understand where you’re coming from.
I mean, I was brought in to give a guest lecture in a bioethics class about the similarities between xenomelia and transgenderism, and work the class through an exercise around healthcare expenditures for people whose brains demand that their own body parts be removed or altered. The “trans people are neurodivergent” thing was half the thesis of one of the papers I wrote for my own bioethics class the year before.
And hey, shitting on an entire profession – especially one that’s had a remarkable impact in pushing the medical world a little further away from horrific, systemic abuses – wouldn’t be a kind thing to do even if you were right.
blacktrance said:
I took a bioethics class in college. The positions we were exposed to ranged almost entirely from “But won’t it increase inequality?” to “It represents a drive for mastery and a lack of openness to the unbidden” to “the wisdom of repugnance”. The only really pro-enhancement person we read was Ronald Bailey, and he isn’t a full-time bioethicist.
ninecarpals said:
Different classes may have something to do with it. Most of my ire came from Ozy’s lack of specificity – everything short of directly anti-Nazi work (whatever that means) was a target.
ozymandias said:
I am very much in favor of informed consent.
ninecarpals said:
Which part of my comment is the ‘informed consent’ line responding to? I can read it a few different ways.
ozymandias said:
What “don’t be Nazis” means.
ninecarpals said:
Ah, thank you for clarifying. 🙂
R Stuart-Cohen said:
Can you think of even one subject in which you have advanced expertise where the syllabus of the average introductory course is even remotely reflective of current thought in the field?
LTP said:
I don’t get it either. Bioethics is just applied ethics focusing on a certain range of issues.
nancylebovitz said:
The bioethicists I run across in the media tend to be pro-death.
ninecarpals said:
Are you referring to bioethicists advocating in favor of assisted suicide for people who want to die, or for a different position? “Pro-death” covers a heck of a lot of ground.
nancylebovitz said:
They’re opposed to life extension.
shemtealeaf said:
Are there really people who have given serious thought to life extension and are opposed to it? Not people who think it’s unrealistic or too expensive or wouldn’t want it for themselves, but people who just think it’s a bad idea across the board? I’d be very curious to read a well articulated argument coming from that viewpoint.
Fisher said:
Because professional bioethicists seem to exist purely to provide support/justification for someone who wants to 1) ban or 2) mandate something? Someone who is in favor of increasing possibilities would naturally be opposed to someone who is dedicated to limiting them.
liskantope said:
There was only ever one scientifically verified 120-year-old, but it seems that most 110-year-olds are not all that eager to live much longer. I wonder if there’s a sort of fatigue that comes with having memories stretching over a century which results in a readiness to die, even in the absence of physical frailty or ailments.
sh said:
It would massively surprise me if any of those 110-year-olds were not extremely physically (and probably mentally; human brains degenerate with age like any other organ) frail by the standards of, say, 20-year-olds.
In addition, these people have probably watched a lot – likely most – of the people they’ve had close relationships with suffer involuntary death.
Them getting kinda sick of life after suffering that much degeneration and losing that many people – and knowing that they won’t be able to continue for much longer anyway, or ever get their earlier capabilities back – really doesn’t provide much evidence for how they might think about continuing life if none of those things were the case.
wfenza said:
I think there’s a reason EY chose a nine-year-old, and I think it’s the same reason you changed the age to 18. All of your arguments make sense if we’re talking about adults who are capable of making their own decisions. But EY’s example isn’t about an adult capable of making their own decisions. EY’s example is about a nine-year-old whose parents have to decide whether to raise their IQ. Lots of people might have disabilities they want to keep, but that doesn’t mean that parents shouldn’t cure their children of disabilities if they have the opportunity.
I guess I’m saying I don’t see any tension between your argument and EY’s argument. You both seem to be talking about different things, and you both seem correct.
ozymandias said:
Yes, but you shouldn’t cure autism, you shouldn’t cure transness, you shouldn’t give intersex children normalizing surgery, and an irreversible cure for Deafness is at *least* ethically problematic. That isn’t a “cure disability” position, it’s a “maximize the adult’s eventual ability to choose as much as you can, and if you can’t, do your best to do what the adult would probably prefer,” which I still think is meaningfully different than EY’s.
taradinoc said:
Curing a child’s deafness doesn’t take away the adult’s choice, does it? They can always voluntarily deafen themselves, if they decide that’s really what they prefer.
ozymandias said:
It seems perfectly plausible to me that many people would prefer to be Deaf if they had acquired ASL in the critical period for language, but not if they didn’t; learning languages as an adult is very very hard, and ASL is a requirement to participate in the Deaf community. Also, the medical system will not assist people in becoming Deaf and, take it from a suicidal person, it is really hard to figure out how to safely and painlessly do medical procedures the medical system refuses to help with. Alsoalso, the idea of voluntarily making oneself impaired is extremely far outside the Overton window (hence the difficulty BIID people have with getting amputations).
blacktrance said:
If I had the option to cure my child’s autism, I would. If I could avoid condemning them to a lifetime of sensory issues, difficulty interacting with people, etc, and I didn’t do it, I’d be a failure of a parent, the same as if I had the chance to fix my crippled child’s legs and let it pass me by.
ozymandias said:
I can’t find a poll of how many autistic people want a cure, so I’m working off the heuristic that the prominent autism advocacy groups that actually have autistic leaders are staunchly anti-cure. That seems to me to be evidence that your viewpoint is the minority, and that your autistic children could be predicted to prefer to have autism. (Conversely, if it turned out that most borderlines wanted less intense emotions, I would have to support a hypothetical cure for BPD, even though I like my intense emotions.)
ninecarpals said:
@Ozy
I want a cure for physical gender dysphoria*. Unsurprisingly, the trans community has decided to dedicate itself to other matters, but that doesn’t mean I’m alone in my opinion.
*I won’t say “transness” because being a gender other than the one assigned to you at birth is a hell of a lot more benign than being hounded by the need to tear apart your own body, or the reality of going through with that tearing apart.
ozymandias said:
ninecarpals: I support you receiving a cure for physical gender dysphoria if you want one, and I support children receiving a cure for physical gender dysphoria if it is autonomy-maximizing to do so or if X% of people want a cure for physical gender dysphoria, Y% of people don’t, and X > Y. I think it is likely that X < Y, which is unfortunate for people like you in the event that there is a childhood-only cure for physical gender dysphoria, but giving people a cure which the majority of people wouldn’t want seems perverse.
I really wish there were more surveys about this sort of thing so I could stop guessing based on activism priorities.
ninecarpals said:
@Ozy
Surveys would be nice, but I don’t see a pressing need for them. If we really tried we could probably figure out where dysphoria comes from, and if we tried some more we could probably fix it in at least one stage of development, but…priorities.
Anyway, there’s also the question of severity here, much as there is with autism. My hackles go up at the thought of those who are minimally affected being permitted to dictate policy for those of us whose lives have been severely impacted, or even for those who have been through a rough time and would willingly go through it again having that kind of vote. I had a cancer scare because of my testosterone, and the options I was given for fixing whatever was going wrong with my hormones were taking estrogen (unacceptable because of my dysphoria) or having my ovaries removed (which meant a lifetime of dependence on testosterone injections). An ethical system where someone’s preferences that could give me cancer are treated as being exactly on par with mine is an ethical system I’m…reluctant to embrace.
blacktrance said:
And if you surveyed the leaders of most Christian groups, you’d find that they’re against conversion to atheism. Sometimes people are wrong about what’s best for them, and if we know that something is unpleasant (as autism is), then we’re best off without it.
ozymandias said:
I am sort of confused about how we know a trait is unpleasant other than asking people who have that trait. What’s stopping me from saying that liking green beans is actually very very unpleasant and I would be a bad parent if I didn’t cure my child of liking green beans?
blacktrance said:
That’s how we know about some of the negative aspects of autism, for example, sensory issues caused by not being able to ignore repetitive sounds. But we also see that autistic people have difficulty interacting with others, which is a negative trait even if they insist otherwise, just like if a paraplegic were to insist that they’re fine not being able to walk. Being able to do more is always good.
ozymandias said:
blacktrance: I think we’ve hit a basic values difference here. I don’t think it’s obvious that being able to do more is always good.
Ghatanathoah said:
>you shouldn’t cure transness
If you mean that you shouldn’t change a transperson’s mind so that they accept the gender identity they were assigned at birth, I would agree with you. But it seems to me that doing the opposite and making sure everyone gets assigned the correct* gender identity at birth is “curing” transness in some fashion, and that seems unproblematic to me. A world where everyone was assigned the correct* gender identity would be a world where transpeople no longer existed. But that doesn’t seem to be a problem to me because transpeople are defined by their desiring something they don’t have, or used to not have (a correctly assigned gender).
Similarly, the body dysphoria many transpeople experience seems to be caused by getting a body that is different than the body they want. If we had some way to analyze brains at birth, determine what kind of body they will end up wanting, and modify their body so it develops into that kind of body, I don’t see the problem. It will mean no more transpeople, but again, that’s not because transpeople are being killed or something. It’s because transpeople are defined by something they don’t have, or used to not have (the kind of body they want), and in this world everyone has the body they want from the start.
*I define “correct gender identity” as “whatever gender identity the person themself would end up choosing to identify with when they have gotten old enough to think rationally about gender identity.” So, for example, an AMAB transwoman’s “correct” gender identity is “woman.”
blacktrance said:
What’s the moral difference between changing people’s minds about what gender they are and changing their bodies to be in accordance with what they’d identify as? If both were equally safe and cost-effective, it seems to me that changing their minds would come out slightly ahead, since they’d still be able to have children then. (Unless the genitals they’d receive would be as good as or better than the natural genitals associated with that gender, in which case it really wouldn’t matter.)
Ghatanathoah said:
@blacktrance
>Being able to do more is always good.
I probably agree with this in principle. But I can think of a few times when being able to do more is bad in practice.
Being able to do more might make people fall victim to akrasia more easily under some circumstances. For example, suppose someone really wants to be part of the Deaf Community. One needs to know sign language to fully participate in it. In principle a hearing person can learn sign language. But in practice I suspect akrasia prevents a lot of hearing people from doing so. By contrast, akrasia is not a factor for deaf people, since if they don’t learn it, it will be very hard for them to communicate with people face-to-face. Their need for communication forces them to overcome akrasia. (For another example: I can personally attest that the most successful I ever was at dieting was when my doctor told me I’d get pancreatitis if I wasn’t careful with what I ate.)
Another obvious example is game theory, where people put limits on themselves in order to convincingly commit to courses of action. For instance, ripping off your steering wheel to win a game of chicken, or submitting to an authority that punishes defecting prisoners in order to safely cooperate in the Prisoners’ Dilemma.
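(A tiny worked sketch of that steering-wheel commitment, in Python with made-up chicken payoffs – the moves and numbers are illustrative, not anything from the thread:)

    # Hypothetical payoffs for a game of chicken, as (my payoff, their payoff).
    PAYOFFS = {
        ("swerve", "swerve"): (0, 0),
        ("swerve", "straight"): (-1, 1),       # I back down, they win
        ("straight", "swerve"): (1, -1),       # they back down, I win
        ("straight", "straight"): (-10, -10),  # crash
    }

    def best_response(my_visible_move, their_options):
        # The opponent's payoff-maximizing reply once my move is fixed and visible.
        return max(their_options, key=lambda theirs: PAYOFFS[(my_visible_move, theirs)][1])

    # Ripping off the steering wheel deletes "swerve" from my option set, which
    # makes my commitment to "straight" credible; the opponent's best reply is then:
    print(best_response("straight", ["swerve", "straight"]))  # -> swerve

The point being that having *fewer* options is exactly what wins the game here.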
>What’s the moral difference between changing people’s minds about what gender they are and changing their bodies to be in accordance with what they’d identify as?
Generally destroying existing terminal values and replacing them with new ones is considered morally bad. The new ones don’t quite compensate for the loss of the old ones.
To use a different example, imagine I offered to modify your brain so you stop wanting to study philosophy and want to read trashy romance novels instead. I imagine you’d probably object, even if I pointed out that trashy romance novels are easier to read. Similarly, if someone wanted to be a better runner, the correct response would probably be to help them work out, not try to encourage them to not want to run.
To name the most extreme example, it is generally considered a bad thing to kill someone, even if you conceive a new person to replace them. Gaining one new set of preferences cannot completely compensate for the loss of the old one.
Fossegrimen said:
@ blacktrance & ozy.
The thing with autism is that it kind of comes as a package of several things. The lack of a properly functioning sensory filter is possibly unpleasant, but I’ve never had one so I can’t really say. From other people describing how it is to enjoy being in crowds, I would be willing to get that bit cured if it was possible.
But not at the expense of attention to detail, heightened reasoning ability, or the ability to keep a full logical tree structure in my mind at one time and traverse it without losing my place when distracted. This is because I truly believe normal people are not fully functioning in these respects.
Now, for a better ability to communicate with other people the issue is a bit more mixed. I have zero problem with communicating with other aspies, but I do have some issues with a lot of normals. I have been able to learn some tricks to compensate so I can now function reasonably in most settings where the lack of a sensory filter doesn’t mess things up. These tricks turn out to be easy to learn when you are aware of them, and I have taught my aspie kids how to interface with normals. (look @ eyebrows, nobody is able to see that you’re not making eye contact etc.)
I suspect this ties in with the IQ issue in the main post. It is quite hard to keep up a sensible conversation with people who are a couple of standard deviations away from you. If most of your friends got their IQ increased by 10 points, you would probably want to as well; if you were the only one who got the option, you’d probably need new friends, which might be too much of a sacrifice.
Similarly, if normal people could get cured of their deficiencies of logic and arbitrary & counterproductive communication requirements, this would be much better than ‘curing’ the aspies.
The conclusion must be that if one could selectively cure only the bad bits about autism, I’m all for it. If it’s an all or nothing situation, it would have to be a case by case evaluation of the pros and cons. Severe autism might be worth it, the condition formerly known as Aspergers, probably not.
Other topic: You should probably not be allowed to commit suicide @ 20 because the prefrontal cortex is not fully developed for another 5 years or so, increasing the odds of stupid decisions. I’d add a few years for a safety buffer too, so 28 or 30 as a reasonable age limit perhaps?
MugaSofer said:
>you shouldn’t cure transness
Um, yes, you should. What on Earth makes you say this?
Being trans is literally nothing but discomfort at your current sex/gender. That’s what it is. And yeah, I’m bloody well in favour of reducing that discomfort – by coming up with better surgeries, better social norms, and yes, if we had it, some magical Not Trans Anymore(tm) ray or therapy that reassigns the structures in the brain that determine gender.
If we actually had such a ray, then it would be perfectly within people’s rights to make themselves trans later. But, funny thing, no-one does this. We already have the technology to let cis people get reassignment surgery and experience the joys of feeling like they’re the wrong gender, and no-one does this.
wfenza said:
I don’t really agree with this framing. Maximizing the adult’s eventual choice is one consideration, but it’s not a parent’s only job. A parent making decisions for a 9yo also has to worry about the child’s quality of life pre-adulthood, and make decisions that are in the best interests of the child.
I think a parent’s job is similar to attempting to reflect the coherent extrapolated volition of the child. Either that, or attempting to model the future adult and deciding what they would want you to do. Of course, the future adult will be changed by curing a disability, so sometimes that isn’t a great option.
Really, I think a parent’s best course of action is to make the decision that has the lowest probability of regret by the eventual adult. In other words, make the decision that’s least likely to cause the future 25yo to say “I wish you hadn’t done that.”
For something like deafness, it seems obvious to me that one should cure it if possible, and not to do so is unethical (for a nine year old). The proper analysis isn’t only to ask deaf people and defer to them. That’s only one relevant group. The other group is people who were born deaf, but gained hearing. If 25% of deaf people wish they’d had a childhood cure, but only 10% of people who had a childhood cure wish they didn’t, then the best choice is probably to go with the cure. That same analysis can be done for any disability.
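(A minimal sketch of that regret comparison in Python, using the hypothetical 25%/10% figures above – real inputs would have to come from actual surveys:)

    # Hypothetical regret rates from the comment above, not real survey data.
    regret_if_not_cured = 0.25  # deaf adults who wish they'd had a childhood cure
    regret_if_cured = 0.10      # childhood-cured adults who wish they hadn't been

    def chance_of_regret(cure):
        # Probability the eventual adult says "I wish you hadn't done that."
        return regret_if_cured if cure else regret_if_not_cured

    # Choose whichever decision minimizes the chance of eventual regret.
    decision = min((True, False), key=chance_of_regret)
    print("cure" if decision else "don't cure")  # -> cure, with these numbers

Nothing fancy – the whole decision rule is just picking whichever option has the lower regret rate.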
Granted, some disabilities don’t have childhood cures, so there is no second group to talk to, but then I think it’s ok to leave it up to individual parents. Then in 25 years or so there is a group of adults that we can ask.
Where a cure can be administered pre-natally or in infancy, I think the only ethical thing to do is to defer to statistics. However, if we’re talking about a 9yo, parents have some more leeway to analyze their child’s personality and guess at what the future adult won’t regret.
I think a lot of disability advocacy in this area isn’t about the well being of the child, but about maintaining diversity and supporting the disabled community. Frankly, I find deafness activists trying to prevent children from having their deafness cured to be one of the least ethical things I can imagine. It’s taking advantage of vulnerable people in order to push an agenda.
When it comes to IQ, I seriously doubt anyone would resent having their IQ increased by 10 points at age 9, but I suppose it’s possible.
taradinoc said:
@Fossegrimen:
As someone with Strong Opinions about youth rights, I’m always sad to see this argument being made, but even more so in a discussion about neurodivergence.
20-year-olds are clearly able to make decisions, no matter what’s going on with their prefrontal cortexes: ask one why they want to do something, and you’ll get an explanation that’s every bit as internally consistent as anyone else’s. They may make decisions differently, with a tendency toward having different priorities than an older person, but there’s nothing inherently wrong with that.
In practice, this “developing brain” argument is almost always just an excuse for an older person to override a decision made by a younger person when they can’t find an actual fallacy in their decision-making process.
ozymandias said:
I am never going to be as mature as a NT 20-year-old. The logical corollary of “20-year-olds shouldn’t be able to commit suicide” is “Ozy should never be able to commit suicide,” which I have obvious problems with.
wfenza said:
I think the logical corollary is more like “people should not be allowed to commit suicide until their prefrontal cortex reaches its maximum development.”
taradinoc said:
@wfenza:
(1) If someone has progressive dementia, do we have to let it fully progress — probably destroying their reasoning ability — before trusting them to make decisions? If they aren’t suffering from dementia yet, do we have to wait to see if they develop it? On what grounds can we be sure that their changing brain will actually be more capable of making decisions tomorrow than it is today, instead of just different (or even less capable)?
(2) Why does “maximum development” matter, anyway? If Ozy’s statement is correct (“I am never going to be as mature as a NT 20-year-old”), and if we accept that Ozy is still capable of making this decision, then how can we justify overturning the decision of a 20 year old who we know to be even more capable?
blacktrance said:
Ghatanathoah:
I think it’d be more analogous to compare it to body integrity disorder. If someone wants to cut off their arm but has the option of taking a pill to remove that preference (if it even counts as a preference, as opposed to an unendorsed impulse), most people would choose to take the pill. I think this is a non-problematic modification of one’s preferences, especially since it’s in line with one’s meta-preference to be comfortable.
tailcalled said:
Random thought: if we end up able to detect the gender identity people will end up with before they are morally relevant, there’s a good argument that swapping sex vs swapping gender is irrelevant.
tailcalled said:
Another thought: there are two important kinds of wanting: having as a terminal value, and feeling good. For example, I don’t have sleep as a terminal value, but it feels good. I do have some terminal values that follow the shut-up-and-multiply system, but actually implementing these things does not necessarily feel good, and can in many cases feel pretty bad.
Now, onto transness. The argument for curing transness by any means necessary assumes that transness is only or mostly a ‘feeling good’ want… which, I mean, is a likely assumption, as long as you believe the hypothesis that people, essentially, slowly start deriving ought from is (i.e. something something baseball bat or I can only feel good as the opposite of my assigned gender, so I only ought to feel good as the opposite of my assigned gender, so I have a terminal value to be the opposite of my assigned gender).
However, there is in principle nothing that says people can’t have being-opposite-of-assigned-gender as a terminal value while still being able to feel good as their assigned gender, and from reading trans forums it seems that this is indeed the case every once in a while.
So, now that we have this difference made clear, I think the three main arguments on how to deal with trans people can be summarized as this:
1. Treating transness should focus on ‘feeling good’ about ones gender, with whatever means necessary.
2. Treating transness should focus on the terminal value related to transness, and so you are not allowed to cure transness by swapping people’s gender.
3. Transgender is good because diversity/complexity/whatever.
All of these seem like perfectly good arguments to me. I like nancylebovitz’s idea of reducing the intensity of dysphoria, as it seems compatible with all of these arguments. Everything beyond that is complicated.
Maxim Kovalev said:
@taradinoc
(2) I’d say, by extrapolating volition. If we can predict that taking a certain decision will, as the result of gaining more intelligence, knowledge, and data, permanently put the person’s brain in the state “I’m really glad I made this decision”, while other decisions would result in regret or death (and thus being unable to have feelings about previous decisions), then the former decision is the best one. Thus, if we believe that a person isn’t on top of their development, and it’s likely that as the result of the further development they will grow glad they didn’t commit suicide, it may be a good thing to stop them. On the other hand, if odds are this won’t be the case (e.g. because they’re dying of cancer), there’s no point in stopping them.
LTP said:
“I can’t find a poll of how many autistic people want a cure, so I’m working off the heuristic that the prominent autism advocacy groups that actually have autistic leaders are staunchly anti-cure. That seems to me to be evidence that your viewpoint is the minority, and that your autistic children could be predicted to prefer to have autism.”
I don’t think that people who have autism preferring that they still have it is enough reason to say it is unethical to “cure” a fetus or young child of autism. For one thing, people are attached to their self, and such a radical and quick personality change would in essence destroy their self and a new person would emerge. I think it is reasonable and rational to not want your self to be destroyed, or the emerging selves of even 9-year-olds like you. But that doesn’t mean that it’s not possible that *maybe* the world would be better if we prevented any new people from developing selves of certain types, and that *maybe* there is even an ethical obligation to do so. I don’t think the hypothetical selves of fetuses or very young children actually have any rights or moral value to be let to come into existence.
Now, I do actually lean towards thinking it is unethical, but I just don’t think the preferences of adults who already have autism is strong evidence for that claim, nor does that argument form work in other analogous cases. Indeed, I feel that way about many neuro-divergences.
Fossegrimen said:
See my reply to blacktrance above;
Have you considered that it might be everyone else that needs fixing and not the autists? Could some aspects of autism be a net benefit and if so would you be willing to fix everyone else rather than the ones who deviate from the norm?
LTP said:
So, a few things. I merely was arguing against that argument form. I’m not actually convinced that either people with autism or “normals” should be “fixed”, if fixing just meant making one like the other for good and for ill.
If you could somehow isolate certain traits and give them to people without any negative traits bundled with them, then I would say *both* people with autism and those without need – I don’t like the word “fixing” – let’s say “upgrading”. That said, I wouldn’t necessarily give all the “good” traits of each group to the other, as I think some traits that might be good to have in the current world might not be so good if literally everybody had them. But, if you could give the ability to intuitively read body language to people with autism and the ability to do mathematical and logical calculations very quickly in one’s mind to “normals” without any other traits bundled with them, I’d do that in a heartbeat.
I tend to be in favor of neurodiversity for its own sake in many circumstances. Obviously not all cases. I don’t think there is much value to paranoid schizophrenia, except maybe for inspiring a small number of artists or something, which is small in the grand scheme of things. I think there are many traits that are good, but where you’d never want a society where everybody had them. I wouldn’t want a society of only natural-born leaders who have outlier levels of empathy. I also wouldn’t want a society only of STEM-oriented tinkering homebodies. That said, in a diverse society both traits are positives in their own ways.
I have no in-depth knowledge of psychology or neuroscience, but my intuition is that many traits are not separable, however*. In other words, I’m not sure one could get the best of both worlds of autism and non-autism. If that’s the case, then I am even more in favor of neurodiversity.
*Let’s leave aside the question of if such edits would even be practically possible in anything other than the far distant future.
Ghatanathoah said:
When I read Eliezer’s essay I added certain qualifiers I thought were pretty implicit. For example when he said “Life is good, death is bad” I assumed he meant “Life is good, death is bad [because most people don’t want to die and would enjoy living longer].” When he said “intelligence is precious” I assumed he meant “Intelligence is precious [because most people would have an easier time living the kind of life they want to live if they were smarter].”
I certainly didn’t take it as an argument against voluntary suicide, or for voluntary enhancement. It didn’t even occur to me that anyone could interpret it otherwise. Typical Mind Fallacy and all that.
Similarly, I initially had difficulty understanding opposition to sex-positivism because I had always interpreted the position as “Sex is good [because most people really like having sex] not “Sex is good [so even people who hate it should have it]” and didn’t realize that other people could interpret it differently. Again, all I can do is plead “Typical Mind Fallacy.” I have a bad habit of using “good” as a shorthand for “the vast majority of people want and like this thing.”
Ghatanathoah said:
It should read “or for involuntary enhancement” in the second paragraph.
Vamair said:
That’s how I’ve read Eliezer’s essay as well. But it seems not everyone is like this, so Ozy’s post is a good addition to that question. I’ve got some questions, though. Suppose there is a child with some condition A, and a switch. The parents have a single chance to turn the switch and give the child a condition B instead. Right now the child is ambivalent between A and B. In the future, if the child has condition A, she will prefer to have condition A, and if not, she’ll prefer condition B. When is it okay to turn the switch?
What I feel about transhumanism is that modifying people without consent could be bad, but right now there are almost no people who are modified without consent and a lot of people who are left unmodified without consent. And this situation can be improved a lot by the research into modifications.
False said:
I’ve mostly only lurked on Rationalist blogs and I’ve never been in a place where I could talk with someone who identified as a rationalist or was familiar with rationalist ideas and memes. I’ve really wanted to have a certain discussion about humanism and trans-humanism as they are defined within the Rationalist sub-culture, and this blog post touches on it the most out of any I’ve seen recently, so I feel like now is as good a time as any.
I’ve seen it mentioned that trans-humanism, Yudkowsky specifically, aims at eliminating death and giving everyone immortality. I feel like this is a pretty controversial goal that should have a lot of theory and deep consideration around it, but as best as I can tell, this is the best reasoning we have for this position:
“if people were hit on the heads with baseball bats once per month, some philosophers would discover all sorts of amazing benefits to being hit on the head with a baseball bat; it toughens us, makes us less afraid of lesser pains, makes bat-free days all the sweeter. but if you take someone who’s not being hit with a baseball bat and ask them if they’d like to start, for all those amazing benefits, they’ll say no. likewise smallpox, aging and death.”
It seems trivially obvious to me that the thing that most accurately replaces “being hit on the head” is NOT “smallpox, aging, and death” but LIFE. There are dozens upon dozens of biological, cultural, academic and political systems engaged in making the point that life is worth living, but the fact of the matter is that we as a species are still not totally convinced this is true. “Life is good, death is bad” seems like completely myopic and almost infantile reasoning. Whereas a more robust position is that life is suffering, and if it’s ultimately pointless, at least we die, and actually, because life is ultimately pointless and we don’t live forever, the heavy weight of existence is lifted from our shoulders because our actions are not actually that meaningful in the overarching scheme of things so it’s OK to mess up.
Am I the only one who thinks subjecting our species to immortality is itself a doomsday scenario? What happens if someone with immortality gets trapped somewhere where no one can reach them or even know they are there? Then, we as a species have doomed this individual to an eternal existence of pure solitude and suffering. At least the way things are now, that person wouldn’t suffer literally forever.
I understand that a lot of people enjoy life, and that’s fine; but we do see people get older and decide that dying is quite alright, enough is enough already. My concern is that we overemphasize the “sanctity of life”, and that people who would rather choose death don’t have a lot of options and don’t get advice other than “don’t kill yourself because reasons”.
Ghatanathoah said:
I think that most rationalists implicitly hold the belief that life is an instrumental value rather than a terminal value. They believe that the reason living is good is not that failing to die is valuable for its own sake, but rather because people want to accomplish many things and have many experiences, and it is really hard to do that when you’re dead.
A number of Less Wrong community members have basically advocated this view, and it seems implicit in most of the other discussions I’ve seen. The reason they want immortality is because they need to be alive to do things, have fun, etc., not because they value living as an end in itself.
>Whereas a more robust position is that life is suffering, and if it’s ultimately pointless, at least we die, and actually, because life is ultimately pointless and we don’t live forever, the heavy weight of existence is lifted from our shoulders because our actions are not actually that meaningful in the overarching scheme of things so it’s OK to mess up.
This position seems just as infantile and myopic as its opposite. It sounds like a depressed person noticing that their brain feels depressed and pointless and assuming that depressingness and pointlessness is an intrinsic property of the external world, rather than a property of their brain.
I personally am having quite a lot of fun with my life, and certainly think the amount of suffering I have experienced is far outweighed by all the fun I’m having. I never have any trouble finding meaning and purpose in my day-to-day routine. I’ve never felt this heavy weight of existence you speak of. My actions are pretty meaningful and awesome, and I suspect that if I was immortal and had a decent standard of living that they’d only get more meaningful and awesome. Life doesn’t give us purpose, we give life purpose.
Don’t get me wrong, I can certainly conceive of finding myself in a situation where I’d want to die (for instance, nonstop eternal torture). But if my immortal life at all resembles my current life, I think it will be pretty awesome. I mean, do you have any idea how many books I’d have time to read if I was immortal?
>What happens if someone with immortality gets trapped somewhere where no one can reach them or even know they are there?
When people talk about immortality, all the realistic scenarios (i.e. being uploaded into machine bodies) involve having new bodies that are immortal as long as they receive regular maintenance and don’t suffer extreme damage. They don’t mean immortal as in indestructible. An immortal would still be able to kill themselves. Think Norse God, not Greek God.
False said:
Thank you for your reply.
>This position seems just as infantile and myopic as its opposite. It sounds like a depressed person noticing that their brain feels depressed and pointless and assuming that depressingness and pointlessness is an intrinsic property of the external world, rather than a property of their brain.
I can see where you would consider this to be a difference of values, and it is entirely possible that this is the case. However, I don’t feel like you are giving this point the attention it deserves. Much ink has been spilled and thought-systems carved out over thousands of years around the idea that life is suffering and the universe, no more or less than man’s place in it, is meaningless. I think it’s incredibly hand-wavy and “nothing to see here folks” to dismiss it as “depressingness”.
I’m glad your life is good and meaningful, and I sincerely hope it stays that way. In no way am I trying to advocate that you or anyone else “should” die, or that you “must” view your life as meaningless. However, my concern is about humanism as a movement, and its ideas about “doing whatever we can to extend man’s life indefinitely”. This seems to be putting the cart before the horse, so to speak; why are we spending money on mosquito nets to prevent deaths when we have no way of ensuring that those lives we are saving are subjectively worthwhile to those that live them? Once the problem of suffering was sorted out (how you can do this with technology is still beyond me), only then would it seem reasonable to talk about “immortality”.
My issue, in short, is why so much philosophical emphasis is put on saving lives (effective altruism is the main culprit here) and why, when the issue of suffering is brought up, it’s all “well, nanobots will solve it”. Isn’t it important to figure out suffering as a mechanism first, and then deal with that? But within rationalism, it’s treated as a non-problem.
sh said:
Hi! I’m an aspiring x-rationalist (a lot of us consider it somewhat presumptuous to call ourselves “rationalists” (without qualifier) when there are so many bugs left in our minds). I’m not famous in the community, but I have spent a lot of time there and am familiar (and strongly agree) with the relevant ideas.
First of all, as Ghatanathoah mentioned, unless people explicitly state otherwise it is generally assumed that suicide will remain an option. The standard transhumanist agenda is not so much elimination of death as it is elimination of involuntary death. In the long run these may well be the same, but the general position is that minds that don’t want to continue to exist should not typically be forced to (there are some complicated issues around just how much informed consent suicide should require; you can probably see how that might be hard).
Suicide being stigmatized in our society as it is – and many people who would want to commit it for good reasons not getting the chance, because they’re too infirm and no one will help them – is not good, and all else being equal these things should change.
> that life is suffering, and if it’s ultimately pointless,
“life is pointless” is generally a confused statement. If you keep unpacking what people mean by that, it generally turns out that they were looking for a justification that could never have existed in that form to begin with.
To vastly simplify: Intelligent entities have goals – or ‘preferences’, if you prefer. They act on those preferences to push the world around them into states they like better. If you’re an intelligent entity, there’s already something you want – and that’s good enough. Neither you, nor I, nor anyone else fundamentally need external validation of these preferences for them to be valid.
Things get complicated when you dig into this deeper – some agents may want to change their preferences, some agents may care about others, and external validation can become very important then (and indeed tends to, for humans). But calling it ultimately “pointless” is a meaningless statement.
>at least we die, and actually, because life is ultimately pointless and we don’t live forever, the heavy weight of existence is lifted from our shoulders because our actions are not actually that meaningful in the overarching scheme of things so it’s OK to mess up.
Depending on your precise causal model, individual lives can be extremely impactful even if they’re short. It sounds to me here like you don’t particularly buy into the “life is pointless” claim yourself; but regardless, even if I absolutely knew that I would die in a few decades and never see what happened to our civilization, I’d continue to try to push the world into future states I like better. You can still have an impact in limited time.
As for “it’s OK to mess up”, that’s a purely psychological question! If it makes you happier and/or more productive to work on that model, then go ahead and do it. There’s a lot to be said for moving fast enough and being ambitious enough that sometimes you’ll fail, and that approach is not contrary to rationalist thought. Sometimes what you’re doing is really important, and then it makes sense to try hard to actually succeed. But usually it’s not, and the precise outcome can be less important than what you learn from it and apply in the future.
Counter-narrative: If you live longer, you have longer to make use of things you learned from your mistakes. If I strongly expected to live forever, there’d be a lot of “Okay, that was silly, I’m not going to make /that/ mistake again! This experience will pay for itself within the next 2 millennia, no problem.” in my life.
> I understand that a lot of people enjoy life, and that’s fine; but we do see people get older and decide that dying is quite alright, enough is enough already.
We do. However – and without impacting what I said about suicide to begin with – note the following:
1. Aging is a horrible disease. Older people have suffered it a lot, and are frequently physically and/or mentally infirm, or if not infirm then still less capable than they used to be. They’re also not ignorant of how it progresses; they know that if they keep living, this will only get worse for them, and they’ll die a little each day. Transhumanism, needless to say, would do away with involuntary aging (and we don’t think many people would want to do it voluntarily).
2. Involuntary death is horrible, and older people in particular are likely to have lost a lot of their friends, acquaintances and family to it. I would get sick of people dying around me all the time – good people, people who really wanted to continue, and couldn’t because of our biology being fucking broken – too.
Don’t underestimate the impact of these. We haven’t /seen/ how people would adapt psychologically if neither of these were the case. I would be unsurprised if all the fed-up-with-life-in-old-age reactions vanished once you eliminated involuntary aging and death.
And finally:
> when the issue of suffering is brought up, it’s all “well, nanobots will solve it”. Isn’t important to figure out suffering as mechanism first, and then deal with that? But within rationalism, it’s treated as a non-problem.
First of all – fundamentally, yes. Eliminating involuntary suffering is important. It’s frequently treated as less urgent than dealing with involuntary death, because death likely implies irreversible personality and data loss, whereas suffering is in some sense easier to compensate for afterwards.
But make no mistake; eliminating involuntary suffering is a critical part of the transhumanist agenda. “nanobots will solve it” might be simplistic, but only slightly; the truth is that we can already impact mood a lot by dousing people’s brains in chemicals, and experiments with electrodes in rat brains have resulted in downright wireheading outcomes.
If we had better access to the physical substrate of human minds, we could almost certainly monitor and control moods much better, and in short order come up with good fixes for people being depressed or even just a lot sadder or less energetic than they want to be. We don’t, right now. Good MNT would do it; so would uploading. Figuring out which parameters to adjust after that will most likely be fairly simple, given how many related things we’ve already learned with our current, much cruder methods.
False said:
Thanks for your reply, it’s a rare opportunity for me to actually have this conversation.
>But calling it ultimately “pointless” is a meaningless statement.
I would disagree. The way you are framing “meaning” is not really what I’m talking about. Preferences is one thing, but I’m still trying to get this discussion to tackle suffering on an existential level. If we suffer, why? Is there a reason? If not, is there a “point” or some other thing to be obtained from suffering? If not, is existence worth enduring? If you end up at the extreme of “NOTHING MATTERS” scare quotes big neon letters, something like “the way our civilization ends up” is completely unimportant, because we inhabit an empty universe that we can’t meaningfully interact with. So, when I say something like “existence is pointless” I mean that when you account for scale, there is no difference between civilization continuing or not.
You say that anything intelligent enough will try to push the world into conforming to its preferences, and you talk about “being effective” on a micro level, which, obviously, if you aren’t going to kill yourself, is fine and correct and most likely good. And yes, within that structure, it is reasonable to discuss what “being effective” means and how to best achieve that. But my question is, what if there was an intelligence so, um, intelligent that it realized any preference was meaningless in scale and so decided to do nothing? Recent attempts at AI involved trying to program one so it could beat Tetris, but it ended up merely pausing the game forever in order to not lose. This sounds funny, but to me it was shocking and horrifying in its ramifications. Because, ultimately, the game itself is meaningless, and any amount of “effective play” is infinitely not as effective as just not playing.
>Involuntary death is horrible
Is it? I feel like suffering is horrible, and continuing to suffer is bad. But is dying bad? How do you know? It’s bad when other people die, sure, because we are biologically conditioned to fear death and experience loss as a motivator to not have people die, but removed from those biological impulses, is the experience of death bad? I feel like, actually, we know that it isn’t, on an intuitive level, hence the phrase “putting something out of its misery”. Isn’t it involuntary living that is horrible? Was “not existing” horrible before you were born? Presumably you return to a state that is similar to what you were like before you were born. Is that bad? I’m not convinced.
And finally, yes, I agree with you that we are in better shape to deal with depression and bad moods than we have ever been. And perhaps it is possible to deal with subjective suffering on a brain map level. But what about people who are suffering for good reason, be it poverty, poor living conditions, etc.? Yes, maybe if you took the rationalist community and made them live forever, they would be happy. But what about the rest of us? Do you understand that when someone says “There is suffering”, replying with “Well, I’m doing pretty good right now” is not a solution to the problem?
blacktrance said:
Why would scale matter? Value isn’t some independent feature of the universe, but is assigned by us, expressed as our preferences, and if we value things that we can affect on a small scale (which is the case for most of us), then the vastness of the universe doesn’t make much of a difference. It is conceptually mistaken to talk about preferences being meaningless – to the extent that “meaningful” is a meaningful term in this context, preferences are the source of meaningfulness. A Tetris-playing AI has different values from us, and we shouldn’t confuse its finding the optimal strategy with some realization of existential meaninglessness – if that AI could look at us and see us not maximizing our Tetris scores (and had a psychology similar to ours in other respects), it would think us foolish for ignoring Tetris, which is “really” the most meaningful thing in the universe.
Yes, because it prevents you from having positive experiences. If your life would in net be worth living if you had continued to live, then death is bad for you.
sh said:
> If we suffer, why? Is there a reason? If not, is there a “point” or some other thing to be obtained from suffering? If not, is existence worth enduring?
I assume you are asking for an analytical response, not a natural-selection-perspective one. In which case the answer is that most people do have stuff they like. The low-level varieties are collectively called ‘pleasure’. To the extent that suffering matters, so does pleasure; both kinds of feedback are biological responses on a similar level.
A lot of people also value other things (terminally), often for more complicated reasons. But the existence of pleasure is a decent way to ground it.
> But my question is, what if there was an intelligence so, um, intelligent that it realized any preference was meaningless in scale and so decided to do nothing.
It doesn’t work that way. An intelligent agent is an entity that optimizes its environment based on its own preferences.
Now, in principle you could build a kind of agent that was kinda confused about what it really preferred, and with enough reflection figured out that its utility function is really a constant and its actions are irrelevant. However:
1. That is highly unlikely in the case of humans. We have a bunch of fairly stable and common drives. The details of human preferences can get hilariously complicated. The basics are, usually, not.
2. If it is the case for humans, we definitely haven’t shown that.
3. If we *do* show it, that will imply suffering is irrelevant also. If it wasn’t, minimizing it would still be a valid preference (cf. negative utilitarianism) and give us reasons to build a working society to do that! So if that turns out to be the case eventually, we haven’t lost anything by working under different assumptions up to that point. Indeed, the possibility is pretty much safe to ignore by definition – either it’s wrong, in which case you’ve wasted your time thinking about it, or it’s true, in which case you didn’t gain anything because there was nothing to gain.
> Recent attempts at AI involved trying to program it so it could beat tetris, but it ended up merely pausing the game forever in order to not lose. This sounds funny, but to me it was shocking and horrifying in its ramifications. Because, ultimately, the game itself is meaningless, and any amount of “effective play” is infinitely not as effective as just not playing.
Actually, to me this is neither funny nor horrifying; the ramifications are not as significant as you may believe. My reaction would be: “hum, yes, I can see how you might accidentally write a tetris optimizer with that failure mode. Don’t do that.”
This kind of behavior – it’s an outgrowth of a specific kind of goal system coupled with a specific environment. The specific kind of goal system is one that produces negative utility in response to a specific input (either “losing the game” or “filling up rows” in this case, I would guess), and the specific kind of environment is one that doesn’t allow you to avoid that result without stopping the world, and doesn’t allow you to accumulate corresponding positive utility if you keep the world running.
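To make that concrete, here is a minimal toy sketch – my own illustration with made-up numbers, not the actual experiment (which was, if I recall correctly, Tom Murphy’s general NES-playing program, the one that learned to pause Tetris):

```python
# Toy model: losing yields negative utility, nothing yields positive
# utility, and every line of play in this environment eventually loses.
def utility(outcome):
    return -100 if outcome == "lose" else 0  # no positive term anywhere

def expected_utility(action):
    if action == "pause":
        return utility("frozen")  # pausing stops the world: a guaranteed 0
    return utility("lose")        # all continued play ends in a loss here

actions = ["move_left", "move_right", "rotate", "pause"]
print(max(actions, key=expected_utility))  # -> "pause"
```

Give the same agent any positive utility for playing on and pausing immediately stops being optimal – which is why the right response is “don’t write that goal system”, not existential despair.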
The rough human-altruist equivalent might be to be put in a position where you can either destroy the planet painlessly now, or else keep it running and know that everyone will die over a period of hours starting a few minutes from now.
But that’s not the situation we’re in. Humanity’s future contains a lot of possible pleasure (and other good things), which could more than offset any suffering on the way. That’s what we’re fighting for.
> Is it? I feel like suffering is horrible, and continuing to suffer is bad. But is dying bad? How do you know? It’s bad when other people die, sure, because we are biologically conditioned to fear death and experience loss as a motivator to not have people die,
That biological conditioning is not fundamentally different from the biological conditioning to avoid other kinds of suffering. They’re both equally valid.
In addition, I don’t want to die because I’m curious to see what will happen in the future. I don’t want to die – and all else being equal I don’t want other people to die, though I respect their right to – because it would mean data loss. And while “data loss” is a bit of a clinical term, in this case it really corresponds to a wealth of lost positive interactions and beauty.
> Isn’t it involuntary living that is horrible?
Depends on the kind of involuntary life. As mentioned, if you suffer enough, then it certainly is, and people shouldn’t have to go through that.
> But what about people who are suffering for good reason, be it poverty, poor living conditions, etc.
Oh! That! You’re quite correct, those are /also/ things that need fixing! It’s frequently assumed in the >H community that with a few more key technologies, we’ll soon have sufficient resources to increase universal living standards by several orders of magnitude, unless we also grow population to match.
So the idea is to get those technologies, put in some viable policy to restrict population growth, and then everyone can live (no, not like kings, because most kings really didn’t live that well compared to what modern society has to offer) significantly better than the wealthiest people in current society. The implementation details of this are tricky, but that’s the idea. It’s not about making people happy to live in squalor, it’s to get rid of squalor.
sh said:
Also, for the record, I would not describe myself as “doing pretty good right now”. My own life is not particularly happy. This is not a cry for help; I’m stating this purely so you understand that this position works for me, personally, despite me on average not enjoying the present very much.
I’m not working towards the elimination of death and suffering because my life is awesome now and I want more of it. I’m working towards those things both for other people, and in the expectation that if we succeed and I live long enough to see it, my own life will become pleasant and awesome at some point.
sh said:
Sigh. s/death and suffering/involuntary death and suffering/. I really shouldn’t forget that word.
Fisher said:
I think it is reasonable to restate “life is pointless” as some variation of “the world would not be changed by my nonexistence.” And for some scope of “the world,” that is trivially true. For some people, “the world” has to be a very small place to make it untrue.
False said:
> It is conceptually mistaken to talk about preferences being meaningless – to the extent that “meaningful” is a meaningful term in this context, preferences are the source of meaningfulness.
So preferences are the end all, be all? What if preferences conflict internally? What if my preference is that everyone else should die?
>Yes, because it prevents you from having positive experiences.
??? I can just as easily say that death is good because it prevents you from having bad experiences… How can you know if a life will have more good experiences than bad?
> In which case the answer is that most people do have stuff they like. The low-level varieties are collectively called ‘pleasure’. To the extent that suffering matters, so does pleasure; both kinds of feedback are biological responses on a similar level.
I guess this is my issue, at its core: If immortality is a worthwhile goal, I feel like there needs to be a more robust analysis of pleasure as something that outweighs suffering, which I just don’t see. It seems obvious to me that there is more net suffering, but that could be bias. Simply saying “pleasure is good, therefore good” is not enough.
> Now, in principle you could build a kind of agent that was kinda confused about what it really preferred,
I disagree with you that humans don’t qualify for this. It seems fairly obvious that people are not unbiased when it comes to their own experiences, and the number of people whose actions actively undermine themselves seems loomingly large. If anything, human beings empirically act towards their own detriment on a macro-level.
> 3. If we *do* show it, that will imply suffering is irrelevant also. If it wasn’t, minimizing it would still be a valid preference (cf. negative utilitarianism) and give us reasons to build a working society to do that! So if that turns out to be the case eventually, we haven’t lost anything by working under different assumptions up to that point. Indeed, the possibility is pretty much safe to ignore by definition – either it’s wrong, in which case you’ve wasted your time thinking about it, or it’s true, in which case you didn’t gain anything because there was nothing to gain.
Woah woah woah, slow down. How is this not horrifying? Maybe I just don’t understand you, but it sounds like this option agrees with me that immortality is undesirable.
> The rough human-altruist equivalent might be to be put in a position where you can either destroy the planet painlessly now, or else keep it running and know that everyone will die over a period of hours starting a few minutes from now.
> But that’s not the situation we’re in. Humanity’s future contains a lot of possible pleasure (and other good things), which could more than offset any suffering on the way. That’s what we’re fighting for.
See, again, this is what I’m saying. Your second to last sentence is a hope and belief, not something proven. Yes, it’s positive and empowering, but that doesn’t make it true. Your position is that if the world stops we can’t accumulate positive things, which is bad. My position is if we stop the world, we can’t accumulate bad things, which is good. How do you prove that one outweighs the other? I understand that you like the first one better, but I’m not interested in which one sounds better.
> I’m not working towards the elimination of death and suffering because my life is awesome now and I want more of it. I’m working towards those things both for other people, and in the expectation that if we succeed and I live long enough to see it, my own life will become pleasant and awesome at some point.
Hey, I’m extremely sympathetic to this. I don’t think this is a negative motivation in any way. I’m just concerned that maybe death is not bad and maybe it’s actually good. If we could eliminate suffering, I’m immediately on board, no questions asked. I’m just not convinced that eliminating death is in service to that idea, and for a supposedly unified movement, I feel like it’s weird that transhumanism targets death first, while suffering is something that just gets solved eventually, who knows how.
blacktrance said:
If your preferences conflict internally, you should weigh them against each other and decide which one you want to go with. If, all things considered, your preference is that everyone should die, then there’s a conflict between most people’s preferences and yours. I don’t see this as a problem conceptually – people’s preferences conflict all the time.
If you anticipate that your life will be more bad than good, then not wanting to extend your life is rational. But most people don’t want to kill themselves, which suggests (though doesn’t conclusively prove) that their lives are worth living. If someone expects their life to be so bad that dying would be better, they shouldn’t extend their life, but the benefit of life extension is that the people who anticipate their lives to continue to be good enough to continue (myself among them) wouldn’t have to die.
sh said:
[To clarify: Neither transhumanism nor LW are dogmatic groups that agree on everything. There are people with many different opinions in both; this is an outline of my own, which I believe is a fairly normal example of a member of both groups.]
> So preferences are the end all, be all? What if preferences conflict internally?
Reflective inconsistencies are tricky in general, in part because they’re a large category containing many very different cases. I can’t give you a blanket answer, and we haven’t worked out a comprehensive theory about how to deal with them in a good manner.
> What if my preference is that everyone else should die?
For a human to adopt that as an unchangeable preference would be exceedingly unusual. So my first inclination would be to discuss it to see if I could correct your view on it; if that didn’t work out, you’re likely to be suffering from sufficient mental limitations to make it hard for you to achieve that goal, and I’d try to ignore or work around you as necessary without resorting to direct conflict. If you were some kind of extremely aberrant human who combined that sort of belief with the competence to achieve it, direct conflict might become the best option.
Human preferences probably differ quite widely; and whenever it wouldn’t lead to horrible outcomes I’d be inclined to seek some kind of compromise “live and let live” solution. But forcing large amounts of people to die is not something I could agree to unless the alternative was somehow even worse.
> Woah woah woah, slow down. How is this not horrifying? Maybe I just don’t understand you, but it sounds like this option agrees with me that immortality is undesirable.
No, not at all. To summarize very quickly: If there’s no good, then there’s no bad, and there’s no reason to be horrified by anything. That possibility is /irrelevant/. It’s a waste of time to think about regardless of whether it’s true; it can’t influence anything. Sorry if that still makes no sense; in that case you may have to read more about the mechanics of expected utility maximizing agents to really understand it. But take it from people who have looked at it in more detail; there’s no horror there. There can’t ever be any horror or anything even slightly bad there. And immortality would still not be undesirable, it would just be irrelevant.
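(If it helps, here is a toy sketch – my illustration, with arbitrary outcome labels – of why a constant utility function yields irrelevance rather than horror: for an expected utility maximizer, “bad” just means “scores lower than the alternatives”, and a constant function admits no “lower”.)

```python
def constant_utility(outcome):
    return 0  # the hypothesized "nothing really matters" case

outcomes = ["immortality", "extinction", "status quo"]
scores = {o: constant_utility(o) for o in outcomes}
print(scores)  # every outcome ties at 0: nothing to prefer, fear, or mourn
```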
> I guess this is my issue, at its core: If immortality is a worthwhile goal, I feel like there needs to be a more robust analysis of pleasure as something that outweighs suffering, which I just don’t see. It seems obvious to me that there is more net suffering, but that could be bias. Simply saying “pleasure is good, therefore good” is not enough.
I agree that’s not enough. You appear to be working on a model of roughly “we’ll just fix involuntary disease, aging and death, and the rest of society will stay largely the same”. That’s not the idea, at all. There are lots of other things that also need fixing in this society.
In practice, aging, other disease and death are themselves the causes of huge amounts of suffering, so getting rid of most of that is part of the suffering-elimination program.
The other parts include (as mentioned previously) the elimination of the nasty parts of poverty in our society, other fixes to society that we haven’t quite figured out and much better options and access to mental attitude adjustments (also mentioned before).
People don’t like to suffer. Suffering is clearly fixable. Our current society /fucking sucks/ in this regard. It doesn’t much matter what the current balance is, for long-run predictions. We can do much better than we are, and than we ever have in human history. Unless we really screw up in the near future, we almost certainly will do much better in short order.
Transhumanists often don’t focus much on the possible material living standards improvements. If you care about these, note that there are orders and orders of magnitude free for the taking with a bit of technological advancement; the only real concern here is getting a moderately fair distribution, and not growing our population by quite as many orders of magnitude, and poverty is solved. One of the common LW ideas is “reedspacer’s lower bound”. It was defined by the following line on IRC:
> living in your volcano lair with catgirls is probably a vast increase in standard of living for most of humanity
Suggesting that this kind of living standard should (and will, if our plans work out) be generally available is not an outrageous idea in LW, it’s pretty normal. (There’s some disagreement about the catgirl part. I’d rather not get into that right now; I hope it’s clear enough even if you ignore that aspect.)
But even more important than this are the possible mental adjustments you could make to minds once you understand them better. To keep intelligences working, you almost certainly need some kind of negative feedback – but it doesn’t have to feel like suffering. As a simple constructive proof: even in this world, humans can learn from their mistakes just fine without suffering for them at all.
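A trivial sketch of that proof (my illustration, with made-up numbers): the error signal below is pure negative feedback – it steers the system toward its target – yet nothing in it corresponds to pain.

```python
target, estimate, learning_rate = 10.0, 0.0, 0.1

for step in range(100):
    error = target - estimate          # negative feedback: the signed mistake
    estimate += learning_rate * error  # correct toward the target

print(round(estimate, 3))  # ~10.0: it learned from its mistakes, pain-free
```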
Even if you force minds to live in our current environment, there’s no inherent reason they need to suffer very much. That’s a /design flaw/. A design flaw we’ll be able to fix with a few more technologies, the hardest of which are conveniently useful for death and disease eradication also.
So. The program is to eliminate involuntary death, suffering and disease. Then everyone gets a volcano lair – possibly with NPCs, details to be determined – or a similarly awesome residence suiting their personal tastes. We perform some social re-engineering to eliminate or massively reduce the more hedonically problematic aspects of current human society (the details of this have not been worked out. It’s a thorny issue).
In addition to that we hand out some mental upgrades for people who want them, including adjustments to allow them to be much happier by default and scale down their suffering reactions.
I would be severely surprised if that doesn’t push us well into “people have awesomely happy lives on average” territory, but if it doesn’t, we then follow that up by further research into positive and negative feedback in human minds and how to further shift it into the right direction.
Now, there are some reasons transhumanists may like to focus on involuntary death mitigation. Some are individually worried that they might die without seeing all the awesome stuff that we’ll get in the future. Some are individually worried about losing people they care about. Some are concerned about humanity in general and consider involuntary deaths extremely negative-utility events, and far harder to reverse or compensate afterwards than anything but extreme amounts of individual suffering.
Involuntary death reduction is also one of the few aspects of the transhumanist agenda that we already have the technology (cryonics) to make good progress on right now. There aren’t too many practical lifestyle choices people can make right now in line with transhumanist priorities; signing up for cryonics is one of them.
YmcY said:
Hmmm. Alternate Transhumanism: There exists X, 0 < X << 1, such that for any disability: if more than X * total_population(disability) would not push a button to remove their disability, they get another button to make them transhuman while keeping their disability.
E.g. As a neurodivergent person, it’s easy for me at least to perceive how most neurodivergences (probably all, if I tried) can be given superpowers congruent with their identities, that give them large personal utility and a valued social role in the posthuman utopia. And having recently been studying a little sign and learning about the Deaf, same for them. Generalize to "amount of time I spend experiencing or learning
I mean, I just started watching Daredevil – which is something like this trope? My disability helps make me awesome, in a fighting evil rather than motivational poster disability porn sense? Just picture this achieved with Technology instead of Plot.
(To translate Plot to Technology, just rewrite the existing story as a fanfic of a Yudkowsky story).
Another point: path dependency. Maybe we all end up as gods. But no one wants to go from "me now" to "god" in a single step, because uh no thanks sounds like a lot of responsibility and like I totally couldn't really be me anymore anyway.
So everyone who gets to god necessarily goes by incremental, consensual steps. Now that I'm IQ 111, I see the benefits of my next IQ point.
And everyone's path is somewhat individual specific; for the average person with disability circa 2015, they can't be expected to cast the disability aside from their identity until they're like 80% of the way to full transcendence.
Eliezer Yudkowsky said:
Sounds like it’s time for another game of… HIGH-ENERGY ETHICS!
So, like, House Elves in Harry Potter. They *want* to be House Elves, as part of the heritable curse cast by some ancient wizard in order to generate their personal slaves. Should they be allowed to reproduce? Should the curse be allowed to include the part about enjoying being a House Elf? How evil was that wizard? If the curse hadn’t been cast yet and you were on the spot, knowing that a thousand years later there will be a community of House Elves who quite approve of their lives, would you stop the wizard from casting the curse?
How about natural human slaves? Nobody created them on purpose or fine-tuned them to remove their free will, they’re just (very slightly) neurodivergent folk who happen to benefit others a lot by existing and have a strong masochistic drive. Does Brienne have the right to reproduce? (She doesn’t want to, but other slaves do.) Would it be good to have stopped that gene from existing in the first place? Would it be okay for her to expose her offspring to an environment that maximized their chance of growing up to be a BDSM!slave?
An Evil Geneticist creates a virus which causes everyone infected with the virus to (a) send her $1000/month, (b) invoke the brain’s several rationalization centers to rationalize why it is good to send her $1000/month. 2 years later they’ve all formed a community based on how nobody else understands the various competing explanations for why it is good to send money to the Evil Geneticist. Aside from that, they’re just people with a substantially reduced income and all the pain that brings. A cure is developed. Can we strap them down and administer it? Does it make any difference how ludicrously implausible the rationalizations are?
Which of the following should have the right to reduce a child’s prospective IQ from 100 to 94 via deliberate exposure to a small dose of toxin: A single parent with IQ 140, a single parent with IQ 100, a single parent with IQ 94, a single parent with IQ 80, an AI which has correctly calculated that the child’s life will be hedonically improved by 2%, an AI which has correctly calculated that society as a whole will be better off by 2000 hedons thanks to having a good worker even if the child’s life is worse. Does it matter if the child’s IQ is going from 140 to 136? From 80 to 76?
I am an Evil AI Researcher and I create a sapient, conscious, emotional AI modeled on a cleaned-up version of a roughly male human architecture, except that the artificial man contains a loop of code requiring him to (1) experience intense pain for 2 minutes every 24 hours, and (2) prefer in an abstract way that this pain continue. So while the pain occurs the man is screaming, but if you ask him afterward about removing the code block, the code block kicks in and he’s like, “I prefer you don’t do that.” It’s not a rich emotional drive or a complex rationalization, the code block kicks in each time the man considers the abstract preference and forces a particular outcome of deliberation, without any attempt to backpropagate emotional support or rationalization for itself. Aside from that, the artificial man has a humanly rich emotional inner life, allowing him to experience the full scope of existence and also the suffering and debilitation and fear and horror and sickness associated with those two minutes of intense suffering. Is it okay to hack his brain and remove the code block? What if the code block contains a third instruction saying that this code block must be passed on to the man’s children – is he allowed to do that?
ozymandias said:
Intuitive responses:
House elves should be allowed to continue to exist, and I’d probably be in favor of creating them, although population ethics is hard. House elf existence certainly appears superior to nonconsensual slaves existing. Harry Potter house elves clearly need more rights, though – for instance, the right to be given clothes on demand.
I genuinely do not understand why the continued existence of natural slaves would present any sort of moral conundrum whatsoever. Obviously, yes.
I’m inclined to view the evil geneticist/evil AI researcher cases as analogous to heroin addiction, in which we have accepted that the person’s current preferences do not reflect their overall life flourishing. I am *really leery* of extending this exception beyond a relatively small and circumscribed area of human life, for obvious reasons, but “a person caused you to rationalize that you want this by nonconsensually reprogramming your brain” seems even more central than heroin addiction. (I mean, at least heroin addiction is usually consensual when you start out!)
No difference between 140/136, 100/94, and 80/76, obviously. AIs who have correctly calculated that IQ reduction is the morally correct thing to do should obviously do so. As for the actual humans, I’m inclined to the position that parents should wait until adulthood and allow the adult to make the decision, unless the toxin only works in childhood for some reason, in which case they should use their best judgment about what the adult would want. (I don’t know which IQ bands, if any, would prefer to have a lower IQ.)
Eliezer Yudkowsky said:
This all sounds reasonable to me – I don’t agree with every point, but it all sounds like a reasonable position to take. Can you give a specific case where you think I would probably recommend a different action than you?
ozymandias said:
I’m not sure. On priors, I’d suspect that I’m much more okay with Deaf parents selecting for Deaf children than you are, just because I’m more in favor of that than *most* people are, but I wouldn’t be super-surprised if we agreed. I suspect the primary difference is more one of framing than anything: I’d *much* rather summarize it as “freedom good, constraint bad” than “health and life good, sickness and death bad”, even though I suspect most people will choose the latter.
Illa said:
I think the questions “what do I, personally happen to want?” and “what would be objectively best for me?” can have a lot of overlap in their answers, but I still see them as different questions if “whatever’s best for me” isn’t actually one of my wishes. Maybe it should be?
The post says “we don’t grant nine-year-olds the same autonomy we do adults”, so I’d like to ask: isn’t conventional childrearing a case of people “nonconsensually reprogramming your brain”, or at least “nonconsensually programming your brain”?
In the thought experiments, when you think about whether to change the preferences of the virus-infected people or the artificial man, do you want them to later pick their own preferences? And would they still hold their preferences if they knew how they came about? Maybe people have preferences about the sources of their preferences?
Creutzer said:
Deaf parents selecting for deaf children is really a terrible idea, because a hearing child of deaf parents gets the best of both worlds – they grow up bilingual with sign language and spoken language and are able to fully participate in both cultures. They are also loved by every sign language linguist anywhere. Remaining deaf doesn’t give the child a single benefit.
Franz_Panzer said:
“But I am pretty sure most people reading this can think of a impairment they’d like to keep, or that they agree reasonable humans would want to keep.”
Is this true? Because I’ve tried to think of something I would consider an impairment and would like to keep and came up blank.
I can accept that other people would like to keep theirs, because, well, they are not me, so they may have different values and preferences from which other conclusions than mine are reasonable.
They’d still act unreasonably according to my reference system, though.
Fisher said:
I think it’s more likely that:
1. They really like some aspect of their life, but think that having that good thing requires a particular impairment. This may not be true, or may become untrue with further technology.
2. They are circumscribing their identity with their impairment and fear identity loss.
3. They fear change.
jeqofire said:
I would be surprised if there aren’t a lot of people whose reason would be 1 or 2. I’m part of the way toward 3.
I definitely would like to have the vision in my right eye restored. Having never experienced stereo vision and having no idea how my brain would process two visual inputs, the idea of giving myself a functioning left eye kinda scares me.
There are things about restoring what vision I had that are kinda worrying, too, and the idea of going all the way to perfect or superhuman vision even in one eye opens up the terrifying world of learning how to deal with faces and nonverbal communication, but I know most of what I’m missing and most of it is desirable.
I have no idea what mental illnesses I have. I know that I’d have no objections to giving up anxiety and the magnitude of 24/7 akrasia that plagues me. Anything else would require lots of thought, research, and caution.
sh said:
I think you’re fundamentally right about choice being important. Personally, I like to summarize this as my goals including “The eradication of involuntary disease, aging, suffering and death”. Fwiw, I think this is a perfectly standard transhumanist position.
The precise details around how much effort it should take to consent to something like death – or higher amounts of suffering – are complicated, and I don’t think we’ve worked them out well. But yes, an intelligence consenting-in-sound-mind should ultimately be allowed to experience any or all of those things.
I’m fairly certain Eliezer Yudkowsky agrees with this. He didn’t go into it in that particular essay, because he was focused on bashing people who insist that such horrible things should be forced on people – and as you say, they really deserve some serious bashing for that sort of attitude – but it’s pretty clear in some of his other writings.
Sniffnoy said:
I think you’re potentially mixing up a few things here. You use the example of autism above. But surely we should say not that autism is an impairment, but that it contains impairments? Or we could say that it is a net impairment (however we’re judging that), but it is not a strict impairment. At least as I understand it, it’s a trade-off, even if it’s not an even one.
So, like, there’s the old “cure for autism” specter, but what if we instead were talking about something that merely “cured” the parts we would consider impairments… and also “cured” the impairments resulting from not being autistic — something that got around the tradeoff and just said “Here, have all the upsides of each!”
Basically I’m wondering what you would say to someone who said “Yes, I’m all in favor of autonomy, so long as the improvements we’re talking about aren’t strict, i.e., so long as you’re selecting from among the Pareto frontier.” Noting of course that lots of things that may be nonstrict now, especially in nitpicky senses, cease to be if you change the brain architecture they’re attached to. A totally deaf person won’t be frightened by loud noises, but when you can just install a better filter in your brain, that advantage goes away. Failing that, an easier fix is to give people the ability to turn their ears on and off. Turning everyone into shapeshifters may not be what you mean by “maximizing morphological freedom”, but it fits the literal meaning.
Because, like, regardless of whether or not this is right, it seems like something worth addressing.
tailcalled said:
I think that if you take utilitarianism seriously, you should probably just Shut Up and Multiply; assuming the ‘most-people-would-prefer-IQ-120+’ hypothesis is true, and that the individual preferences for high IQ are at least as strong as the preferences for low IQ, you get more utility by increasing people’s IQ.
Essentially, the question is why we should take ‘average human today’ as anchor rather than ‘the kind of human we want’.
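For what it’s worth, the multiplication can be written out explicitly (hypothetical numbers, purely illustrative):

```python
# Fraction p prefers IQ 120; the rest prefer 110, with preference
# strengths u_hi and u_lo respectively. All values below are made up.
p, u_hi, u_lo = 0.9, 1.0, 1.0

delta_u = p * u_hi - (1 - p) * u_lo  # net utility of raising everyone
print(delta_u)  # 0.8; positive whenever p >= 1/2 and u_hi >= u_lo
```

This only settles the all-or-nothing case, though; per-person choice, where feasible, dominates both options, as the replies below point out.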
Itai Bar-Natan said:
I think you’re presenting a false dichotomy. What Ozy is suggesting is to raise the IQ of anyone who wants it raised while keeping it low or lowering it for people who would prefer it that way. This option is strictly better than either increasing everyone’s IQ or lowering everyone’s IQ as long as there is at least one person who prefers it to be higher and at least one who prefers it to be lower (though you haven’t explicitly made this a dichotomy between these two possibilities, only discussed the more vaguely-stated option “increasing people’s IQ”). We are only forced into all-or-nothing thinking for people who are unable to state their preference, in which case I agree that we must (reluctantly) Shut Up And Multiply.
ozymandias said:
Or if the interventions are population-level, such as decreasing the amount of lead in the environment.
tailcalled said:
Ah, I think I misunderstood ozy’s position then; I thought they meant that only the people whose IQ is raised should get to decide, meaning that you would have to wait until you can get their consent. My bad. Do I understand correctly that the point of divergence of opinion is whether parents or society should decide whether higher IQ is likely to be better for the child?
Guy said:
If I were to render transhumanism down to a single statement representing some kind of simplified humanism, it would probably be “you should have what you want”, not “age, disability, and death are bad”. Obviously “what you want” then becomes a sticking point, as is much discussed elsewhere on the internet. But “you should have what you want” more clearly renders the terminal value I’m trying to reference when I say “I believe transhumanism is an overall good”. The other one says a bunch of things about what I want out of MY transhumanism, and what I guess most people want out of their transhumanism, depending on what they mean by disability, but I don’t really want anything at all out of YOUR transhumanism, and I think it would be immoral for me to want something from your transhumanism. You should have what you want (in some sense, at least), which may or may not be the same as or similar to what I want.
Also, as others have said, I fail to see how “help people transition fully to the body they desire” is distinct from “cure trans-ness”. I generally take “cure X” to mean “X is (or contains or references) some problem, and we need to make the problem go away”. Sometimes you have multiple methods for curing something; if you’re in hypothetical-land, this is basically always the case. If you’re talking about “curing trans-ness” I take the problem to be that people don’t have the bodies they want. There are two ways to cure this: you can adjust their mind to remove that desire, or you can adjust their body to match the desire. I take the mind to be more ethically valuable than the body, so I say adjust the body. So I suppose I’d add “the mind has priority over the body” to my summary of the core of transhumanism. That’s what makes it transhumanist specifically, I think. The body is a temple to the mind, so you should, if possible, remodel the temple to fit the desires of the being to which it is dedicated.
nancylebovitz said:
When I first read “curing gender dysphoria”, my thought was lowering the intensity of the dysphoria to “I’m not crazy about the gender of my body, but it’s not a great big deal to me”. If that were available (and cheap and safe), would this be a good thing?
Maxim Kovalev said:
Depends on whether it’s an object-level or meta-level preference, I guess. Clearly there would’ve been a lot of opposition to a pill that makes people less concerned about, say, the suffering in Africa, even though it would by any objective measure reduce the level of suffering/anxiety of those who take it. On the other hand, a pill that would make people less afraid of flying would be pretty popular, since few actually consciously believe that fearing it is the right thing to do.
Maxim Kovalev said:
That reminds me of an earlier post – https://thingofthings.wordpress.com/2015/05/05/thoughts-on-hedonic-utilitarianism/ – particularly, this part:
It seems to me that in both cases we have a system that’s mostly preference utilitarian, except when it comes to the author’s truly sacred values, that won’t be sacrificed for meta-level ideas about ethics. In Eliezer’s case it’s the preference for intelligence and immortality – eternal life is a strong net positive for him, and everyone who thinks otherwise must be mistaken. Furthermore, and as far as I can conclude from his essays, he equates dying because of lethally mistaken beliefs with not dying because of the same beliefs (because in an alternative universe they happen to be mistaken too, but non-lethally), and then being shot by a firing squad for having them – thus, no one should die because of mistaken beliefs, and thus immortality must be mandatory. You can poke holes in this argument, but I think it basically boils down to having a preference so strong it overrides all other moral considerations, with some rationalization.
In your case, you hold equally dear sex-positivity (in the broad correct sense) and GRSM acceptance, but not nearly so much immortality and intelligence, so the latter can be sacrificed in favor of something more preference-utilitarian, but not the former.
Now, it would be easy to say “let’s all get together and develop an ethical system free of personal biases”, but that’s not how things work. Sacred values are sacred precisely because they’re not fungible, and we’re not motivated to compromise on them – not as a long-term solution, at least. I mean, surely we can all agree on a truce, and even combine efforts in achieving some things we agree are strictly better than the status quo, but anything beyond that would require something more creative.
Is that supposed to be an argument for allowing parents to use eugenics to have paraplegic babies, or for not allowing them to use it to have deaf babies? I can totally see it working either way.
I agree, but only when we control for the fact that 120-year-olds are usually in a state where their body and mind are failing them, they’re likely to be in pain, they’re unlikely to find any new friends due to the debilitating aging process, they’re guaranteed to not have friends of their age, and they’re gonna die soon anyway. 20-year-olds can find themselves in roughly the same (although still not as bad) position if they’re dying of a terminal disease (most likely cancer). In this case I have no doubt in the validity and rationality of the decision to die: the remaining life is short and bad enough to predict with an incredibly high confidence that its value is negative.
That is, most of the rational decisions to die that are made as of now are highly conditional on the fact that all people die anyway, and they’re merely making this ending slightly sooner, if they believe they’re better off this way.
But any technology strong enough to achieve longevity escape velocity is strong enough to address these problems. You’re gonna die soon (where for some people “soon” may well mean 60 years, if they internalized existentialism well enough) anyway? No, you’re not – both 20-year-olds and 120-year-olds are in this case deciding to terminate over 99% of their remaining life. You’re in pain? No, you’re not – a sufficiently advanced technology will cure all pain, or at least be able to put people into cryostasis until the cure is found. You hate your body? No, you don’t – if immortality is achieved via mind uploading, then you don’t even have a body beyond a biorobotic avatar, which you can change on a whim, and if it’s achieved by nanites, you can fully rebuild it.
Hypothetically, I could see that some people will still prefer to be dead, but I expect this number to be orders of magnitude lower than it is now.
Same thing as above: anything that can provide immortality can also address this. The implications are obvious for nanites and mind uploading (someone’s identity is vacillating between male cat and female spider? Fine, with mind uploading they can have it! Not so easy with nanites – not so fast at least – but any body capable of sustaining a human brain would be fine, including any physically possible set of genitals, bone and fat structure, etc.), but even in its weakest form – gene therapy + organ regrowth – it can well give anyone a body typical for any sex, or even something slightly atypical. I’m not sure it would be easy to build from scratch a Y-chromosome for an AFAB person (aside from using a donor’s one), and use that to grow male genitalia, but in an AMAB person you won’t even need to replicate the existing X-chromosome, you just need to silence the Y-chromosome, since in AFAB people one of the X-chromosomes is silenced anyway. Heck, we can give people fertility typical for any sex, regardless of the genitals they were born with – and that’s something that’s conceivable to achieve within a decade or maybe two!
So, even in the foreseeable future we’ll be able to deal with most of the issues with physical dysphoria (I wonder: if we give an AMAB woman a working vagina, uterus, mammary glands, XX-karyotype, and otherwise make her a cis woman for any conceivable intent and purpose, but she’s still unhappy about wide shoulders and masculine facial features, is it classified as GID or BDD?), and in the less foreseeable but still conceivable future we’ll deal with any issue of physical dysphoria, full stop. That leaves us with purely societal issues, at which point it starts looking less like an issue with the brain or body, and more like an issue with equal rights – why are people policed (often with physical force) into behaving a certain way based on their genitals?! And then I’m totally sure it can be meaningfully classified as neurodivergence.
Anon said:
Just because technology may exist, it is not necessarily cheap. Most of the technology you mention implies that minds can probably be copied; once this happens, population growth will reduce median per-capita wealth or income to near subsistence.
You may also underestimate the probability that suffering and torture will be inflicted on purpose with condonement of the political system, and how rational preferences like suicide can be in even mildly dystopian scenarios, for the marginal inhabitant.
Maxim Kovalev said:
> Just because technology may exist, it is not necessarily cheap.
Well, if it obeys Bell’s law, it’s only a question of time.
What’s the incentive for that though? I would want to have a bunch of offline backup copies of my mind just in case, but I wouldn’t want a bunch of mes going around, each having an equally strong claim to my identity. And since no one dies, evolutionary processes only work partially – that is, as long as I live, my modus operandi wouldn’t go extinct, being outcompeted by the Quiverfull.
That’s a fair point, but it seems to me that once it gets dystopian enough for the government to massively torture citizens, individual preferences don’t matter anymore. If the torturer is gonna allow the victims to kill themselves, they could just kill the victims in the first place. Thus, the victims wouldn’t even be granted the right to choose what to do with their lives. Every scenario in which we discuss the right to death implies a relatively large degree of personal freedom, which doesn’t really go along with being tortured by the government.
Anon said:
>What’s the incentive for that though?
It’s a multipolar trap. Once mind copying becomes feasible, some entities will use it to maximum economic effect. Unless you are uniquely gifted, the market price of your labor will become that of the price to instantiate another copy of someone who can do it equally well. Unless this is globally banned or heavily taxed, in which case it becomes a matter of how effective the respective black markets will be.
>If the torturer is gonna allow the victims to kill themselves, they could just kill the victims in the first place.
The key insight is still that this makes longevity and other transhumanist technologies more harmful, because they can be used for evil. The ability to keep someone alive against their will indefinitely implies the ability to torture them indefinitely.
Maxim Kovalev said:
I see how it will work in a society that has mind uploading but doesn’t have AGI, but I don’t think such a society is possible. The amount of computing power you need to run a whole brain simulation is definitely much higher than what you need to run human-level AI, simply because human brains are vastly suboptimal for running math and logic, which is easily fixed by computers; furthermore, humans remained pretty high functioning after lobotomy, so even in wetware human-level intelligence can be achieved with less computing power. So whatever solution we find – if we find it at all, that is – to the problem that human labor becomes worthless, we’ll need it well before we can actually upload and copy human minds.
Patrick said:
I’m not particularly concerned about arguments like “what if someone doesn’t WANT to gain 10 IQ points?” Well, tough.
We learn to deal with the things we can’t change. We incorporate our understandings of the things we can’t change into our identity. Living life without ever coming to terms with those things, while always feeling like we’re not all we could be, is hard. Our minds shy away from it, and we decide that we like how we are. We’re not just accepting of it – we LIKE it. It’s WHO WE ARE.
But… so what? Counterfactually, that reasoning only ever happened because we found it psychologically necessary. How much value should that really have if we suddenly have the ability to change the previously immutable thing?
I see this issue a lot like… like women in utterly patriarchal societies who defend their society’s values. I believe that their feelings are genuine. But how often do women in societies that DON’T burn infertile women alive ever say, “You know what our society could use more of? Infertile women, on fire!” Pretty much never. If something becomes a way of life you can’t change, you incorporate it into your sense of self. But if it never becomes that… well, things are different.
Or maybe it’s better illustrated by pro life activists who say, “But if your mother had aborted you, you wouldn’t be here!” But in that counterfactual world, me not being there isn’t important. No one ever points to the empty space next to them and says, “If not for an abortion, there would be a person here! Woe for that person!” The counterfactual only has emotional impact because I AM here, and capable of imagining not being here, and feeling like that’s bad. But whether my emotions are a valid guide to that is highly debatable.
TLDR, I don’t think person-by-person human psychology is a useful guide to these things, because I don’t think people are capable of rationally considering their own counterfactual selves.
ozymandias said:
I am confused about why this argument does not apply to raising IQs in the same way that it applies to lowering IQs. Surely people are equally incapable of reasoning about their counterfactual selves either way?
Also lots of people are sad about aborted fetuses – google “abortion regret”.