[blog note: My friend Sara Luterman is kickstarting NOS Magazine, a magazine by and for neurodivergent people. Potential contributors include a lot of awesome disability rights people and me. Consider backing it! If you pledge $45 you get a stim toy!]
[content note: this essay contains a justification of people’s right to commit suicide.]

There is a lot to like about “Transhumanism as Simplified Humanism.” For one thing, it bashes bioethicists, and bioethicists are pretty much universally worthy of bashing once they leave the time-honored “being a Nazi is bad, don’t be a Nazi” territory. However, its fundamental argument makes me cringe:

Suppose a boy of 9 years, who has tested at IQ 120 on the Wechsler-Bellevue, is threatened by a lead-heavy environment or a brain disease which will, if unchecked, gradually reduce his IQ to 110. I reply that it is a good thing to save him from this threat. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle saying that intelligence is precious. Now the boy’s sister, as it happens, currently has an IQ of 110. If the technology were available to gradually raise her IQ to 120, without negative side effects, would you judge it good to do so?

(Unfortunately, age makes this a bit complicated, because we don’t grant nine-year-olds the same autonomy we do adults, and so we have to make decisions based on what we hope the nine-year-old’s adult self would want. If most people prefer an IQ of 120 over one of 110, then we should cure his brain disease and raise his sister’s IQ. I am going to solve this problem by pretending Eliezer said “eighteen-year-old” instead.)

I would much prefer to have an IQ of 120 rather than an IQ of 110. Eliezer Yudkowsky would prefer to have an IQ of 120 over one of 110. Perhaps most people in the world would prefer to have an IQ of 120 over one of 110, although that needs to be shown and not simply assumed based on my and Eliezer Yudkowsky’s preferences. However, it does not seem true to me that everyone in the world prefers an IQ of 120 over one of 110. People want a lot of different things! Humanity contains death metal fans and Leon Kass and people who willingly consume zucchini chocolate cake with tofu frosting. Are you honestly expecting me to believe that in all of humanity there’s no one who will say “actually, I prefer to be only two-thirds of a standard deviation above average in IQ, thank you”?

Lots of people have an impairment they would prefer to keep. The obvious examples, of course, are people who wear glasses and haven’t gotten LASIK, as well as people who write articles in the New York Times about how we are medicating away childhood/great art/whatever. But I am pretty sure most people reading this can think of an impairment they’d like to keep, or that they agree reasonable humans would want to keep: an IQ that is lower than superintelligence? a sexuality that is not capable of enjoying every conceivable sex act with every conceivable person, with a dimmer switch for when you need to concentrate on something else? a face that doesn’t look like one of those averaged-out Most Attractive Person faces?

I admit this may seem a bit nitpicky. After all, it may very well be that most people would prefer to have an IQ of 120 to one of 110. However, the issue of the right to be impaired is a live issue for disabilities from Deafness to autism; the only reason it isn’t an issue for trans people is that we’re going “LA LA LA LA LA LA LA WE AREN’T A NEURODIVERGENCE IT IS TOTALLY TRANSPHOBIC TO CALL US A NEURODIVERGENCE” in defiance of all logical classification schemes. I don’t think anyone could deduce from first principles that loss of hearing is a valuable part of many people’s lives that they want to pass on to their children, while paraplegia isn’t.

Eliezer also discusses death:

If a young child falls on the train tracks, it is good to save them, and if a 45-year-old suffers from a debilitating disease, it is good to cure them. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle which says “Life is good, death is bad; health is good, sickness is bad.” If so – and here we enter into controversial territory – we can follow this general principle to a surprising new conclusion: If a 95-year-old is threatened by death from old age, it would be good to drag them from those train tracks, if possible. And if a 120-year-old is starting to feel slightly sickly, it would be good to restore them to full vigor, if possible. With current technology it is not possible. But if the technology became available in some future year – given sufficiently advanced medical nanotechnology, or such other contrivances as future minds may devise – would you judge it a good thing, to save that life, and stay that debility?

But what if the forty-five-year-old, or the ninety-five-year-old, or the hundred-and-twenty-year-old, wants to die?

I think ignoring that question concedes one of the strongest points anti-deathists have against deathists. Are you afraid that an immortal life would become boring, or take away your urgency to accomplish things, or prevent you from enjoying an eternity of bliss with your deity? Great! You can die. We aren’t going to stop you. But there is no call to go around imposing your values on people who would like to stick around until the sun goes out.

I’m inclined, if anything, to reverse this argument: if a 120-year-old can, after careful thought and consideration, decide that they’ve had quite enough of the world and are done now, then a twenty-year-old should be able to do the same. (Obviously, many people have distorted beliefs about whether they should kill themselves, and I have no ethical problem with waiting periods or screening to make sure that the person isn’t being pressured.) Instead of leaving one’s time of death up to chance, or requiring people to live out ever-increasing lifespans, we can trust individuals to decide for themselves whether they want to keep living.

I don’t accept “life is good, death is bad; health is good, sickness is bad.” But my transhumanism is simplified humanism too. Is not autonomy a fundamental humanist value? My understanding of autonomy doesn’t come with a hidden rider of “…as long as you don’t make any choices we disapprove of too heavily.” You don’t have your options limited to choices that are “natural” or don’t squick out bioethicists. And you can make a choice about the most fundamental decision facing human beings – the decision to live or to die – instead of leaving it to chance.