[Previous: Models of Neurodivergence.]

Materialism. The first principle of the transhumanist model of neurodivergence is that there is no soul. There is no distinction between my brain and my me. There is no homunculus floating somewhere around in my head making decisions that aren’t a product of my brain chemistry. Everything we feel, think, say, or do is a product of the interactions of neurons and, ultimately, of atoms banging into each other.

This means that, contra the medical model, there is no principled difference between a mental illness and a character flaw. You can’t be like “well, it’s a mental illness if it’s because of your brain chemistry”. Every emotion, from the most functional or typical to the least, is a product of brain chemistry.

It’s no secret that a lot of mental health diagnosis is sort of arbitrary. Look at the Beck Depression Inventory. There’s no particular reason why, if Alice and Bob are identical except that Alice circled “I cry more than I used to” and Bob circled “I cry all the time now”, Alice should be classified as “borderline depression” and Bob “moderate depression.” But you have to draw the line somewhere, and that’s where they chose to draw the line.
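To make the arbitrariness concrete, here’s a minimal sketch of how cutoff-based scoring turns a one-point difference into a different label. The bands are the commonly circulated interpretation ranges for the original inventory, and Alice’s and Bob’s totals are hypothetical; treat the exact numbers as illustrative assumptions rather than the official scoring manual.

```python
# Minimal sketch of cutoff-based scoring (illustrative bands, not the official manual).
BANDS = [
    (10, "normal ups and downs"),
    (16, "mild mood disturbance"),
    (20, "borderline clinical depression"),
    (30, "moderate depression"),
    (40, "severe depression"),
]

def classify(total: int) -> str:
    """Return the first band whose upper bound the total does not exceed."""
    for upper_bound, label in BANDS:
        if total <= upper_bound:
            return label
    return "extreme depression"

# Alice and Bob answer every item identically except the crying item, so their
# totals differ by a single point and fall on opposite sides of a cutoff.
alice_total = 20  # hypothetical total with "I cry more than I used to"
bob_total = 21    # hypothetical total with "I cry all the time now"

print(classify(alice_total))  # borderline clinical depression
print(classify(bob_total))    # moderate depression
```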

What’s more, it seems probable to me that some disorders, such as generalized anxiety disorder, actually are the extreme end of normal human variation: a quantitative difference rather than a qualitative one. Some people are very easygoing and don’t worry at all; some people worry sometimes and not other times; and some people worry constantly. But worrying constantly usually makes people unhappy and makes it harder for them to do things, so we have decided that worrying way more than average is a mental disorder, one that gives you access to therapy and medication. This is not exactly what one would call a well-grounded distinction.

The materialism point is not to say that we shouldn’t criticize people for character flaws. Sometimes criticism is an effective way of changing behavior. However, criticism is also an effective way of changing some behavior caused by neurodivergence: I myself am a lot less likely to get depressed when I’m going to be criticized for not doing the self-care things that prevent depression. And it is an ineffective way of changing a lot of behavior not considered mentally ill: yelling at someone for being normally forgetful often doesn’t help them be less forgetful.

Fuck ‘normal function’. Transhumanism is based on the recognition that just because an impairment is common doesn’t mean it’s good. Perhaps the worst impairment of all– death– is one every human will face. (So far. Growth mindset!) Normal humans can’t spin off subagents to work on particular tasks, add twelve-digit numbers in their heads, avoid confabulating memories, or change their minds as much as they should in response to new evidence. These impairments are just as important as more uncommon impairments such as ADD.

All too often, the impairments that are medicalized are not the worst ones: they’re the least common ones. If most people could avoid memory confabulation, the few who still confabulated would be considered psychotic. Since everyone does it, it is a normal part of life. But I see no reason why popularity should be the morally relevant criterion for whether an impairment ought to exist.

Morphological freedom. Eliezer writes in Prolegomena to a Theory of Fun:

In the era of my foolish youth, when I went into an affective death spiral around intelligence, I thought that the mysterious “right” thing that any superintelligence would inevitably do, would be to upgrade every nearby mind to superintelligence as fast as possible.  Intelligence was good; therefore, more intelligence was better…

But the real break came when I naturalized my understanding of morality, and value stopped being a mysterious attribute of unknown origins.

Then if there was no outside light in the sky to order me to do things—

The thought occurred to me that I didn’t actually want to bloat up immediately into a superintelligence, or have my world transformed instantaneously and completely into something incomprehensible.  I’d prefer to have it happen gradually, with time to stop and smell the flowers along the way.

I mention this not to make any point about intelligence increase after the Singularity, but to point out that people can, in fact, value being weaker than they could otherwise be. Eliezer Yudkowsky (at least) doesn’t want to immediately bloat up into a superintelligence. And by the principle that all impairments are equal, this means that some people can value having the suboptimal brains they currently have. I value my strong emotions; this is no different than Eliezer valuing his sub-superintelligence IQ.

The transhumanist model of neurodivergence therefore embraces “morphological freedom.” Morphological freedom means that I decide how I want to change or not change my own body and brain; my informed consent is all that is required to cause a particular change.

This means that if you have a normal-human brain and want to have a better one, it is okay for you to use medications and other interventions to get there. If you want to take modafinil so you sleep less or work better, you have a perfect right to do so. Similarly, if you think therapy will help you understand yourself or deal with some subclinical issue like procrastination, there is nothing wrong with seeking therapy.

On the other hand, it means that you have a right to refuse intervention. You can refuse intervention because the side effects are too bad: for instance, not taking antidepressants because you dislike the sexual side effects. You can also refuse it because you value your brain being a particular way: for instance, many autistic people genuinely don’t want to be nonautistic, and it is morally wrong to make them so.

Morphological freedom does permit stabilization. If someone is disconnected from reality and hasn’t left a mental illness advance directive, then it may be justified to give them medications until they’re connected to reality enough that you can ask what their preferences are. (However, if someone in sound mind prefers to be psychotic, they should be allowed to remain so.) In addition, morphological freedom may permit discouraging people from using addictive substances (including by making them illegal): if a substance is addictive, then people may be using it for reasons other than their own best interests.

Finally, morphological freedom does not mean that doctors can’t help: I might know that I want to have a particular brain state, but I don’t know how to get there, and psychiatrists, psychologists, and so on may play an important role as expert consultants.

Accommodation. The transhumanist model of neurodivergence accepts the social model’s distinction between impairment and disability. If no one can spin off subagents to work on problems, then (for obvious reasons) you aren’t going to have a society that requires people to be able to do that. But if people could, then the hypothetical person who couldn’t would suddenly have a lot of difficulty finding a job.

Our society should strive to accommodate everyone’s impairments as best it can, given current technology. There are two reasons for this. First, we cannot fix every impairment, and since there is no moral difference between uncommon impairments and common ones, there is no reason that common impairments should be accommodated any better than uncommon ones (beyond the ordinary considerations of tradeoffs).

Second, the right to change your brain in any way you like is not a particularly useful right if, once you exercise it, you proceed to starve to death.

As I previously wrote in a tumblr post justifying anti-ableist activism:

when the posthumans come— when there are beings that think faster than us, know more, can alter their preferences and share their source code and branch into a thousand selves and merge again—

would you want them to change you, against your will, no matter how painful the changing is or what you value about yourself you would lose?

and if they couldn’t change you, would you want them to abandon you, lost, in a world full of signs you can’t see (everyone sees in ultraviolet) and conversations you can’t comprehend (everyone knows as much as Wikipedia) and sensations that overwhelm you (everyone likes noises as loud as a jet airliner taking off), a world you can’t function in and can barely comprehend?

or would you want them to be kind?

and if you hope the posthumans— creatures unimaginably alien, unimaginably superior— would be kind, then how can you not justify being kind to those a little less optimal than you, right now?

And that is where I stand, as a transhumanist and a disability rights advocate.

Further Reading

Nick Bostrom, Transhumanist Values
Scott Alexander, Diseased Thinking: Dissolving Questions About Disease
Sarah Constantin, Errors vs. Bugs and the End of Stupidity
Liz Tarleton, Transhumanism and Disability
Ron Amundson, Against Normal Function

Feel free to share more resources you believe belong on this list!