
[epistemic status: consciousness discussion makes my head hurt, so I’m not certain that I’m not retreading material already covered by philosophers or making obvious mistakes]
[content warning: anti-abortion, filicide]

Consciousness is a thorny thing, and there have been many arguments about it. About halfway through any argument about how we tell whether a being is conscious, I get lost and develop a headache, which is a real impediment to coming to a firm conclusion about which beings have this trait I can experience but cannot define. Unfortunately, the ability to experience qualia is what, in my utilitarian framework, makes a being have moral weight at all.

(Note that throughout this blog post I am using “conscious” to mean “there is a thing that it is like to be you”, not to mean “self-aware” or “has a sense of self”. It seems plausible to me that there can be a thing that it is like to be a bat, without the bat having an “I”.)

So: I know I am conscious. I am as close to certain about this as I am about any claim.

One might become a solipsist at this point and say that only oneself is morally relevant, because only oneself is known to be conscious. However, this claim is unsatisfying; to me, the consciousness situation seems like an opportunity for probabilistic reasoning. I might not know that you are conscious for certain, but I may be 99.99% sure that you’re conscious, and that’s enough to get on with.

“Consciousness” seems like it produces certain behavior in me. Most obviously, it causes me to say that I am conscious; it also seems related to my seeking pleasant experiences and avoiding unpleasant ones, and to my ability to express what I am experiencing in ways both trite (“I’m in a bad mood today”) and meaningful (poetry).

I can also note that my brain produces my consciousness. As a materialist, I believe that I am my brain; there is no ghost in the machine causing my ability to think. While I don’t know where consciousness is encoded in the brain, as that would require a better sense of what consciousness is, I can conclude that brains similar to mine are also more likely to be conscious.

Neither line of reasoning is perfect. It seems plausible that aliens would evolve consciousness, even though they don’t have human brains (they might not even have brains) or human DNA (they might not even have DNA). It seems plausible that some humans (perhaps those with severe brain damage) are not conscious. It seems plausible that some beings have conscious experience without the ability to convey that they do: for instance, a person with locked-in syndrome who doesn’t have access to a way of communicating via eye movement. And some philosophers have imagined beings that behave exactly as if they are conscious but are not, although the notion of the philosophical zombie is controversial. However, I think that by combining both lines of reasoning, and by erring on the side of assuming beings are conscious, because the harm of treating a nonconscious being as conscious is far less than the harm of treating a conscious being as nonconscious, we can reduce the risk of these errors.
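
To make that asymmetry concrete, here is a toy expected-harm sketch. Nothing in it comes from the argument above except its shape: the function, the harm magnitudes, and the 100-to-1 ratio between the two kinds of error are made-up numbers, used only for illustration.

```python
# Toy expected-harm comparison for a being whose consciousness is uncertain.
# All magnitudes below are hypothetical, chosen only to illustrate the asymmetry.

def expected_harm(p_conscious: float,
                  harm_wrongly_dismissed: float,
                  harm_wrongly_included: float,
                  treat_as_conscious: bool) -> float:
    """Expected moral harm of a policy toward a possibly-conscious being.

    p_conscious: credence that the being is conscious.
    harm_wrongly_dismissed: harm of treating a conscious being as nonconscious.
    harm_wrongly_included: cost of treating a nonconscious being as conscious.
    """
    if treat_as_conscious:
        # Only possible error: extending moral concern to a being with no experiences.
        return (1 - p_conscious) * harm_wrongly_included
    # Only possible error: disregarding a being that actually has experiences.
    return p_conscious * harm_wrongly_dismissed


# Suppose (hypothetically) that the "dismiss a conscious being" error is 100x
# worse than the "include a nonconscious being" error.
for p in (0.01, 0.1, 0.5):
    include = expected_harm(p, 100.0, 1.0, treat_as_conscious=True)
    dismiss = expected_harm(p, 100.0, 1.0, treat_as_conscious=False)
    print(f"p={p:.2f}: treat as conscious -> {include:.2f}, "
          f"treat as nonconscious -> {dismiss:.2f}")
# At p=0.01 the two options are roughly tied (0.99 vs 1.00); by p=0.10,
# dismissing is already about ten times worse (0.90 vs 10.00).
```

The only point is that when one error is much worse than the other, even a modest credence that a being is conscious is enough to tip the expected harm toward treating it as conscious.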

Both lines of reasoning support the idea that other humans are conscious. They are capable of all the behavior I associate with my own consciousness, such as goal-directed behavior, communication of internal experiences, and self-identification as a conscious being. Furthermore, they have brains that are structured similarly to mine, produced by the same developmental process as mine.


It is common, among anti-speciesists, to deplore prioritizing humans over other species. We have agreed that gender or race or country of origin is an arbitrary distinction which does not justify caring about some people more than we care about other people [citation needed]. Species membership is similarly arbitrary. It makes no sense to prioritize a human infant over a similarly intelligent dog (other than for instrumental reasons like “the parents would be sad” or “the child will grow up to be much smarter”); if we encountered Vulcans, we ought not be biased against them because they aren’t human. What we ought to care about is ability to have experiences and, by extension, to suffer.

But what this argument ignores is that, as far as I am aware, philosophy and psychology have yet to come to a consensus about how to figure out which beings are conscious and which are not. 100% of the beings I am absolutely positively certain are conscious (i.e. Ozy Frantz) are human, as are 100% of the beings I am pretty damn certain are conscious (at least at this stage of AI development and space exploration). Therefore, having human DNA is very plausibly evidence that a being is conscious.

This is why I’m leerier about aborting human fetuses than I am about killing an animal at an equivalent level of development, intelligence, ability to interact with the world, etc.; similarly, this is why I object to killing severely disabled people who do not behave as though they have conscious experience.