
Recently, I’ve grown extremely skeptical of thought experiments as a way of finding truth.

Consider the Chinese Room thought experiment. A man who does not speak Chinese receives a series of Chinese characters from the outside. He flips through a very big book that tells him which set of Chinese characters to send out in response. Intuitively, the system could not be said to really understand Chinese. The conclusion is that mere symbol manipulation, such as that performed by a computer program, cannot be said to understand things.
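To make the “mere symbol manipulation” point concrete, here is a minimal sketch of the rulebook as a lookup table. This is my own illustration, not anything from Searle; the phrases and canned replies are invented placeholders:

```python
# A toy "Chinese Room": the rulebook is just a lookup table that maps
# input strings to canned responses. Nothing here models meaning;
# the program only matches symbols to other symbols.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "It's nice today."
}

def respond(message: str) -> str:
    # The man in the room: find the matching rule, copy out its answer.
    return RULEBOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

print(respond("你好吗？"))  # prints 我很好，谢谢。
```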

Now consider an objection I read somewhere but tragically haven’t been able to find again. [ETA: It’s made by Scott Aaronson! Thanks to Anonymous Colin, AlexR, and embrodski.] A dictionary that could take any sentence of Chinese and come up with a coherent response would have to be huge: maybe the size of a planet, maybe larger. Even looking things up in that dictionary in a timely manner would be a huge endeavor, probably involving all manner of machines and robots and computer programs and other such things. If you imagine that, it suddenly gets a lot harder to say that the system doesn’t understand Chinese.
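A back-of-the-envelope calculation shows why. The numbers below are invented for illustration (a modest vocabulary, sentences capped at twenty characters), and the real figures would only make the book bigger:

```python
import math

# Invented, conservative numbers: a few thousand common characters,
# sentences capped at 20 characters.
vocab_size = 3000
max_length = 20

# Count every possible sentence of length 1 through max_length.
possible_sentences = sum(vocab_size ** n for n in range(1, max_length + 1))

print(f"possible inputs: about 10^{int(math.log10(possible_sentences))}")
# -> about 10^69, far more entries than there are atoms on Earth (~10^50)
```

Even granting one dictionary entry per atom, the book outgrows the planet by many orders of magnitude.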

Or compare an objection from “Animal Rights: Legal, Philosophical, and Pragmatic Perspectives” (link goes to the book containing the essay), an essay by Richard Posner I read recently, which argued that utilitarian views on animal rights are wrong because they imply that one should help a stuck pig before a stuck human if the stuck pig is suffering more. Of course, that framing brings to mind (at least for me) the image of both a human and a pig having a mild cut. If one instead says that the pig is suffering unimaginably brutal tortures while the human has a stubbed toe, obviously the pig should be helped first. But if you grant that, then, as Winston Churchill said to the beautiful woman, we already know what you are; now we’re just haggling about the price.

My point here is not about the Chinese Room thought experiment or animal rights or Winston Churchill. Hopefully, even if my examples do not change your own opinion of these thought experiments, you can understand how they would change it for other people. My point is that the intuitive response to a thought experiment rests on small details of how it is framed, details we might not even recognize consciously: the idea that the man is sitting in a relatively small box with a book in front of him on the desk, or that both the pig and the human are cut only a little bit. If you change those details, the intuitive response changes greatly.

I think a lot of this is because we may know the answer our intuition produced, but we don’t know how it got there. If you actually knew that the reasoning process was “both the pig and the human have a cut; the pig seems to be in more distress; humans generally matter more than pigs; the pig isn’t in that much more distress; I will help the human,” then this would obviously not cause you to disagree with Peter Singer’s ideas about animal rights. But if all you have is “I will help the human,” you can imagine all sorts of things about how you got there, and unless you happen to think of the thought experiment that proves your imagination wrong, you won’t ever notice.

This means that thought experiments are not terribly reliable for establishing knowledge. You may think “yes! I have established the consistent ethical principle that pigs only matter to the extent that humans care about them!”, but in reality you have only established that you care less about pigs that have been cut a little bit than about humans that have been cut a little bit. That is a principle of great utility in veterinary triage, but not exactly the sort of thing you want to ground a philosophy of animal rights on.

For this reason, I have started to back away from using thought experiments. Whenever possible, I refer to specific details of factual situations that I am thinking about; when it is not possible, I try to keep my thought experiments narrowly tailored, and I keep an eye out for details that may cause the reader to have intuitive reactions for reasons they don’t necessarily endorse.