Recently, I’ve grown extremely skeptical of thought experiments as a way of finding truth.
Consider the Chinese Room thought experiment. A man who does not speak Chinese receives a series of Chinese characters from the outside. He flips through a very big book which tells him which set of Chinese characters to send out as a response. Intuitively, the system could not be said to really understand Chinese. The conclusion is that mere symbol manipulation, such as that performed by a computer program, cannot be said to understand things.
Now consider an objection I’ve read somewhere but I tragically haven’t been able to find. [ETA: It’s made by Scott Aaronson! Thanks to Anonymous Colin, AlexR, and embrodski.] A dictionary that could take any sentence of Chinese and come up with a coherent response would have to be huge— maybe the size of a planet, maybe larger. Even to look things up in that dictionary in a timely manner would be a huge endeavor, probably involving all manner of machines and robots and computer programs and other such things. If you imagine that, suddenly it gets a lot harder to say that the system doesn’t understand Chinese.
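For a sense of the scale involved, here is a rough back-of-envelope sketch in Python (the figures are my own illustrative assumptions, not Aaronson’s): with about 10,000 characters in common use, there are 10,000^20 = 10^80 possible twenty-character inputs, which is on the order of the number of atoms in the observable universe.

    # Back-of-envelope scale of a Chinese Room lookup table.
    # Illustrative assumptions: ~10,000 characters in common use,
    # inputs of up to 20 characters indexed exhaustively.
    alphabet_size = 10_000
    sentence_length = 20
    table_entries = alphabet_size ** sentence_length
    print(f"{table_entries:.1e} possible inputs")  # 1.0e+80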
Or consider an objection from “Animal Rights: Legal, Philosophical, and Pragmatic Perspectives” (link goes to the book containing the essay), an essay by Richard Posner I read recently, which argued that utilitarian views on animal rights were wrong because they imply that one should help a stuck pig before a stuck human if the stuck pig is suffering more. Of course, that framing brings to mind (at least to me) the idea of both a human and a pig having a mild cut. If one instead says that the pig is suffering unimaginably brutal tortures while the human has a stubbed toe, obviously the pig should be helped first. But if you grant that, then, as Winston Churchill said to the beautiful woman, we already know what you are, and now we’re just haggling about the price.
My point here is not about the Chinese Room thought experiment or animal rights or Winston Churchill. Hopefully, even if my examples do not change your opinion of the thought experiments, you can understand how they would change it for other people. My point is that the intuitive response to a thought experiment depends on small details of its framing that we might not even recognize consciously (like the idea that the man is sitting in a relatively small box with a book in front of him on the desk, or that both the pig and the human are cut a little bit) and that if you change those details the intuitive response changes greatly.
I think a lot of this is because we might know the answer our intuitions produced, but we don’t know how they got there. If you actually knew that the reasoning process was “both pig and human have a cut, pig seems to be in more distress, humans generally matter more than pigs, the pig isn’t in that much more distress, I will help the human,” then this would obviously not cause you to disagree with Peter Singer’s ideas about animal rights. But if all you have is “I will help the human,” you can imagine all sorts of things about how you got there, and unless you happen to think of the thought experiment that proves your imagination wrong, you won’t ever notice.
This means that thought experiments are not terribly reliable for establishing knowledge. You may think “yes! I have established the consistent ethical principle that pigs only matter to the extent that humans care about them!”, but in reality you have only established the principle that you care less about pigs that have been cut a little bit than about humans that have been cut a little bit. That is a principle of great utility in veterinary triage, but not exactly the sort of thing you want to ground a philosophy of animal rights on.
For this reason, I have started to back away from using thought experiments. Whenever possible, I refer to specific details of factual situations that I am thinking about; when it is not possible, I try to keep my thought experiments narrowly tailored, and I keep an eye out for details that may cause the reader to have intuitive reactions for reasons they don’t necessarily endorse.
The objection to the Chinese Room experiment was from Scott Aaronson, and he’s made it in a couple of different places.
That particular objection would just have your standard philosophy student scoffing, “Oh, so you’re saying you just need to make it bigger! That intelligence is just a matter of size! Hah!”
I prefer to turn it around with the neurons-in-the-brain room: instead of a tiny, incredibly fast person in a gigantic room, the inside of the room is a human brain that actually does know Chinese. Unfortunately, the same guy who runs around inside the book room has to run around inside the brain room, laboriously picking up and carrying neurotransmitters from synapse to synapse. *He* doesn’t understand Chinese, and he can see that there’s obviously no “understanding” going on in the neurotransmitters he’s carrying around, just chemicals being shuffled around.
And of course there’s the luminous room, which rather complements Ozy’s position.
“Consider a dark room containing a man holding a bar magnet or charged object. If the man pumps the magnet up and down, then, according to Maxwell’s theory of artificial luminance (AL), it will initiate a spreading circle of electromagnetic waves and will thus be luminous. But as all of us who have toyed with magnets or charged balls well know, their forces (or any other forces for that matter), even when you wave them up and down as fast as you can, produce no luminance at all. It is inconceivable that you might constitute real luminance just by moving forces around!”
Intuitively it sounds correct. To a philosopher this would constitute a convincing argument that AL was impossible.
In reality the problem is that he would have to wave the magnet up and down something like 450 trillion times per second in order to see anything; but because all we see in our mental image is someone waving a magnet in the air, we conclude that it’s false.
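For anyone who wants to check that figure, here is a quick sketch in Python (the wavelength is a representative value for red light, my own assumption): the frequency of visible light is the speed of light divided by its wavelength.

    # Checking the "450 trillion times per second" figure:
    # visible-light frequency f = c / wavelength.
    c = 3.0e8            # speed of light, m/s
    wavelength = 650e-9  # red light, roughly 650 nm (representative choice)
    print(f"{c / wavelength:.2e} Hz")  # ~4.6e14 Hz, i.e. ~460 trillion cycles per second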
Thought experiments are “intuition pumps,” not really arguments. Of course their every detail is orchestrated to fit the conclusion. This isn’t a reason not to use them at all, as long as everybody knows this.
Have you read Dennett’s book about intuition pumps? I think I understand the term, from context, and I like how you’re using it. Is it worth it for me to read the book as well?
I’ve read Dennett’s “Intuition Pumps” and found it *hugely* useful for clear thinking about several fields in philosophy. I highly recommend it.
I also think Ozy’s thoughts on the problems of thought experiments match up well with Dennett’s discussion: some thought experiments (a kind of intuition pump) *do* effectively help us understand a concept, while *other* thought experiments subtly cover up crucial details (like the Chinese room does) and so mislead us. It can be tricky to tell the difference.
I’ll observe that your post on the issue with thought experiments revolves around a thought experiment as a line of reasoning for demonstrating the issue with thought experiments.
They’re not trivial to avoid.
Those are not thought experiments. Those are examples. I’m not saying “intuitively, we accept the proposition that X, and therefore we must consistently accept that thought experiments are bad”; I’m saying “your intuitions can change if you change minor details; here are some instances of that happening.”
I think you’re drawing a strong distinction between “thought experiment” and “example” that doesn’t actually exist, and on the same note, if we changed minor details in your examples, intuitions about those examples would change.
Your entire article is based around a thought experiment (“example”) where changing minor details makes the conclusion being drawn incorrect. If we change some minor details about your article, the conclusion we draw from your article is incorrect; namely, the idea that thought experiments are about proving general principles about reality based on specific theoretical situations.
You omit the possibility that a consistent/general principle can be -disproven- by a thought experiment, which is the more natural place for thought experiments, and where they are correctly used.
Which is to say – you’re engaging in exactly what you’re criticizing, trying to draw general principles from a small number of examples. We could say you’re disproving the notion that thought experiments prove general principles – but that would be a strawman or at least a weakman, because sensible people don’t actually do this.
1) I know I’m making a comment on an old thread. If I should not do this then, please, let me know and I will refrain from doing so in the future.
2) I don’t know if Ozy’s post is a thought experiment or not. I haven’t formed an opinion on it. In the remainder of this post I will act as if it is, not because I think it is or even that it is more likely, but because I have the ability to work with propositions that may or may not be true. Such is necessary for inductive reasoning. (I also have the ability to work with propositions that I know to be false, which is also sometimes useful.)
3) It doesn’t matter whether Ozy’s post is a thought experiment. From a pure logic perspective one of three things must be true: “Thought experiments are always valid,” “Thought experiments are never valid,” “Thought experiments are sometimes valid and sometimes not.” For the record, I go with the last option.
“Thought experiments are always valid,” and “Ozy’s post is a thought experiment,” are inconsistent and cannot both (simultaneously) be true. Ozy’s post argues that thought experiments are not always valid (or perhaps something stronger) and thus if it is a thought experiment that means that the first option cannot be true.
“Thought experiments are never valid,” and “Ozy’s post is a thought experiment,” are consistent. In this case (if both are true) Ozy’s post comes to a correct conclusion by accident. “Thought experiments are sometimes valid and sometimes not” is also consistent with “Ozy’s post is a thought experiment.” I hope the consistency is obvious, but if both are true it would necessitate a closer look at Ozy’s post than the other two situations would.
Thus, if Ozy’s post is a thought experiment, then its conclusion is correct, because the only forbidden statement would be “thought experiments are always valid.” Since, in this case, one should not blindly trust thought experiments, one should be skeptical of them.
Orphan, I don’t know if you were intending to use an apparent contradiction to discredit the point of the post, but if you were, then the apparent contradiction is not in fact a contradiction, and in fact proves the conclusion of the post if the post is a thought experiment. If you intended no such thing, then I apologize for suggesting that you might have.
4) In any case, I think the advice that Ozy appears to be advocating is good advice regardless of whether or not the post qualifies as a thought experiment. For my part, I consider thought experiments useful tools for exploring or explaining something, but never sufficient evidence to come to a firm conclusion about something. Ozy’s post has reinforced this for me.
This, and related issues, are a recurring topic of discussion in law schools. One of the primary components of legal education, for better or worse, is repeatedly giving students hypothetical scenarios designed to cause the student to intuit contextual details that would lead to one conclusion, while ordering the student to stipulate to different details, or to the absence of additional details. The first year of law school is filled with students being asked to analyze scenarios, students responding that they can’t believe the scenario would ever happen, or insisting that if the scenario happened we can presume that certain other things also happened, and professors informing them that this is not an acceptable response. The overall goal is to teach you to curtail your intuitive instincts and only consider what’s in front of you.
By year two a rift develops in the class between those capable of doing this and those who cannot. The former find the latter very aggravating.
Assuming things the hypothetical didn’t stipulate, or refusing to assume what the hypothetical does stipulate, is called “arguing the hypothetical.”
This isn’t exactly what you’re talking about, but it’s related.
Other random comments- some thought experiments are actually cynical priming efforts designed to lead you to a given conclusion. Any time a thought experiment asks you to agree with one seemingly agreeable statement then immediately asks you to agree with a more extreme conclusion that seemingly derives from the first statement, you should reach for your revolver. The prototypical example is the person who asks you to agree that a given scenario is theft or murder or whatever, offers you an explanation for why in hopes that you’ll bite, then prompts you to agree that a different scenario is also theft or murder or whatever based on that definition, while suggesting that if you reject the latter scenario you must be rejecting the former as well. There are libertarian arguments against taxation that follow this format, but also arguments against the death penalty, and both for and against abortion.
The biggest worry I have, and I’m not sure how real it is, is that by considering these scenarios over and over and insisting that their implausibility should not affect our judgment, we shift our framework for thinking about the world until we’re more likely to believe these scenarios have real-world plausibility.
This comment just grew longer and less directed than I intended, but, I dunno. Internets.
I don’t know where _you_ heard that response to the Chinese room thought experiment, but _I_ heard it in Scott Aaronson’s “Why Philosophers Should Care About Computational Complexity”, which is thought-provoking, well-written, and a comparatively accessible introduction to the topic (thus I highly recommend it to everyone): https://arxiv.org/abs/1108.1791
I second Alex R’s recommendation of that paper.
In this way, thought experiments are quite similar to real scientific experiments. Small differences may significantly change the results, and good results require nontrivial planning. But instead of exploring the outside world, thought experiments aim to explore our intuitions about the situations.
The solution to the unreliability of thought experiments is more thought experiments. It’s possible that a given experiment doesn’t isolate the relevant principles effectively, which is why it can be valuable to consider a variety of experiments about the possibly relevant factors.
If isolating the principles is difficult in thought experiments, it’s significantly worse in real-life scenarios, which are much more ambiguous and often mind-killing. An argument that would be considered fighting the hypothetical in a thought experiment could be sound for a real-life situation. It’s a lot easier to overlook a relevant principle when there’s an established social norm about how to act in a given situation. Sometimes in a real-life situation one feels the pull of many conflicting intuitive-seeming duties. Thought experiments at least cut down on those problems.
I agree completely. Trying to establish universal principles from a single real-life example is much worse than trying to do so from a large number of thought experiments, which may be narrowly tailored to different aspects of the situation.
Link to Aaronson’s Chinese Room extrapolation here – http://www.scottaaronson.com/democritus/lec4.html
(about 2/3rds of the way down, ctrl+f Chinese Room)
I hope I’m not going too far off the intent of your post for your liking, but your example raises the question of “when does a machine understand?” For the purposes of this discussion I am taking members of the species homo sapiens to be (biological) machines. At least most of my species are capable of understanding at least one thing, but I doubt my abacus is. Where is the difference?
My first attempts to answer this question here involve thought experiments. I am forcing myself to avoid doing this. If I were to test whether or not someone understands integration, I could do the following:
1) Give the student several integration problems and see if she can solve them and how she does so.
2) Ask the student what integration means.
3) In a context not clearly involving integration, see if the student can apply integration in a setting where doing so is useful.
I think this is a reasonable test to see if a specimen of homo sapiens understands integration. Would it make sense to ask the same of an Artificial Intelligence? A computer running Mathematica can pass the first test. The same computer can be programmed to spit out a memorized answer for the second question, but then again so can a student. This leaves the third test. I don’t know enough about programming to comment on the ability of today’s computers and computer programs to pass the third test.
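To make the first test concrete, here is a minimal sketch in Python using the sympy library (my stand-in for the Mathematica example above; the particular integrals are arbitrary choices): it solves integration problems without anything I would call understanding.

    # A symbolic solver that passes test 1 (solving integration problems)
    # while making no plausible claim to understanding.
    from sympy import symbols, integrate, sin, exp

    x = symbols("x")
    print(integrate(x * exp(x), x))   # an antiderivative of x*e^x
    print(integrate(sin(x) ** 2, x))  # an antiderivative of sin^2(x)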
I do think I need to strengthen the second test. Maybe:
2) Ask the student an essay or short-answer question that requires her to use a conceptual understanding of integration in answering it. The question needs to be one the student could not reasonably have known to memorize an answer for.
I think this would get around mindless regurgitation, or the student simply obtaining in advance a usable list of possible answers to such questions in a test setting.
If this is a good test for human understanding of integration, what generalized definition of understanding does this suggest? I shall give it a try: “A thing understands a concept if and only if the thing can make practical use of the concept, use the concept to independently derive or explain another concept in a novel way, and can spontaneously make use of the concept in situations where doing so is useful and where the thing is not prompted specifically to use the concept.” This is my current attempt at a definition for understanding understanding. To be clear, independently means using only things inherent to itself and where the method used is not known beforehand, novel means unknown to the entity prior to the attempt to solve the problem, and spontaneously means on its own accord.
If an entity can derive an intelligent answer to any possible query in a given language, there will be a concept for which it can do these things. I’m not presenting a proof here but I am confident it is true. The entity might not understand the language but it understands something.
In the link given by embrodski, Dr. Aaronson talks about the program Otter, which in 1996 was able to solve an open problem in algebra. It demonstrated the first condition but not the others. I don’t know enough about the field of programming, but it would not surprise me either way to hear that there exists a program that fits my proposed definition of understanding. If such a program exists, I am prepared to conclude that it understands.
I am also willing to accept that my proposed definition is faulty. I am not an expert on anything including these concepts. I don’t know if my thinking here is any good at all.