Three And A Half Thought Experiments In Philosophy Of Mind

You can tell I've had philosophy of mind on my mind lately. I've written about the Computational Theory of Mind (albeit within the broader context of a post on the difference between scientific theories and philosophical accounts), about computation and the Church-Turing thesis, and of course about why David Chalmers is wrong about the Singularity and mind uploading (in press in a new volume edited by Russell Blackford and Damien Broderick).
Moreover, and without my prompting, my friend Steve Neumann has just written an essay for RS about what it is like to be a Nagel. Oh, and I recently reviewed John Searle's Mind: A Brief Introduction for Amazon.
But what prompted this new post is a conversation Julia Galef and I recently had for a forthcoming episode of the Rationally Speaking podcast, a chat with guest Gerard O'Brien, a philosopher (of mind) at the University of Adelaide in Australia. It turns out that Gerard and I agree on more than I thought (even though he is sympathetic to the Computational Theory of Mind, it is in such a partial and specific way that I can live with it; moreover, he really disappointed Julia when he said that mind uploading ain't gonna happen, at least not in the way crazy genius Ray Kurzweil and co. think it will).
During our exchange, I was able to crystallize in my mind something that had bothered me for a while: why is it, exactly, that so many people just don't seem to get the point of John Searle's famous Chinese Room thought experiment? Gerard agreed both with my observation (i.e., a lot of the criticism seems to be directed at something else, rather than at what Searle is actually saying), and with my diagnosis (more on this in a moment). That in turn made me think about several other famous thought experiments in philosophy of mind, and what exactly they do or don't tell us - sometimes even regardless of what the authors of those experiments actually meant!
So, below is a brief treatment of Searle's Chinese Room, Thomas Nagel's what is it like to be a bat?, David Chalmers' philosophical zombies, and Frank Jackson's Mary's Room. I realize these are likely all well known to my readers, but bear with me, I may have a thing or two of interest to say about the whole ensemble. (The reason I refer to 3.5, rather than 4, thought experiments in the title of the post is because I think Nagel's and Jackson's make precisely the same point, and are thus a bit redundant.) In each case I'll provide a brief summary of the argument, what the experiment shows, and what it doesn't show (often, contra popular opinion), with a brief comment about the difference between the latter two.
1. The Chinese Room
Synopsis: Imagine a room with you in the middle of it, and two slots on opposite sides. Through one slot someone from the outside slips in a piece of paper with a phrase in Chinese. You have no understanding of Chinese, but - helpfully - you do have a rule book at your disposal, which you can use to look up the symbols you have just received, and which tells you which symbols to write out in response. You dutifully oblige, sending the output slip through the second slot in the room.
What it does mean: Searle's point is that all that is going on in the room is (syntactic) symbol manipulation, but no understanding (semantics). From the outside it looks like the room (or something inside it) actually understands Chinese (i.e., the Room would pass Turing's test), but the correct correspondence between inputs and outputs has been imported by way of the rule book, which was clearly written by someone who does understand Chinese. The idea, of course, is that the room works analogously to a digital computer, whose behavior appears to be intelligent (when seen from the outside), with that intelligence not being the result of the computer understanding anything, but rather of its ability to speedily execute a number of operations that have been programmed by someone else. Even if the computer, say, passes Turing's test, we still need to thank the programmer, not the computer itself.
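To see just how thin the room's "competence" is, here is a minimal sketch of the room as a pure lookup table - my own toy illustration, not anything of Searle's, with an invented RULE_BOOK standing in for the actual rule book:

```python
# A toy "Chinese Room": the occupant matches incoming symbols
# against a rule book and copies out the prescribed reply.
# The rule book below is a made-up stand-in for illustration only.

RULE_BOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你会说中文吗": "会一点",   # "Do you speak Chinese?" -> "A little"
}

def chinese_room(incoming_slip: str) -> str:
    """Look up the input symbols and return the prescribed output.

    No step here involves understanding what the symbols mean;
    all the 'semantics' lives with whoever wrote RULE_BOOK.
    """
    return RULE_BOOK.get(incoming_slip, "请再说一遍")  # "Please say it again"

print(chinese_room("你好吗"))  # prints 我很好, yet nothing was understood
```

Everything the function "knows" was put there by whoever compiled RULE_BOOK; scale the table up, or replace it with a cleverer program, and you get more impressive behavior, but no more understanding.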
What it does not mean: The Chinese Room is not meant as a demonstration that thinking has nothing to do with computing, as Searle himself has clearly explained several times. It is, rather, meant to suggest that something is missing in the raw analogy between human minds and computers. It also doesn't mean that computers cannot behave intelligently. They clearly can and do (think of IBM's Deep Blue and Watson). Searle was concerned with consciousness, not intelligence, and the two are not at all the same thing: one can display intelligent behavior (as, say, plants do when they keep track of the sun's position with their leaves) and yet have no understanding of what's going on. However - obviously, I hope - understanding is not possible without intelligence.
Further comments: I think the confusion here concerns the use of a number of terms that are not at all interchangeable. In particular, people shift among computing speed, intelligence, understanding, and consciousness while discussing the Chinese Room. Intelligence very likely does have to do (in part) with computing speed, which is why animals' behavior is so much more sophisticated than most plants', and why predators, in turn, are usually more sophisticated than herbivores (it takes more cunning to catch moving prey than to chew on stationary plants). But consciousness, in the sense used here, is an awareness of what is going on - not just a phenomenological awareness (as, say, in the case of an animal feeling pain), but an awareness based on understanding.
The difference is perhaps more obvious when we think of the contrast between, say, calculating the square root of a number (which any pocket calculator can do) and understanding what a square root is and how it functions in mathematical theory (an understanding that no computer existing today, however sophisticated, actually possesses).
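To make the calculating half of that contrast concrete, here is a minimal sketch - my own illustration, with a hypothetical function name - of a square-root routine that grinds out correct digits by rote iteration:

```python
def newton_sqrt(x: float, tolerance: float = 1e-12) -> float:
    """Approximate the square root of x by Newton's method.

    The loop blindly improves a guess until it stops changing;
    at no point does the procedure 'know' what a square root is.
    """
    if x < 0:
        raise ValueError("square root of a negative number")
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tolerance * max(x, 1.0):
        guess = (guess + x / guess) / 2  # average the guess with x/guess
    return guess

print(newton_sqrt(2.0))  # ~1.4142135623730951, produced by rote arithmetic
```

The loop delivers the right answer, but nothing in it corresponds to grasping what a square root is or how it functions in mathematical theory - which is exactly the gap at issue.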
2. Mary's Room
Synopsis: Consider a very intelligent scientist - Mary - who has been held (somehow...) in an environment without color since her birth (forget the ethics, it's a thought experiment!). That is, throughout her existence, Mary has experienced the world in black and white. Now Mary is told, and completely understands, everything there is to know about the physical basis of color perception. One bright day, she is allowed to leave her room, thus seeing color for the first time. Nothing, argues Frank Jackson, can possibly prepare Mary for the actual phenomenological experience of color, regardless of how much scientific knowledge she had of it beforehand.
What it does mean: According to Jackson, this is a so-called "knowledge argument" against physicalism. The idea is that the scientific (i.e., physicalist) understanding of color perception is simply insufficient for Mary to really understand what experiencing color is like, until she steps outside her room, thus augmenting her theoretical (third person) knowledge with (first person) experience of color. Hence, physicalism is false (or at least incomplete).
What it does not mean: Contra Jackson, the experiment does not show that physicalism is wrong or incomplete. It simply shows that third person (scientific) descriptions and first person experiences are orthogonal to each other. To confuse them is to commit a category mistake, like asking for the color of triangles and feeling smug for having said something really deep.
Further comments: I have always felt uncomfortable about these sorts of thought experiments because, quite frankly, I misunderstood them entirely the first few times I heard of them. What the authors meant to show (the orthogonality of third and first person "knowledge") seemed obviously true to me, so I didn't see what all the fuss was about. It turns out, instead, that the authors themselves are confused about what their own thought experiments show.
3. What is it like to be a bat?
Synopsis: Thomas Nagel invited us to imagine what it is like (in the sense of having the first person experience) to be a bat. His point was that - again - we cannot answer this question simply on the basis of a scientific (third person) description of how bats' brains work, regardless of how sophisticated and complete this description may be. The only way to know what it is like to be a bat is to actually be a bat. Therefore, physicalism is false, yadda yadda.
What it does mean: Precisely the same thing that Jackson's Mary's Room does.
What it does not mean: Precisely the same thing that Jackson's Mary's Room doesn't.
Further comments: It's another category mistake. Actually, it's exactly the same category mistake.
4. Philosophical zombies
Synopsis: David Chalmers has asked us to consider the possibility of creatures ("zombies" - known as p-zombies, or philosophical zombies, to distinguish them from the regular horror movie variety) that from the outside behave exactly like us (including talking, reacting, etc.) and yet have no consciousness at all, i.e., no phenomenal experience of what they are doing. You poke a zombie and it responds as if it were in pain, but there ain't no actual experience of pain "inside" its mind, since there is, in fact, no mind. Chalmers argues that this sort of creature is at least conceivable, i.e., logically possible, if perhaps not physically so. Hence..., yeah, you got it, physicalism is false or incomplete.
What it does mean: Nothing. There is no positive point, in my opinion, that can be established by this thought experiment. Besides the fact that it is disputable whether p-zombies are indeed logically coherent (Dennett and others have argued in the negative), I maintain that it doesn't matter. Physicalism (the target of Chalmers' "attack") is not logically necessary; it is simply the best framework we have to explain the empirical evidence. And the empirical evidence (from neurobiology and developmental biology) tells us that p-zombies are physically impossible.
What it does not mean: It doesn't mean what Chalmers and others think it does, i.e., a refutation of physicalism, for the reason just explained. It continues to astonish me how many people take this thing seriously. This attitude is based on the same misguided idea that underlies Chalmers' experiment, of course: that we can advance the study of consciousness by looking at logically coherent scenarios. We can't, because logic is far too loose a constraint on the world as it actually is (and as it concerns us).
If it weren't, the classic rationalist program in philosophy - deriving knowledge of how things are by thinking really hard about them - would have succeeded. Instead, it went the way of the dodo at least as far back as Kant (with a good deal of help from Hume).
Further comments: Consciousness is a bio-physical phenomenon, and as Searle has repeatedly pointed out, the answer to the mystery will come (if it comes) from empirical science, not from thought experiments. (At the moment, however, even neuroscientists have close to no idea of how consciousness is made possible by the systemic activity of the brain. They only know that that's what's going on.)
So, what are we to make of all of the above? Well, what we don't want to make of it is either that thought experiments are useless or, more broadly, that philosophical analysis is useless. After all, what you just read is a philosophical analysis (did you notice? I didn't use any empirical data whatsoever!), and if it was helpful in clarifying your ideas, or even simply in providing you with further intellectual ammunition for continued debate, then it was useful. And thought experiments are, of course, not just the province of philosophy.
They have a long and illustrious history in science (from Galileo to Newton) as well as in other branches of philosophy (e.g., in ethics, to challenge people's intuitions about runaway trolleys and the like), so we don't want to throw them out as a group too hastily.
What we are left with are three types of thought experiments in philosophy of mind:
- Those that do establish what their authors think (the Chinese Room), even though the conclusion is more limited than its detractors suppose: the room doesn't understand Chinese, in the sense of being conscious of what it is doing, but it does behave intelligently, in proportion to its computational speed and the programmer's ability.
- Those that do not establish what their authors think (Mary and the bats), but are nonetheless useful: they make clear that third person description and first person experience are different kinds of "knowledge," and that it makes no sense to somehow subsume one into the other.
- Those that are, in fact, useless, or worse, pernicious (p-zombies), because they distract us from the real problem (what are the physical bases of consciousness?) by moving the discussion into a realm that simply doesn't add anything to it (what is or is not logically conceivable about consciousness?).
That's it, folks! I hope it was worth your (conscious) attention, and that the above will lead to some better (third person) understanding of the issues at hand.
Originally appeared on Rationally Speaking, Sept. 6th, 2013
Related articles
- Philosophy Not In The Business Of Producing Theories: The Case Of The Computational “Theory” Of Mind
- David Chalmers And The Singularity That Will Probably Not Come
- The Zombification Of Philosophy (Of Mind)
- Cool Thought Experiments IV: The Chinese Room
- 25 Years Later, Consciousness Wager Settled: Science Still Doesn't Know How Consciousness Arises