
Copeland: The Curious Case of the Chinese Room

Searle's Chinese Room argument and his replies are logically flawed, and his biological objection trivial.

The Logical Flaw in the Chinese Room


The argument is this: if the person in the Chinese Room can't understand Chinese, then the system of which he is a part can't understand Chinese. But this is a fallacy of the form: if X can't do Z, then the system of which X is a part can't do Z. For example: if stomachs can't talk, then the system of which a stomach is a part can't talk; yet a person, of whom a stomach is a part, plainly can talk.
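As a minimal formalization (the notation below is ours, not Copeland's), the inference pattern at issue can be written as:

    % Part-to-whole inference: from "x is a part of S" and "x can't do Z",
    % conclude "S can't do Z".
    \[
      \mathrm{Part}(x, S) \land \lnot\mathrm{Can}(x, Z) \;\not\models\; \lnot\mathrm{Can}(S, Z)
    \]
    % Counterexample: x = a stomach, S = the person it belongs to, Z = talking.
    % Both premises are true and the conclusion false, so the schema is invalid.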

Round One
Searle: We agree the person doesn't understand Chinese. But the rest of the system consists only of a rule book, paper, and pencils. How can something like this possibly understand Chinese? Copeland: Part of the problem is caused by the inclusion of a human being; let the program follower instead be a super-fast flea that flips switches. Part of the problem also concerns the nature of the system, which as described is in fact impossible: a human clerk could never follow the rules fast enough to sustain a conversation.

Round Two
Searle: You're begging the question. The claim of the Chinese Room argument is that running a program does not suffice for understanding, so you cannot just assert that the system does understand. Copeland: I'm not begging the question but simply pointing out that the conclusion doesn't follow from the premise. It's a fallacy: just because the person doesn't understand Chinese, it doesn't follow logically that the system doesn't.

Round Three
Searle: Let the person internalize the rule book, so that the system is now part of him. If he couldn't understand Chinese before, and the only difference is that the system is now inside him, he can't understand Chinese now. Copeland: This too is fallacious. From the fact that a person cannot secrete acid, it doesn't follow that nothing inside him can secrete acid: his stomach can.
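In the same invented notation as before, Round Three turns on the mirror-image inference, from whole to part, which is invalid for the same reason:

    % Whole-to-part inference: from "x is a part of S" and "S can't do Z",
    % conclude "x can't do Z".
    \[
      \mathrm{Part}(x, S) \land \lnot\mathrm{Can}(S, Z) \;\not\models\; \lnot\mathrm{Can}(x, Z)
    \]
    % Counterexample: S = a person, x = his stomach, Z = secreting acid.
    % The person can't secrete acid, yet a part of him can.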

Round Four
Searle: The systems reply fails because, as the Chinese Room argument shows, mere symbol manipulation does not suffice for understanding. Copeland: Now who's begging the question? The systems reply fails only if mere symbol manipulation does not suffice for understanding, but you can't simply assume that the Chinese Room argument succeeds without begging the question.

Understanding
Part of the attractiveness of Searle's argument is that the Chinese Room system does not appear to be something capable of understanding. Let the system reside in a very human-like android, and operate at a speed that would enable the android to pass the Turing test flawlessly. Denying that the android thinks, and so understands language, now seems very difficult or impossible. Of course, there is still an empirical question whether symbol manipulation can produce such an android.

The Biological Objection


Not just anything can lactate, reproduce, digest, photosynthesize, and so on; such processes require for their successful completion the right kind of material constitution. Thought is no different: thinking things must be made from the right kind of stuff, which happens to be brain-stuff. Insofar as just about anything can be regarded as a symbol manipulator, this shows that symbol manipulation does not suffice for thinking.

Pylyshyn's Objection
Take a human brain and replace each neuron, one at a time, with a behaviorally equivalent silicon chip. If the human brain can think, so too should the silicon brain be capable of thought.

Searle's Response
Whether a silicon brain can think is an empirical issue. If it can think, this shows that silicon is the right kind of stuff to produce thinking. The crucial point is that if it can think, it does so not in virtue of running a computer program.

Copeland's Reconstruction
The biological objection amounts to the following:
1. A machine can think only if it is made of the right stuff organized in the right way.
2. Whether substances other than brains can think is an empirical issue.
3. It is not possible to endow a physical substance with the right causal powers merely by having it run a program.

Copeland's Reconstruction
1. A machine can think only if it is made of the right stuff organized in the right way. But this is trivial -- of course a machine can think only if it is made in a way that can support thought processes.

Copeland's Reconstruction
2. Whether substances other than brains can think is an empirical issue. Of course this is an empirical issue; who'd have thought otherwise?

Copeland's Reconstruction
3. It is not possible to endow a physical substance with the right causal powers merely by having it run a program. This is the conclusion of the Chinese Room argument, but as shown above that argument is not valid.

What of the Possibility of Implementing Programs on Various Machines?


But it is possible to build a computer from just about anything: toilet paper, pebbles, pop tarts. This is true -- in theory. In fact, there are probably only a few kinds of material that could run the programs that define a mind.

In Virtue of What Might Computers Think?


If a program can create thought only when implemented in the right materials, is it correct to say that symbol manipulation is by itself sufficient for cognition? Why is it the brain qua symbol processor that does the thinking rather than the brain qua neurophysiological mechanism?