
JEAN R. KAZEZ

COMPUTATIONALISM AND THE CAUSAL ROLE OF CONTENT

(Received in revised form 13 May 1993)

Computationalism offers a solution to a familiar problem about mental
causation. When it occurs to Zelda that her plants need water, and she
waters her plants, her thought apparently causes her behavior. But the
problem is this: how can something as ephemeral as a thought make
Zelda's body move across the room? The modern-day computationalist
has a ready explanation. Thoughts can be causes of behavior because
they are not actually so ephemeral: Zelda's thoughts are not soul states,
but tokenings of neural symbols. Harboring a mental sentence can cause
Zelda to water her plants in just the way that the symbols stored in an
ordinary computer can cause later events in the computer.
Computationalism has more trouble with another problem about
mental causation. Generally, when an event c causes an event e, we
can ask what it is about c in virtue of which it causes e.1 Some of c's
properties are causally relevant. For example, when the bowling ball
causes the pins to fall, some properties of the ball play a role, and others
do not: the weight of the ball is causally relevant, but not the color or
the price of the ball. Intuition endorses the claim that the content of
Zelda's thought is causally relevant. Nothing could be more obvious:
her plant-watering thought causes plant-watering precisely because it is
a plant-watering thought. Generally, intuition agrees that

(REL) Content is causally relevant: thoughts cause behaviors (as
well as subsequent thoughts) in virtue of their content.
But can computationalists accept (REL)?
In my view, the answer is No. Computationalism has the counterin-
tuitive implication that our thoughts do not have their effects because of
their contents: that thought content is "epiphenomenal." Many philosophers
who have defended (REL) have focussed on devising general
criteria for causal relevance; they have not directed their attention to
the specific types of mechanisms that make up a computer.2 Once those
mechanisms are in focus, (REL) becomes difficult - in fact, impossible
- to sustain. If a computational account of thought processes is assumed
correct, a sound argument can be made against (REL). I shall present
that argument in section I, and defend the premises of the argument in
the remainder of the paper.

Philosophical Studies 75: 231-260, 1994. © 1994 Kluwer Academic Publishers. Printed in the Netherlands.

I. THE ARGUMENT AGAINST (REL)

The computational theory of mind is committed to a particular view of
mental processes:

(COMP) Mental processes are sequences of operations on symbols
performed by primitive processors.

(COMP) will be the first premise of my argument against (REL). I shall
try to elucidate and motivate (COMP), but will say little to defend it.3
Computationalism looks most attractive against the background of
an "analytic" conception of psychological explanation (Dennett 1974,
Haugeland 1980, Cummins 1983). On this conception, the primary
aim of psychology is not to explain mental and behavioral events via
nomological subsumption. To begin with, on the analytic view, the
explananda of primary importance to scientific psychology are psychological
capacities, not occurrences. Psychologists want to explain the
information processing capacities (IPC's) of cognitive systems. Exam-
ples of IPC's are producing solutions to math problems, judging the
distance of objects, given visual contact with the objects, and under-
standing another person's thoughts, given sentences they have uttered.
To explain the IPC's of a system, the psychologist analyzes the system
into a set of subsystems, each also having some IPC. Two requirements
must be satisfied for the explanation to be successful: the information
handled by the subsystems must be relevant to the original capacity
we were trying to explain; and the subsystems must be appropriately
coordinated with one another. For example, a cognitive system that
can understand the meaning of English sentences might be (partially)
analyzed into a word recognizer and a syntax processor. But that is not
the whole story: information must flow from the word recognizer to the
syntax processor (and maybe also vice versa).
The next step is to explain the IPC's of the subsystems. They too may
be decomposed into processors that handle relevant information. For
example, the word recognizer may be analyzed as a feature analyzer and
a lexicon searcher. But explanation by division of information process-
ing labor cannot go on forever. Our decomposition doesn't accomplish
anything if, at every stage, the capacities of parts are explained in the
same manner as the capacities of wholes. (In fact, decomposition may
leave us worse off: it leaves us wanting explanations of the myriad parts
of the system which have been exposed by our decomposition.) At some
point, the parts yielded by informational decomposition must have IPC's
that can be explained without any further informational decomposition.
These parts are the system's primitive processors.
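The analytic picture of decomposition can be rendered as a toy sketch. The following is purely illustrative and not the author's (the names `LEXICON`, `recognize_words`, and `parse` are hypothetical, chosen to mirror the word-recognizer / syntax-processor example): a sentence-understanding capacity is analyzed into two coordinated subsystems, each with its own information processing capacity, whose capacities would in turn be explained by further decomposition until primitive processors are reached.

```python
# Illustrative sketch of explanation by informational decomposition.
# All names here are hypothetical, mirroring the text's example of a
# word recognizer and a syntax processor.

LEXICON = {"the": "Det", "plants": "N", "need": "V", "water": "N"}

def recognize_words(sentence):
    """Word recognizer: maps character strings to known lexical items."""
    return [(w, LEXICON[w]) for w in sentence.lower().split() if w in LEXICON]

def parse(tagged_words):
    """Syntax processor: consumes the word recognizer's output.
    Information must flow from one subsystem to the other."""
    return {"subject": [w for w, t in tagged_words[:2]],
            "verb": [w for w, t in tagged_words if t == "V"],
            "object": [w for w, t in tagged_words[2:] if t == "N"]}

def understand(sentence):
    """The whole capacity, explained as coordinated subcapacities."""
    return parse(recognize_words(sentence))

print(understand("The plants need water"))
# → {'subject': ['the', 'plants'], 'verb': ['need'], 'object': ['water']}
```

The explanation succeeds only if both conditions from the text are met: each subsystem handles information relevant to the original capacity, and the subsystems are appropriately coordinated (here, `parse` operates on `recognize_words`'s output).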
If the computational theory of mind is true, the subsystems that make
up a cognitive system are symbol manipulators. They take symbols that
encode one sort of information as inputs, and yield symbols encoding a
second sort of information as outputs. The primitive processors, then,
are symbol manipulators that are not themselves composed of symbol
manipulators. They accomplish their symbol manipulation tasks with-
out carrying out any other symbol manipulations. This negative con-
straint is mandated by the role primitive processors play in explanations
of a system's capacities. Only a weak positive constraint is mandated by
the explanatory role of primitive processors: primitive processors must
manipulate symbols in some readily explicable manner.4 If a system's
cognitive capacities were decomposed into a set of utterly mysterious
processors, we would not have explained those capacities.
When we study a cognitive system, we are often interested in a
decomposition of its capacities that remains at a high level of abstraction,
far above the level of primitive processors. Likewise, if we describe
the series of computational events mediating between an input and
an output, we may wish to describe events coarsely: first a goal was
selected, then options were considered, finally a plan was made. But
the coarsely described events are aggregates of simpler computational
events.
We arrive, via considerations about psychological explanation, at a
certain view of mental processes. If the computational view of cognition
is correct, mental processes always comprise, at the lowest level of
computational description, some number of primitive operations on
symbols. This is the claim that (COMP) encapsulates.
The next premise of the argument against (REL) concerns the nature
of primitive processors. Standardly, primitive processors are formal
mechanisms: they are mechanisms sensitive to the formal properties
of input symbols. By "formal properties" I mean intrinsic physical
properties. Formal properties of the words on this page include shape,
temperature, and chemical composition. In a standard digital computer,
formal properties of symbols are electrical properties. The formal char-
acter of primitive processors bears on the causal relevance of content.
Because primitive processors are formal mechanisms,

(PRIM) Content is causally irrelevant in processes mediated by
primitive processors; i.e., if symbol tokening x causes
symbol tokening y because of the intervention of a prim-
itive processor, then the content of x is irrelevant to x
causing y.

(PRIM) may have seemed plausible to philosophers of mind before
the causal role of content became a concern, but will be unpalatable
to many now. Those who accept (REL) may insist that processors can
be sensitive to the form of symbols and at the same time sensitive to
their content. They will see form and content as causally efficacious
properties "at different levels." But, as I will argue, there are insuperable
problems with that view.
So far our premises say that transitions from one mental state to
another are mediated by primitive processors, and that those transitions
do not occur because of the contents of symbols. These premises, if
true, establish that thoughts have their immediate causal consequences
regardless of their content. But could the content of a thought nevertheless
be relevant to its remote consequences, for example, to behavior?
To rule out this possibility, we need an additional premise:

(WHOL) If content is causally irrelevant to the primitive processors
in a computational system, content is causally irrelevant
to the whole system, including its behavioral outputs.

(WHOL) claims that the content of a mental state cannot affect the
operation of a mind unless it affects it via primitive processors. If
Zelda's thought process consists of a series of symbol manipulations
performed by a number of processors, the content of THE PLANTS
NEED WATER can be relevant to her watering the plants only if it's
relevant to one of the processors.
(COMP), (PRIM), and (WHOL) will be the premises of my argument
against (REL). The gist of the argument is that one cannot simultane-
ously acknowledge the formal character of symbol manipulation in a
system, and affirm the causal efficacy of content in that system. (PRIM)
is the linchpin of the argument, and will be the topic of the next section.
Jerry Fodor (1990) and Ned Block (1990) have each tried to reconcile
the causal relevance of content with the formal nature of primitive
processors. Fodor's approach involves rejecting (PRIM) (though he
accepts something very close to it). I will take up Fodor's view in
section III. Block's approach involves rejecting (WHOL). In section IV
we shall see that (WHOL) remains overwhelmingly plausible, despite
the argument Block makes against it.

II. AN ARGUMENT FOR (PRIM)

Zelda believes, at noon, that the plants need water and sunlight; that is
one of the causes of her believing, a moment later, that the plants need
water; which leads to other thoughts, and finally to the watering of the
plants.
How does it happen? The computationalist says that each of Zelda's
thoughts is the tokening of a mental sentence, and that the causal links
between her thoughts are forged by primitive processors. Let us suppose
that among Zelda's primitive processors is a processor that performs
simplification: if the input is a symbol that means that p&q, the output
is a symbol that means that p, and a symbol that means that q. The
symbol THE PLANTS NEED WATER AND THE PLANTS NEED
SUNLIGHT is input to the simplification processor, and it outputs THE
PLANTS NEED WATER and THE PLANTS NEED SUNLIGHT. Let
us refer to the tokening of the first symbol as event r, and the tokening of
the second symbol as event r*. First r causes r*, and then r* leads, by a
moderately long sequence of additional symbol tokenings and primitive
operations, to Zelda's behavior, b.
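The simplification processor just described can be sketched in code. This is my own hedged rendering, not the author's (the string representation of mentalese is an assumption made only for illustration): a formal mechanism that splits a conjunctive symbol purely on the shape of its tokens, here by matching the literal substring "AND". Nothing in the mechanism consults what the symbols mean.

```python
# A sketch of a FORMAL simplification processor: it responds only to the
# intrinsic "shape" of the input symbol (the literal token "AND"), never
# to the symbol's content. The string encoding is hypothetical.

def simplify(symbol: str):
    """If the input has the form 'p AND q', token both conjuncts."""
    left, sep, right = symbol.partition(" AND ")
    if sep:  # the formal pattern was present
        return [left, right]
    return [symbol]  # not a conjunction: pass through unchanged

r = "THE PLANTS NEED WATER AND THE PLANTS NEED SUNLIGHT"
print(simplify(r))
# → ['THE PLANTS NEED WATER', 'THE PLANTS NEED SUNLIGHT']
```

The point of the sketch is that the causal transition from r to r* is fully explained by the symbol's form: a system with a differently shaped but same-meaning symbol would not trigger the processor, and a same-shaped but differently interpreted symbol would.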
I shall be discussing Zelda's simplification processor at length. I will
assume it is representative of primitive processors in computers. I will
assume, as well, that her simplification processor is typical of the brain's
primitive processors, whatever, exactly, these turn out to be like.5
Our task is to determine what properties are relevant to r causing
r*. As a good physicalist, I will be assuming that r causes r* in virtue
of having some physical property P. Now, r also has the property of
having a particular content, C.6 Below, I will be discussing the nature
of these two properties, P and C. Our question is whether both P and C
are relevant to r causing r*. It is at least initially problematic to add r's
having C to the account of r causing r*. If r's having P explains why r
causes r*, how could r's having C be explanatory as well?
Jaegwon Kim has proposed what he calls the "explanatory exclusion
principle": ultimately "there can be at most one complete and inde-
pendent explanation of a single explanandum" (Kim 1988, 1990). His
point, essentially, is this: multiple explanations of one explanandum are
in competition with each other. If you are offered two explanations of
one explanandum, you ought to satisfy yourself that one of the following
descriptions applies to the case:
(1) One explanation is derivative from the other. An example: you
press a circular, aluminum cookie cutter into dough. The result is
a round-shaped impression in the dough. The cookie cutter has that
effect because it is formed out of such and such molecules of aluminum,
arranged like so, and also because it is round. Being round is a byproduct
of being constituted by molecules, arranged in a certain way. So the
roundness-explanation is derivative.

(2) Each explanation is incomplete; for example: the weight and the
shape of the foot each partially explain why it made the impression in
the fresh cement.
Although Kim is unsure on this point, I believe the explanatory
exclusion principle is not exceptionless. A third possibility must be
considered:
(3) You've got a case of overdetermination.7 There is overdetermina-
tion when property F of c and property G of c are individually sufficient
(under the circumstances) to explain why c causes e. An example:
Mary's skating is both very fast and very graceful. All she needs is
one perfect score, and she will earn the top prize of a blue ribbon. At
the same moment, a judge attentive only to speed hands the master of
ceremonies a perfect score, and a judge who cares only about grace-
fulness hands the master of ceremonies a perfect score. The master of
ceremonies sees both scores at the same moment, and awards the top
prize. The fact that Mary's skating causes her to win the blue ribbon is
overdetermined by its being fast and its being graceful.
If a case of multiple explanations meets none of the three descriptions,
then at least one of the explanations is spurious and should be
discarded. This will be the assumption that guides us in the remaining
discussion.8
In the case of Zelda, we seem to have two explanations why r causes
r*: we have r's having P and we have r's having C. I will be arguing
that the C-explanation is not derivative from the P-explanation; and the
C-explanation cannot be brought in to make the P-explanation more
complete; thus, if both P and C do explain r's causing r*, we have a case
of overdetermination. That, I assume, is an unacceptable description of
the situation. Thus, there isn't any acceptable way to understand how
the two explanations fit together. I have already stated that we must
accept the physical explanation. So we have no choice but to abandon
the explanation in terms of content.
This argument depends crucially on what assumptions are made about the
nature of P, the physical property that explains why r causes r*. I shall
begin by making some rather natural assumptions; I will discuss them
more after we have seen where they lead.

So far, I have described Zelda's simplification processor in semantic
terms: it is a mechanism that takes a symbol that means that p&q
as input, and yields a symbol that means that p, and one that means
that q, as output. It has also been stipulated that her simplification
processor is primitive: it does not perform simplification by performing
other symbolic operations, but its operation has some straightforward
explanation. As I said above, primitive processors are typically formal
mechanisms. We shall assume, for now, that Zelda's simplification
processor is such a mechanism.
On our present assumptions, we can explain why r causes r* by point-
ing to the form of the representation, THE PLANTS NEED WATER
AND THE PLANTS NEED SUNLIGHT. If the representation in
Zelda's head were just like the one on this page, then P would be
the property of having (literally) the shape of THE PLANTS etc. The
word "syntax" is sometimes used to refer to form in this sense;9 so,
r causes r* because of its syntactic properties. The question, then, is
whether we can bring in C as an additional explanation why r causes r*
without winding up with overdetermination.

First Strategy: C is Identical to P


Let's consider whether C is dependent on P in such a way that the C-
explanation would simply be derivative from the P-explanation. Identity
of the two properties would no doubt be dependency enough. But there
is no account of content that will allow us to say that C is identical to P.
Whether content is construed widely or narrowly, and whatever account
of wide or narrow content is adopted, formal properties of representations
are not the same as semantic properties of representations.10

Second Strategy: C Supervenes on P


Supervenience would also do the job. If C supervenes on P, then C
and P do not provide independent explanations for r causing r*. But
C doesn't supervene on P. Again, whether content is construed widely
or narrowly, and whatever one's favorite account of wide or narrow
content, the formal properties of representations are not supervenience
bases for contents.11
To see the point more clearly, let's focus more closely on one view
of content. On a functional role account of narrow content (Block
1986), the content of a representation is a matter of its belonging to a
type that plays a certain causal role. If the symbol AND is handled
by the simplification processor, and representations are combined with
AND by a conjunction processor, and so forth, then AND really means
conjunction. If PLANTS is appropriately related to representations
like LIVING and GROWING, it means plants. The formal properties
of a representation clearly do not determine the overall causal role
of the representation. A representation with the same form as THE
PLANTS NEED WATER AND THE PLANTS NEED SUNLIGHT can
play innumerable causal roles within different systems. So content,
construed narrowly, and in terms of functional role, does not supervene
on form. Clearly, the point generalizes to other accounts of content,
whether narrow or wide.

Third Strategy: C is Realized by P


How are P and C related? We may want to think of C as a functional
property, and P as the realization of that functional property. For exam-
ple, C, construed as the narrow content of r, and understood according to
functional role semantics, is the property of having some physical property
that plays a certain internal role, F. P, then, realizes C insofar as P is
the physical property that plays role F. The realization relation is weaker
than identity or supervenience, but perhaps strong enough so that the
C-explanation for r's causing r* is derivative from the P-explanation.
This proposal quickly runs into trouble. Let's look at an analogous
case. Imagine a bowling ball that weighs 5 pounds (call this property W).
The ball also has the property of being amusing (A); we'll construe this
as a certain functional property: the property of having some property
that causes laughter. W is the property of the ball that realizes A; it's
the fact that the ball weighs only 5 pounds that makes people laugh.
Bear in mind that we have nothing like identity or supervenience here.
A is not the same property as W; for the property of being amusing is
multiply realizable. Nor does A supervene on W; there are lots of 5
pound objects that are not in the least amusing.
Now suppose our amusing 5 pound bowling ball knocks over a pin.
The weight of the ball is relevant to its having that effect, and, let's
suppose, its being amusing is also relevant to its having that effect.
Is the relevance of A owing simply to the relevance of W? Is the A-
explanation derivative from the W-explanation?
The answer, it seems, is No. If the functional property plays a causal
role, it does so on its own; it doesn't ride piggyback on the ball's weight.
Either A and W are incomplete parts of one complete explanation, or
we have a case of overdetermination. The following consideration is
persuasive. Suppose I am told to build two bowling alleys, differing
only to the extent that in one, only W is relevant to any ball knocking
over any pins, and in the other, both W and A are relevant. If A's
causal role just comes along automatically with W's, this ought to be an
impossible task. But surely it's not. The bowling alley in which only
W is relevant is the ordinary one, the sort we have all been to. The
one in which W and A are both relevant will have to contain an extra
mechanism that is sensitive to amusingness and brings pins under the
influence of that property.
The moral, then, is this. A situation in which both property P and
property C of Zelda's belief are causally efficacious is one in which
there is some mechanism that detects content and brings the operation
of primitive processors under the influence of content. This is a situation
in which P and C are parts of a complete explanation for r causing r*,
or P and C overdetermine r causing r*. If P is a property that realizes
C, we can't appeal to both P and C as explanations on grounds that the
C-explanation is merely derived from the P-explanation.

Fourth Strategy: P is not a Formal Property


We still have not found a way to bring C into the causal picture. Perhaps
we are stumped because of our initial assumption that Zelda's simpli-
fication processor is a formal mechanism. It is time to explore that
assumption.

The formal properties of a symbol are a special class of the symbol's
physical properties - they are intrinsic physical properties. We
have assumed up to this point that Zelda's simplification processor is
sensitive to formal properties alone. But it is possible to imagine a
simplification processor that is sensitive to other physical properties of
a symbol besides form. A processor could be built to respond to a
symbol's relational properties. Let's now suppose that Zelda's simpli-
fication processor does detect relational properties of symbols. In fact,
let's suppose the processor detects relational properties rich enough to
be supervenience bases for narrow contents. (I'll call these "narrow
content grounding properties.") All of this means we must change our
assumptions about the physical property, P, that accounts for r causing
r*. Instead of construing P as a formal property of r, we shall now
construe P as a relational property of r, one rich enough to be the supervenience
base for the narrow content of r. (We will ignore the possibility
of a processor that detects wide content grounding properties.)
Before evaluating this proposal, we must get a clear picture of the job
we are now assigning to Zelda's simplification processor. Her processor
responds selectively to P, and other narrow content grounding physical
properties. Consider just how rich P is, first considering the matter in
terms of a functional role view of content. The representation THE
PLANTS NEED WATER AND THE PLANTS NEED SUNLIGHT has
its narrow content because of the processors, stored representations,
sense organs, and motor systems that endow it with a particular role.
The physical property of the representation that encompasses all of the
relevant structures, as well as the relevant interconnections among them,
will be extraordinarily complex. It will encompass a great number of
the physical facts about Zelda's nervous system.
Narrow content grounding properties are no "thinner" on other views
of narrow content. On Fodor's account (1987, chapter 2), the narrow
content of a primitive predicate of mentalese is a function from con-
texts to extensions. The narrow content of WATER, in Zelda's head,
depends on what WATER would refer to, in different possible environ-
ments. This, on Fodor's causal account (1987, chapter 4), depends on
what WATER would causally covary with, in different possible environments.
And that depends on the way she perceives the world, the
theories she holds, and the way she reasons: on her sense organs, her
stored representations, and her processors. Broadly speaking, the same
sorts of facts about Zelda's mind determine the narrow content of her
thoughts on a functional role account and on Fodor's "function" view.
The physical supervenience base for the narrow content of Zelda's plant-
watering thought, r, is an extremely fat property, whichever account of
narrow content you prefer.12
Narrow content grounding properties are fragile: they change as a
system acquires new knowledge or learns new ways of reasoning. Zel-
da's simplification processor handles the representation THE PLANTS
NEED WATER AND THE PLANTS NEED SUNLIGHT today, and
detects - we are supposing - a particular narrow content grounding
property, P. Zelda becomes fascinated with botany for the next two
weeks, so that the next time she considers watering her plants, the
functional role, and thus the narrow content, of that representation has
changed. (If you like Fodor's function view, assume these changes in
functional role are of a sort to change the representation's dispositions to
covary with environmental conditions.) The physical property underlying
the new content is P′. If Zelda's simplification processor is sensitive
to narrow content grounding properties, it will have to be sensitive to
P's absence, and now detect P′.
Clearly we have revised our characterization of Zelda's simplification
processor so that it is no ordinary simplification processor. It is not the
sort of processor an engineer might come up with. But is there anything
wrong, in principle, with the revision?
The first problem with the revision is that the simplification processor
we are imagining is a Rube Goldberg machine. It is doing something
easy the hard way around. Processors in computational systems are
symbol manipulators. The simplest kind of mechanism that will sim-
plify is a formal mechanism. A mechanism that simplifies by detecting
relational properties of symbols is expending needless energy.
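The contrast can be made vivid with a sketch (again mine, with invented helper names; the shape of `system_state` is an assumption): where the formal processor inspects only the input string, a processor sensitive to relational, content-grounding properties must first survey facts about the rest of the system before performing the very same split.

```python
# Hedged contrast between a formal simplifier and the "Rube Goldberg"
# simplifier imagined in the text. All names here are hypothetical.

def simplify_formal(symbol):
    """Responds to form alone: split at the token 'AND'."""
    left, sep, right = symbol.partition(" AND ")
    return [left, right] if sep else [symbol]

def simplify_relational(symbol, system_state):
    """Does the same easy job the hard way: it first checks global,
    relational facts about the whole system (part of the supervenience
    base of the symbol's narrow content) before splitting."""
    plays_conjunction_role = (
        symbol in system_state["memory"]
        and "AND" in system_state["connective_roles"]
    )
    if plays_conjunction_role:
        return simplify_formal(symbol)
    return [symbol]

state = {"memory": {"THE PLANTS NEED WATER AND THE PLANTS NEED SUNLIGHT"},
         "connective_roles": {"AND": "conjunction"}}
r = "THE PLANTS NEED WATER AND THE PLANTS NEED SUNLIGHT"
assert simplify_formal(r) == simplify_relational(r, state)
```

The two mechanisms have identical input-output behavior; the relational version simply expends extra effort tracking circumstances elsewhere in the system, which is the sense in which it does something easy the hard way around.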
But there is a more important worry here. We may now have
described the processor in a way that is incompatible with its being
primitive. Could a processor both enjoy sensitivity to properties like P
and P′, and qualify as a primitive processor? It seems unlikely.

Let's begin with what the problem is not. The problem is not simply
that P and P′ are relational properties. One can imagine a processor
detecting relational properties without itself being composed out of
symbol manipulators. That is, one can imagine primitive processors
that are not formal mechanisms. A primitive processor might, for
example, be sensitive to whether a symbol, S, is at a particular address
or to what symbols are adjacent to S, or to what symbols were tokened
before S.13
The problem, rather, is that P and P′ are such complex and global
relational properties. Responding to properties like P or P′ is the sort
of task that a computationalist would want to explain by informational
division of labor. It is a task on a par with detecting whether a bill
is worth a dollar, or whether a book is a best seller, or whether a
piece on a chessboard is a queen. One is immediately tempted to
decompose our imagined simplification processor into subsystems with
various information processing capacities. Insofar as the very complex
property, P, has to do with circumstances in various parts of Zelda's
nervous system, one imagines various subsystems keeping track of those
circumstances. One processor keeps track of other representations in
memory, another keeps track of relevant features of the visual system,
and the like.
Our discussion has the following upshot: the view that primitive
processors must be formal mechanisms is overly conservative. There
are some relational properties that can be detected by a primitive mech-
anism. But the view that primitive processors can detect extremely
complex and global properties is too liberal. It is difficult to see how
Zelda's simplification processor could be a primitive processor, and
nevertheless detect the sorts of physical properties on which narrow
content supervenes. Defenders of (REL) will have to find another
strategy for bringing content into the causal picture.
I shall return to the assumption that primitive processors are formal
mechanisms. While this is an oversimplification, it is not a gross over-
simplification. Some primitive processors may be sensitive to relatively
simple and local relational properties of symbols; but those kinds of
properties do not "support the causal aspirations" of content properties
any more than formal properties do.14

Fifth Strategy: C is a Background Condition


Let's review where we are. We can say that both P and C explain why r
causes r* if the C-explanation is dependent on the P-explanation. But,
I have argued, the dependency relations that are required are not to be
had.
I now want to consider an option that combines elements of the
dependency approach to multiple explanations and elements of the sec-
ond approach, which reconciles multiple explanations by seeing them
as individually incomplete.15 Here's how the story goes: assume, as
we did earlier, that P is a formal property of Zelda's thought. Citing
P is citing everything about r relevant to r causing r*. On the other
hand, the fact that r has P does not completely explain why r causes
r*. For r causes r* not only because r has certain characteristics, but
also because certain background conditions obtain. Let's refer to the
relevant background conditions collectively as Condition B. Here, then,
is how content enters the picture: P is not a complete explanation why
r causes r*; Condition B is relevant to r causing r* as well; and content
can be another part of the explanation, because r's content supervenes
on Condition B.
Will this work? To fill out this story, a case must be made that there
really is a set of facts about Zelda that is restricted enough to be truly
relevant to r causing r*, but unrestricted enough to be the supervenience
base for the content of her thought. But there is no such set of facts.
Our conditions for membership in the set pull in opposite directions. If
we are trying to characterize the background conditions for r causing
r*, we will cite the intactness of Zelda's central nervous system, the
absence of physical or chemical interferences in the normal working
of the simplification processor. If we are trying to characterize the
supervenience base for the content of her thought, we will talk about
the other processors, representations, sense organs, and motor systems
in her head.
If there is a temptation to think that the latter conditions are relevant
to r causing r*, I would diagnose that temptation in this way: those con-
ditions are quite relevant to explaining why her simplification processor
is doing what it's doing. According to the account of psychological
explanation discussed in section I, sophisticated information processing
capacities are explained in terms of the elementary capacities of prim-
itive processors. But explanation can go the other way as well: we
explain what a primitive processor is doing by seeing how it fits into a
larger system. (Likewise: we explain the purpose of a bolt, or cable,
or lever in a bicycle by seeing what purposes it serves in the system as
a whole.) But it is one thing for myriad and relatively remote compo-
nents of the larger system to be relevant to explaining the purpose of r
causing r*. It's another for those components to be genuine background
conditions for r causing r*. The fifth strategy is no more credible than
the others.

Sixth Strategy: Overdetermination


The only alternative remaining is that P and C overdetermine r causing
r*. What remains to be shown is that this would be at least an initially
coherent description of the situation if C, as well as P, were accorded a
causal role; and that it would be better to consider C causally irrelevant
than to countenance overdetermination here.
According to the rough notion of overdetermination introduced
earlier, we have overdetermination if P and C are individually suffi-
cient (under the circumstances) to explain why r causes r*. The idea
is that there are two "paths" from r to r*, one due to P, the other due
to C. It might be thought that this picture is excluded from the outset,
because there are, after all, certain dependency relations between P and
C. Whether one thinks of C as the wide or the narrow content of r, and
whatever account of content one favors, P will be a part of the super-
venience base for C. Since r has P, and r occupies a cognitive system
of a certain sort, and (perhaps) r stands in certain relations to Zelda's
external environment, and who knows what else, r necessarily has C. It
can also be said that C depends on P, in the sense that if r hadn't had P, it
wouldn't have had C (though of course there are other representations
without P, in other heads, that may have C).
But neither of these dependency relations will suffice to remove the
specter of overdetermination. Let's return to the bowling ball example
discussed above. We supposed that a bowling ball weighed five pounds
(W) and was amusing (A). The same dependency relations hold between
W and A that hold between P and C: first, A supervenes partly on W; and
second, if the ball hadn't had W, it wouldn't have had A. Nevertheless,
overdetermination is very much a possibility. In order for both the
weight of the ball and its being amusing to explain why the ball causes
the pins to fall, there will have to be the right mechanisms in place.
There will have to be two paths from the ball's impact to the pins'
falling, one running via weight, the other via amusingness. Most likely,
the weight path will be sufficient, and to the extent that the other path has
an explanatory role as well, the situation will involve overdetermination.
If P and C both explain why r causes r*, the causal relationship
between r and r* is best viewed as being overdetermined. But, to anticipate
my conclusion, that is not an acceptable picture. Though I expect this
to be an uncontroversial assessment, it is worth considering just what
makes the overdetermination option so unappealing. 16
No doubt overdetermination occurs sometimes. In the case of Mary's
skating, described above, there is an amazing coincidence: the two
judges turn in Mary's top score at the same time. Both the speed and
the gracefulness of her skating are individually sufficient to win her the
blue ribbon. In this quite unusual situation, each property is essentially
superfluous: neither one of them, alone, "makes a difference" to whether
or not the skating causes the winning of the prize. (Of course, this is not
to say that the property of being fast or graceful is superfluous). The
same goes for Zelda's thought process, if the causal connection between
r and r* is overdetermined by properties P and C: each property, taken
on its own, is superfluous.
So far, the two cases are similar. But further reflection reveals that
what we are envisaging in the second case is far more peculiar than
what we find in the first. There are situations in which being graceful
is causally relevant without overdetermination, and the same goes for
being fast. Thus, these two properties are not superfluous every time
they are instantiated. But every occasion on which C is accorded causal
relevance is a case just like the one we have been examining. For, to
begin with, it is implausible that there are strong candidates for having
causally efficacious semantic properties, other than thoughts. And if
the computationalist is right, all thought-to-thought transitions are like
the transition from r to r* - they are mediated by primitive processors.
So reasons for seeing the case at hand as involving overdetermination
are reasons for seeing every case in which C is causally relevant as
a case of overdetermination. Thus, C has quite a peculiar causal sta-
tus: if it's a causal property, it's a causal property that never makes a
difference. Every time C is instantiated, it is causally efficacious, yet
superfluous. This is a picture of the status of content few will want
to accept. We ought to reject the content explanation for r causing
r*, rather than embrace the view that our thoughts have their causal
powers overdetermined by their formal and semantic properties. Since
according causal status to C leads inescapably to the overdetermination
picture, we cannot accord causal status to C.

III. FODOR'S SOLUTION

The main business that remains is to defend the third premise of my
argument, (WHOL). But before doing so I will consider a response to
the argument so far. The response is inspired by Jerry Fodor's views,
but it is not entirely clear whether he would endorse it, exactly as I state
it.
In Psychosemantics (1987), Fodor distinguishes between the regu-
larities described by psychological laws and the mental processes or
mechanisms that implement those regularities. Psychological regulari-
ties are to be described in terms of content. On the other hand, mental
processes/mechanisms are computational, and thus sensitive only to for-
mal properties of symbols. Fodor points out that it is common for one
sort of property to track lawful regularities, while another property is
involved in the processes or mechanisms that implement the regulari-
ties. For example, it's (something of) a law that tall parents produce tall
children; but the mechanism in virtue of which the law holds involves
genes, not heights (p. 140).
In Psychosemantics, Fodor seems to say that the properties which
enjoy causal efficacy are the ones involved in underlying mechanisms.
If content plays no role in psychological mechanisms, Fodor seems to
say, then mental content is causally irrelevant. He writes:
... I'd better 'fess up to a metaphysical prejudice... I don't believe that there are inten-
tional mechanisms. That is, I don't believe that contents per se determine causal roles.
In consequence, it's got to be possible to tell the whole story about mental causation (the
whole story about the implementation of the generalizations that belief/desire psycholo-
gies articulate) without referring to the intentional properties of the mental states that
such generalizations subsume... That is not, by the way, any sort of epiphenomenalism;
or if it is, it's patently a harmless sort. (1987, pp. 139-40)

In a later article, "Making Mind Matter More" (1990), Fodor
becomes an advocate of (REL). Now he proposes a nomological view
of causal relevance: properties projected by causal laws are ipso
facto causally relevant properties. This is so notwithstanding the fact
that the processes such laws subsume may involve mechanisms that
are sensitive to quite a different set of properties. "Specifying the
causally responsible macroproperty isn't the same as specifying the
implementing micromechanism" (1990, p. 146). Since it's a law that
meandering rivers erode their outside banks, being a meandering river
is causally relevant; nevertheless, the mechanism underlying the law
involves microstructural facts about particles suspended in water, abra-
sion, the Bernoulli effect, and so on. And since there are intentional
laws, contents are causally relevant; nevertheless, the mechanisms that
implement these laws are not content-sensitive.
The response to my argument Fodor's discussion suggests is this: the
content of r is relevant to r causing r*, because that content is a property
of r in virtue of which it is subsumed by a causal law: namely, if a
person believes that the plants need water and the plants need sunlight,
then she will believe that the plants need water, and she will believe that
the plants need sunlight. It is beside the point that the simplification
processor is a simple mechanism that merely detects formal properties
of symbols. The Fodor argument suggests that (PRIM) is attractive only
to the extent that it is easily confused with another claim:

(MECH) Primitive processors are exclusively sensitive to the form
of symbols.
While (MECH) is true (the argument goes), (PRIM) is false.
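The point of (MECH) can be made vivid with a toy sketch (my own illustration; the token shapes and the function name are invented, not anything in Fodor or in the paper): a simplification processor that takes a conjunction token to its first conjunct responds only to the shape of the symbol, and nothing in its operation mentions what the symbols are about.

```python
# A toy "simplification processor" (illustrative only). It detects the
# formal property of being a conjunction -- the literal shape "X&Y" --
# and emits the first conjunct. Meanings never enter the computation.

def simplify(token: str) -> str:
    """Map a conjunction token 'X&Y' to its first conjunct 'X'."""
    left, sep, _right = token.partition("&")
    if sep != "&":
        raise ValueError("input token is not a conjunction")
    return left

# The processor is indifferent to interpretation: the token below gets
# the same treatment whether "P" means "the plants need water" or
# anything else.
r = "P&Q"             # Zelda's thought-token r (form: a conjunction)
r_star = simplify(r)  # the caused token r* (form: the first conjunct)
print(r_star)         # -> P
```

The sketch is only meant to dramatize (MECH): the transition from r to r* goes through exclusively in virtue of form.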
As appealing as it might be to rescue (REL), I don't find this response
convincing. To begin with, the issues of the previous section are
troublesome. There is a syntactic law that subsumes r causing r*, as well as
a content law. So both formal property P and content property C turn
out to be relevant to r causing r*, on Fodor's account. This multiplicity
of explanations is not something we can accept, if the arguments in the
last section were sound. Fodor's account yields overdetermination.
Second, Fodor's admission that content plays no role in psychologi-
cal mechanisms leaves it quite unclear what is accomplished by main-
taining that content is causally efficacious. When we cite a mechanism
linking F's to G's, says Fodor, we answer "the question how do F's cause
G's"; we give "a story about what goes on when - and in virtue of which
- F's cause G's" ("Making Mind Matter More," p. 144). So the formal
properties of a plant-watering thought explain how the thought causes
plant-watering; it is the formal properties of plant-watering thoughts in
virtue of which they cause plant-watering behavior. What role, then, is
left for content? If form, not content, explains how thoughts have their
causal consequences, then in what way is content relevant to causation?
Perhaps Fodor will insist that the nomological view captures some
notion of causal relevance. There is "mechanical causal relevance"
(which content lacks) and "nomological causal relevance" (which con-
tent enjoys). Fodor seems to think that vindicating the nomological
causal relevance of content ensures that content properties have the
same kind of causal status as other special science properties. But some
special science properties have full-blooded mechanical causal rele-
vance. Clogged arteries cause heart attacks; the mechanism involves
the clogging. Quarter-shaped objects cause pay phones to work; the
mechanism involves being quarter-shaped. If nomological causal rele-
vance is any kind of causal relevance at all, it is a sort of causal relevance
weaker than what is enjoyed by many special science properties.
Finally, there is the tallness case from Psychosemantics, which makes
trouble for the nomological account of causal relevance in "Making
Mind Matter More." The tallness of a parent isn't causally efficacious
when she produces a tall child. But tallness comes out as a causally
relevant property, if we attend to the law that tall parents have tall
children, and ignore the genetic mechanism that implements this law. 17
Fodor does not offer a convincing reason to reject (PRIM). I turn,
finally, to the third premise of my argument.
IV. BLOCK'S SOLUTION

Ned Block (1990) nicely describes the tension between the formal
character of primitive processors and the causal efficacy of content
- something he calls "The Paradox of the Causal Efficacy of Content":

The reasoning behind the paradox goes something like this: Any Turing machine can be
constructed from simple primitive processors such as AND gates, OR gates, and the like.
Gates are sensitive to the syntactic forms of representations, not their meanings. But if
the meaning of a representation cannot influence the behavior of a gate, how could it
influence the behavior of a computer - a system of gates? (p. 139)

Block's view is that this paradox is resolvable: the formal nature of
primitive processors is not, after all, an obstacle to affirming the causal
efficacy of content. Even though the meaning of a representation can't
influence a gate, that doesn't establish that it can't influence the behavior
of a computer. (It should be noted that Block, in the end, is an epiphe-
nomenalist; but his epiphenomenalism stems from general qualms about
the causal efficacy of functional properties. 18)
My case for the formal nature of primitive processors will not have
proven anything, if the meaning of a symbol can somehow influence the
behavior of a computer, without influencing any of its processors. But
I don't believe Block has shown that this is possible. The third premise
of my argument says as much:

(WHOL) If content is causally irrelevant in processes mediated by
primitive processors in a system, content is causally irrele-
vant to the whole system, including its behavioral outputs.

Let us consider Block's argument against (WHOL). Block claims
that there is an account of content (functional role semantics) and an
account of causal efficacy (the counterfactual view), on which (WHOL)
is false. Given these accounts of content and causal relevance, it can
be shown that a symbol's content need not be relevant to any primitive
processor to be relevant to the whole system's behavioral output.
The very simple counterfactual account of causal efficacy Block
makes use of says:
(SC) F, a property of e1, is relevant to e1 causing e2 iff: if e1 hadn't
had F, e1 wouldn't have caused e2.
Is content - understood in terms of functional role - relevant to Zelda's
simplification processor and to processors like it? Not if we accept
(SC). Zelda's first thought r would have caused r* even if it had had
a different content; that is, even if it had had a significantly different
functional role in the system as a whole.
But now consider the causal connection between r and behavior b,
the watering of the plants. Now it looks like the content of r is causally
efficacious. Altering the content of a symbol just is altering some of its
potential causes or effects, so with a different content, a symbol might
not eventuate in the same behavior. If the symbol hadn't had its content
- i.e. its causal profile - it wouldn't, or mightn't, have caused behavior
b.
If we adopt functional role semantics, and the simple counterfactual
account of causal relevance, we can deny (WHOL). But underlying
(WHOL) is a simple and plausible assumption about causation, an
assumption more compelling than (SC) or functional role semantics.
I'll call this the "Continuity Principle":

(CP) If e1 is the first event in a causal chain e1, e2, ..., en, a property
of e1 is relevant to its causing en just in case it is relevant to
its causing e2.
That is to say, a property of an event is relevant to its having an indirect
effect, only if it's relevant to the event's having some more direct effect.
I move my hand, causing my brake lever to move, which causes a cable
to be pulled, which causes my brakes to squeeze my tire, and my bicycle
wheel to slow down. The Continuity Principle says the temperature of
my hand can be relevant to the wheel slowing down only if it's relevant
to my brake lever moving. (WHOL) applies (CP) to the special case of
computers: if a symbol tokening causes a second symbol tokening in
virtue of a primitive processor, the content of the first symbol is relevant
to the causation of a later behavior (indirect effect) just in case it is
relevant to the causation of the second symbol tokening (a more direct
effect).
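The bicycle chain can be rendered as a toy causal model (my own illustration; the stage names are invented). Each stage's output depends only on the listed inputs; the hand's temperature never enters at the first link, so, as (CP) requires, it makes no difference to anything downstream.

```python
# A toy model of the brake-lever chain (illustrative only). Each link
# is a function of the previous link's output alone.

def lever_moves(hand_squeezes: bool) -> bool:
    return hand_squeezes   # hand temperature plays no role at this link

def cable_pulled(lever: bool) -> bool:
    return lever

def wheel_slows(cable: bool) -> bool:
    return cable

def outcome(hand_squeezes: bool, hand_temp_c: float) -> bool:
    # hand_temp_c is accepted but, as in the example, makes no
    # difference at the first link -- and hence none downstream.
    return wheel_slows(cable_pulled(lever_moves(hand_squeezes)))

# Varying the temperature leaves the indirect effect untouched.
print(outcome(True, 10.0) == outcome(True, 35.0))  # -> True
```

The sketch simply restates (CP) in computational dress: a property screened off at the first link cannot become relevant further along the chain.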
The Continuity Principle appears to be a reasonable constraint on
any account of causal relevance. If a simple counterfactual view of
causal relevance is incompatible with the Continuity Principle, that is a
conclusive reason to reject it.
My defense of (WHOL) could conclude right here: (WHOL) is
supported by the Continuity Principle, and that gives it a great deal of
plausibility. If (SC) leads to violations of the Continuity Principle, it
simply ought to be abandoned. Block's argument is thus undermined.
But a bit more needs to be said about (SC). To begin with, no one
attracted to a counterfactual view of causal relevance thinks such a view
can be stated as simply as (SC). 19 So revisions are inevitable. In aban-
doning (SC) on account of the Continuity Principle, we are not deserting
an otherwise sterling theory. Furthermore, we are not necessarily giving
up the general idea of a counterfactual account of causal relevance. (SC)
can be abandoned in favor of another sort of counterfactual view - one
that conforms to the Continuity Principle. Here is a suggestion:

(RC) If e1 is the first event in a causal chain e1, e2, ..., en, and F is
a property of e1,
(1) F is relevant to e1 causing e2 iff: if e1 hadn't had F, e1
wouldn't have caused e2; and
(2) F is relevant to e1 causing en iff: F is relevant to e1
causing e2.
There are many problems with (RC). 20 However, it is more plausible
than (SC), and not only because it comports with the Continuity Prin-
ciple. (RC) can handle cases of preemption, and (SC) cannot. 21 Let's
return to Mary's skating, which, as you recall, was very fast and very
graceful. Either quality alone would win her a prize, but in fact (this
time) it's the speed that is taken into account, because the speed judge
reaches the master of ceremonies before the grace judge. The speed of
her skating is causally relevant to her winning the prize, and yet it's not
true that if her skating hadn't been fast, it wouldn't have caused her to
win the prize. The grace of Mary's skating would have won her the
prize, if the speed hadn't. (SC) doesn't provide a necessary condition
for the causal relevance of a property.
On the revised view, (RC), there is no problem establishing the
relevance of the speed of the skating. The crucial counterfactual concerns
the connection between the speed of the skating and some effect earlier
than the winning of the prize, like the opinion of the speed judge. If the
skating hadn't been fast, the speed judge wouldn't have thought so well
of it. As long as that counterfactual holds, the speed of the skating is
relevant to the skating's causing, indirectly, the winning of the prize. 22
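The contrast between (SC) and (RC) can be simulated in miniature (my own illustration; the judge functions and flags are invented). Under (SC)'s naive counterfactual test, speed comes out irrelevant, because grace would have secured the prize anyway; (RC) instead tests the earlier link, from the skating to the speed judge's verdict, where the counterfactual does hold.

```python
# A toy model of the preemption case (illustrative only). The speed
# judge reaches the master of ceremonies first, preempting the grace
# judge.

def wins_prize(fast: bool, graceful: bool) -> bool:
    # Either judge's report suffices for the prize.
    return fast or graceful

def speed_judge_approves(fast: bool) -> bool:
    # The earlier, more direct effect of the skating's speed.
    return fast

# (SC)'s test: would removing speed have prevented the prize?
actual = wins_prize(fast=True, graceful=True)
counterfactual = wins_prize(fast=False, graceful=True)
sc_says_speed_relevant = actual and not counterfactual
print(sc_says_speed_relevant)  # -> False: (SC) wrongly rules speed out

# (RC)'s test: check the link to the more direct effect instead.
rc_says_speed_relevant = (speed_judge_approves(True)
                          and not speed_judge_approves(False))
print(rc_says_speed_relevant)  # -> True: speed comes out relevant
```

The simulation is only a bookkeeping device, but it shows where the two tests come apart: grace "backs up" speed at the level of the prize, not at the level of the speed judge's opinion.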
(RC) has another advantage over (SC). Several philosophers have
suggested that a "metaphysical independence" clause needs to be includ-
ed in any counterfactual account of causal relevance. 23 We don't want to
say that the property of causing the jubilation explains why the election
caused the jubilation. Diagnosis: causing the jubilation is not a property
of the election that's metaphysically independent of the jubilation, i.e.,
it's not a property the election has or lacks, irrespective of the jubilation
occurring or not occurring.
But if a "metaphysical independence" clause is desirable, the place
to put it is in the first part of (RC), not in (SC).

(RRC) If e1 is the first event in a causal chain e1, e2, ..., en, and F is
a property of e1,
(1) F is relevant to e1 causing e2 iff: (a) if e1 hadn't had F,
e1 wouldn't have caused e2; and (b) e1 having F and e2
are metaphysically independent of each other.
(2) F is relevant to e1 causing en iff: F is relevant to e1
causing e2.

(RRC) implies, correctly, that it's not impossible for there to be a prop-
erty of e1 relevant to e1 causing en, but metaphysically dependent on en
occurring. What must be the case, for this to happen, is that en is an
indirect effect of e1.
Consider this story: At 12:00 I hear the phone ring at the other end
of the house and let it ring for 3 minutes. Using my crystal ball, at 12:01
I detect that the ringing has the property of causing the dog to leap at
12:03 (property Q). But I am puzzled: the dog continues to sleep at my
feet. So at 12:02 I open a door that seems to be making it hard for the
dog to hear the phone. At 12:03, the dog leaps. Property Q is relevant to
the leaping, even though it's metaphysically dependent on the leaping.
Property Q is relevant in the first instance to the connection between
the ringing and the puzzlement. Clause (1) of (RRC) explains why: (a)
if the ringing hadn't had Q, it wouldn't have caused puzzlement, and (b)
the ringing's having Q is metaphysically independent of the puzzlement.
Quite properly, clause (2) of (RRC) allows Q to be relevant to the dog
leaping. Because Q is relevant to the ringing causing the puzzlement, it
is relevant to the ringing causing the leaping. It doesn't matter that the
ringing's having Q metaphysically depends on the leaping taking place.
To conclude: we ought to abandon (SC) in favor of a theory of
causal relevance congruent with the Continuity Principle. We may be
able to do that and still accept some sort of counterfactual account of
causal relevance. (RRC) is compatible with the Continuity Principle
and has two additional advantages over (SC): (1) (RRC) allows for
causal relevance in cases involving preemption; (2) (RRC) handles
properties like Q in the right way - it makes them irrelevant in most
cases, without making them necessarily irrelevant, in all cases. (RRC)
may not, ultimately, be acceptable; but it's a better theory than (SC).
Relying on (RRC) rather than (SC), Block would have no counter-
example to (WHOL). Given (RRC), the content of Zelda's first thought
r must be relevant to r*, if it is to be relevant to her behavior b. Block's
attempt to reconcile the formal nature of primitive processors with the
causal efficacy of content fails.

CONCLUSION

The price of accepting computationalism is being forced to abandon
a certain intuition: that it is the very content of your thought that
p&q that makes it cause your thought that p. Those who defend the
causal relevance of content are not willing to pay this price. They
think they can adopt the computational account of mental processes,
and vindicate their intuitions as well. But, as I have argued, this is a tall
order. So long as we conceive of content as supervenient on extremely
complex relational properties of symbols, and we conceive of primitive
processors as formal mechanisms, there is no way to work content into
the causal picture.
Content epiphenomenalism cannot be avoided by distinguishing
between wide and narrow content. The narrow content of a thought
may be "in the head," but still supervenes on global properties of the
thought's internal environment. It is just as implausible that narrow
content is causally relevant as it is that wide content is causally rele-
vant.
Should computationalism be rejected in order to give content causal
status? Presumably, the answer is No. But the question cannot be
answered fully without an account of the explanatory role of content in
psychological theories. I would wager that this role can be understood
non-causally, but that is a view to be presented elsewhere. 24

NOTES

1 Throughout the paper, I will be viewing events as concrete particulars, which can
instantiate many different properties, not as property exemplifications.
2 For example Horgan (1989), Lepore and Loewer (1987, 1989), and Heil and Mele
(1990) defend the causal relevance of content without apparent concern over the specific
mechanisms that are involved in mental processes. Ned Block (1990) is one philoso-
pher who pays proper attention to the conflict between affirming the causal efficacy of
content and endorsing a computational view of mental processes. Block's views are
discussed in section IV below.
3 In this paper I will be concerned exclusively with orthodox computationalism. Prop-
ositional attitudes, representations, content, processors, etc., are viewed sufficiently
differently by connectionists that the causal relevance of content in connectionist sys-
tems merits separate treatment.
4 Various stronger positive constraints have been offered, but I find all of them prob-
lematic. In (1983), Block suggests that primitive processors are typified by being
explained nomologically (p. 597). A primitive processor yields particular outputs for
inputs because it instantiates some basic physical law. But Haugeland rightly points out
that the best account of a primitive processor's capacities may fit the pattern of "system-
atic" explanation, not that of nomological explanation (1980, p. 262): it may be that the
best account of the way a processor works describes the functionally significant parts
of the processor and their interconnections. There is no reason to deny that primitive
processors have systematic explanations, as long as the processor-components invoked
in such explanations aren't symbol manipulators.
In (1990), Block proposes that a primitive processor is a processor that must be
explained by some field other than cognitive science - e.g., physics or neurophysiology.

The primitive processors of a system are the ones for which there is no explanation
within cognitive science of how they work; their operation can be explained only in
terms of a lower level branch of science, physiology, in the case of human primitive
processors, electronics in the case of standard computer primitives. (p. 142)
The problem is that, on this view, primitive processors by definition are not content-
driven. For physicists and neurophysiologists do not explain anything by invoking
content. But it seems to me that it takes an argument to show that primitive processors
aren't content-driven; it shouldn't fall immediately, and trivially, out of the definition
of "primitive processor."
The same problem affects Haugeland's definition of a primitive processor as a pro-
cessor whose information processing abilities are explained "by physical instantiation"
(1980, p. 262). It may be true that a primitive processor's capacities must be explained
by appeal to physical, rather than semantic, properties of symbols, but I hesitate to
define "primitive processor" so that this is a trivial, definitional fact.
5 John Haugeland offers a useful overview of various sorts of primitive processors in
chapter 5 of (1985).
6 What has P and C, to be more exact, is the representation constitutive of r. The event
r has the property of having a constituent with P and C. I will continue to speak loosely,
in the text.
7 Kim seems to be undecided what to say about overdetermination. In one place he
says that in cases of overdetermination, each cause or factor is incomplete, in the sense
that "failing to mention either of the overdetermining causes gives a misleading and
incomplete picture of what happened, and that both causes should figure in any complete
explanation of the event" (1989, p. 91). Kim also toys with the idea that causes in cases
of overdetermination are incomplete in the sense that on close inspection, they are really
only partial causes (1988, p. 237). The third option Kim considers (1988, p. 237) is to
view overdetermination as an exception to the explanatory exclusion principle. This is
the option I favor.
8 For the framework I use in this section, I am indebted to Kim's papers on explana-
tory exclusion (1988, 1989, 1990). However, the conclusions I reach are contrary to
his. He defends the causal relevance of narrow content (1990); I will be arguing that
neither wide nor narrow content plays a causal role, if a computationalist view of mental
processes is correct. See note 11.
9 Fodor understands syntax (syntax1, as I shall say) in this way (1980). Syntax has been
understood in a second, importantly different way. On this conception, the syntax (or
syntax2, as I shall say) of a representation is its grammatical or logical structure. The
syntax2 of "John loves Betty" is its being a two-place predication, with "loves" as the
predicate, "John" as the first argument, and "Betty" as the second argument. Objects
with completely different physical properties could be tokens of that syntactic2 type. It
is not obvious (to me at least) just what syntactic2 properties are on this conception; that
is, just what makes two physically different objects tokens of the same syntactic2 type.
But some say that syntactic2 properties are functional properties (Devitt 1991; Block
1990).
Is it tendentious to start with the hypothesis that primitive processors respond to
syntax1 rather than syntax2? I don't think so. If syntactic2 properties are functional
properties, then it is just as problematic to accept the causal role of syntax2 as to accept
the causal role of content. But it is uncontroversial, I think, that processors are sensitive
to the brute physical properties (the syntax1) of inputs. Those who think primitive
processors detect syntax2 are being misled by the following consideration: if one were
to define what an AND-gate or an OR-gate is, one would want to cover physically
diverse devices. One would therefore specify the inputs and outputs of these devices
in terms of syntax2, rather than syntax1. It doesn't follow that any specific AND-gate
detects syntactic2 properties.
"Syntax" in the text will always means syntax1.
10 I shall assume familiarity with the distinction between wide and narrow content, and
with various accounts of content, throughout the paper. See Fodor (1987), Cummins
(1989), Block (1986).
11 I have in mind "strong supervenience," since this is what is required to make the
C-explanation derivative from the P-explanation. See Kim (1984, 1990a) for definition
and discussion of various forms of supervenience.
Kim holds that narrow content can be assigned a causal role via the supervenience
option (1990). He conceives of a mental event as the instantiation of a neurophysio-
logical property by an object at a time, but doesn't speculate about the nature of the
neurophysiological property. A computationalist will conceive of that property as being
too thin to be a supervenience base for narrow content. I see no reason to think that any
reasonable way of identifying mental events will facilitate the supervenience option,
but that issue lies beyond the scope of this paper.
12 All of this is perhaps not what one would expect. The great advantage of a causal
theory of content over a functional role theory, in Fodor's view, is that the former is
"atomistic" and the latter "holistic." That is, on the former, contents can be assigned
to representations one at a time, and on the latter, contents can be assigned to rep-
resentations only in groups. It might seem to follow that narrow content, as Fodor
"constructs" it, supervenes on more localized properties of brains than narrow content,
construed functionally. But not so. In either case, the narrow content of a representation
supervenes on quite global facts about its cognitive environment.
13 This possibility is suggested by Block and Bromberger, in their response (1980) to
Fodor's paper, "Methodological Solipsism Considered as a Research Strategy in Cog-
nitive Psychology" (1980).
14 I borrow the nice phrase from Stephen Yablo (1992), who thinks physical properties
do support the causal aspirations of mental properties. But he does not consider the
issue of mental causation in a computational framework.
15 This option emerged out of a discussion with Ran Lahav.


16 I owe the argument in the next two paragraphs to Stephen Schiffer (1987). In
explaining what is implausible about the view that behavior is overdetermined by dis-
tinct (non-identical) mental and neural states, he writes,

If this sort of causal overdetermination obtained, then a mental event could never cause
a bodily movement except in a case of causal overdetermination where there was a
simultaneous and distinct neural cause of the movement... This causal superfluousness
is hard to believe in; it is hard to believe that God is such a bad engineer. Certainly this
causal superfluousness is not a feature of the kind of causal overdetermination that is
unproblematic. If the firing of a gun and a soprano's hitting a high note are simultaneous
causes of the shattering of a wine glass, we do not suppose that either cause could only
have been causally operative in the presence of a cause of the other type. (1987, p. 148)

He makes a similar case against the view that a mental event has its causal powers
overdetermined by mental and physical properties.
17 Segal and Sober (1991, p. 4) present a similar counterexample to Fodor's nomolog-
ical account of causal relevance.
18 I accept Block's position on functional properties. Related arguments, also per-
suasive, are made by Jackson and Pettit (1988). If content properties are functional
properties, then there is more than one sound argument for content epiphenomenalism.
19 See Horgan (1989), Antony (1992), Lepore and Loewer (1987, 1989), and Kazez
(forthcoming) for problems and/or revisions.
20 See note 19.
21 Authors of counterfactual accounts of causal relevance (Lepore and Loewer 1987;
Horgan 1989) have not tried to accommodate phenomena like preemption and
overdetermination. They have seen them as tangential to the main contours of a
counterfactual account of causal relevance.
22 This solution to the problem of preemption should come as no surprise. Preemption
is as much a problem for a counterfactual view of causally relevant properties as it
is for a counterfactual view of causation itself. Suppose c preempts c* as a cause of
e. It's not true that e counterfactually depends on c: if c hadn't occurred, c* would
have, and e still would have occurred. David Lewis's solution (1973) takes into account
the causal chain connecting a preempting cause, c, and its effect, e. While e does not
counterfactually depend on c, there must be a chain linking c to e - c, c', c'', ..., e -
in which c' counterfactually depends on c, c'' on c', and so on.
23 For example, Lepore and Loewer (1987), Horgan (1989).
24 For valuable comments on earlier versions of this paper, I am grateful to Doug
Ehring, Ray Elugardo, Kihyeon Kim, Ran Lahav, Steve Laurence, Chris Swoyer, and
especially Rob Cummins.

REFERENCES

Antony, Louise (1992), "The Causal Relevance of the Mental: More on the Mattering
of Minds." Mind and Language 6: 295-327.
Block, Ned (1986), "Advertisement for a Semantics for Psychology." In: P. A. French
et al., eds., Midwest Studies in Philosophy, Vol. X. Minneapolis: University of
Minnesota Press.
Block, Ned (1983), "Mental Pictures and Cognitive Science." Reprinted in W. Lycan,
ed., Mind and Cognition: A Reader. Oxford: Blackwell.
Block, Ned (1990), "Can the Mind Change the World?" In: G. Boolos, ed., Meaning and
Method: Essays in Honor of Hilary Putnam. Cambridge: Cambridge University
Press.
Block, Ned and Bromberger, Sylvain (1980), "States' Rights." Behavioral and Brain
Sciences 3: 73-74.
Cummins, Robert (1983), Psychological Explanation. Cambridge: MIT Press.
Cummins, Robert (1989), Meaning and Mental Representation. Cambridge: MIT Press.
Dennett, Daniel (1974), "Why the Law of Effect Won't Go Away." Reprinted in Brain-
storms. Cambridge: MIT Press (1978).
Devitt, Michael (1991), "Why Fodor Can't Have It Both Ways." In: B. Loewer and G.
Rey, eds., Meaning and Mind: Fodor and His Critics. Oxford: Blackwell.
Fodor, Jerry (1980), "Methodological Solipsism Considered as a Research Strategy
in Cognitive Psychology." Reprinted in Representations. Cambridge: MIT Press
(1981).
Fodor, Jerry (1987), Psychosemantics. Cambridge: MIT Press.
Fodor, Jerry (1990), "Making Mind Matter More." In: A Theory of Content. Cambridge:
MIT Press.
Haugeland, John (1980), "The Nature and Plausibility of Cognitivism." In: J. Hauge-
land, ed., Mind Design. Cambridge: MIT Press.
Haugeland, John (1985), Artificial Intelligence: The Very Idea. Cambridge: MIT Press.
Heil, John and Mele, Alfred (1990), "Mental Causation." American Philosophical
Quarterly 28: 61-71.
Horgan, Terence (1989), "Mental Quausation." Philosophical Perspectives 3: 47-76.
Jackson, Frank and Pettit, Philip (1988), "Functionalism and Broad Content." Mind 97:
381-400.
Kazez, Jean (forthcoming), "Can Counterfactuals Save Mental Causation?" Aus-
tralasian Journal of Philosophy.
Kim, Jaegwon (1984), "Concepts of Supervenience." Philosophy and Phenomenological
Research 45: 153-176.
Kim, Jaegwon (1988), "Explanatory Realism, Causal Realism, and Explanatory Exclu-
sion." Midwest Studies in Philosophy 12: 225-239.
Kim, Jaegwon (1989), "Mechanism, Purpose, and Explanatory Exclusion." Philosoph-
ical Perspectives 3: 77-108.

Kim, Jaegwon (1990), "Explanatory Exclusion and the Problem of Mental Causation."
In: E. Villanueva, ed., Information, Semantics and Epistemology. Cambridge:
Blackwell.
Kim, Jaegwon (1990a), "Supervenience as a Philosophical Concept." Metaphilosophy
21: 1-27.
Lepore, Ernest and Loewer, Barry (1987), "Mind Matters." Journal of Philosophy 84:
630-641.
Lepore, Ernest and Loewer, Barry (1989), "More on Making Mind Matter." Philosophical
Topics 19: 175-191.
Lewis, David (1973), "Causation." Reprinted in Philosophical Papers. New York:
Oxford University Press.
Schiffer, Stephen (1987), Remnants of Meaning. Cambridge: MIT Press.
Segal, Gabriel and Sober, Elliott (1991), "The Causal Efficacy of Content." Philosophical
Studies 63: 1-30.
Yablo, Stephen (1992), "Mental Causation." Philosophical Review 101: 245-280.

Department of Philosophy
Southern Methodist University
Dallas, TX 75275-0142
USA
