Philip Clapson
To cite this article: Philip Clapson (2001) Consciousness: The Organismic Approach, Neuropsychoanalysis: An
Interdisciplinary Journal for Psychoanalysis and the Neurosciences, 3:2, 203-220, DOI: 10.1080/15294145.2001.10773356
To link to this article: http://dx.doi.org/10.1080/15294145.2001.10773356
Descartes
make their way to the brain, which, transferred to the
mind, are felt, and this causes, or allows the I (of, or
as, the mind) to initiate remedial and precautionary
action. Descartes' language, as is characteristic of this
kind of discussion, is not precise. Mind, self, soul, and
I are often interchangeably deployed. Descartes was
aware of the brain and that it had great powers; but
the true being of the human was the mind as a distinct
substance with its own characteristics.
There seems to be no quibble that our experience
involves the awareness of the world, and of our
thoughts and feelings. There seems to be, in conscious
experience, something grasped; and indeed the possibility of grasping anything, about the world or ourselves, seems to depend upon the fact of consciousness.
where (or how) the forces of the world as Will representationally act themselves out.
For Hume, the I becomes opaque to the mind, for
there is no way of looking into the mind to find the I
whilst it is looking into itself. 2 But where, then, is the
I? Kant (1781) attempted to resolve Hume's difficulty
by making the I the logical requirement of experience
itself (thence Fichte). But he also gave this I a noumenal character to elevate it from the causal forces of the
world (thence free will). His solution has not found
much favor (thence Schopenhauer).
The mind, characterized thus, is deeply problematic. It seems to defy a simple account, or any functionally consistent explanation; and besides: (1) What, ontologically and operationally, exactly is it? (2) How can its operation in the physical universe be understood?
Bewildering Experience
What is not at issue is that whatever goes on in the
brain and nervous system is operating to effect what
the body does. If we could imagine the body operating
without experience, then we would be dealing with a
"pure" physical entity acting in the world. We would
be dealing with one of the philosophers' favorite characters: the zombie. This entity has sensory organs, an
operative body, a ratiocinating brain full of plans and
programs designed to achieve prespecified and developing goals. It is, in fact, just the same as a human
being except that it does not experience. And for such
philosophers as David Chalmers (1996), therein is the
mystery: Why do humans experience? What is its
point?
I put this slightly differently from the strictly nonreductive thesis of Chalmers and fellow travelers, because the nonreductiveness is really secondary. The
central and positive question is: What is experiencing
for? Its nonreductiveness or potentially epiphenomenal character only becomes an issue if no biological
purpose for experiencing is identifiable. And the difficulty lies exactly where Descartes posed his solution.
2 About this most writers are absolutely explicit, for example Dennett (1991, 1996) and Damasio (1999); see below.
Into Biology
Transition to the Organismic Approach: Various Writers
It is indicative of its power that even those who do
not accept nonreductive positions like Chalmers' can
find themselves snared by the soul notion.
The organismic approach resists what consciousness appears to reveal "to us." It is de facto anti-Cartesian. Why? Because what the organism does it
must do without recourse to what we appear to be as
"a knowing thing" as consciousness, unless we insist
on elevating it above biology (cf. Clark's final chapter,
1997). There is no way of creating a divide between
organism and experience, and granting experience
some grounds the organism does not have, without
accepting the sense, if not the letter, of Descartes' proposal (otherwise, in Ryle's words [1949], the ghost
remains in the machine).
John Searle's (1992) particular physicalism attempts to do so by saying that brains are of such a
biological character that consciousness (whatever that
is deemed to be) just is its result. Thus, mental processes in consciousness, like introspection and transparency, can be understood as a given of the nature
of the emergent domain consciousness is. But, as
many understand it, this is nonexplanatory and/or implausible.
what? Behavior. Why? How? Dennett never convincingly explains.
Since, as is well known, our behavior is extensively controlled by neural processes that are never conscious, including some decision taking;8 since much that we are conscious of seems irrelevant to our current behavior;9 since sometimes in consciousness we say things differently from what we actually intend and do; and since, as Freud pointed out, we often do things while explaining ourselves with reasons that are not why we are doing them at all (see Nisbett and Ross, 1980, for experimental results), what is needed is an explanation of the significance of those processes that actually are conscious.
Thus Dennett's account, that our judgments, our narrative, our images control our behavior, simply does not adequately explain why they are actually there as they are (or are not) while behavior still goes on. The soul might necessarily meaningfully control (or Descartes' mind with its clearly thought self-understanding), but this is not what consciousness necessarily does.10 Moreover, even if, for Dennett, conscious
states are brain states, he is still left with the apparent
duality of domains controlling our behavior: viz, why
does control have to be this emergent consciousness
as and from the physicality?
He says that: "Such questions betray a deep confusion, for they presuppose that what you are is something else, some Cartesian res cogitans in addition to
all this brain-and-body activity" (1997, p. 206). But
that is not the issue. The issue is: Why/how do just
these images, words, feelings, moods, thoughts control
what the organism does? What is the relation between
my seeing the roses and my buying some explained as
brain function? What functional contribution to what
I do (as organism) is it that I have a seeing (or thought
or feeling), point 4? The soul notion may embrace
domains as me or mine, but it cannot tacitly be assumed to explain their interconnection; it does not.
The neurobiologist, Antonio Damasio, whose
work The Feeling of What Happens (1999) has been
widely discussed, takes an explicitly organismic approach. "Consciousness begins when brains acquire
the power ... of telling a story without words, the
story that there is life ticking away in an organism,
and that the states of the living organism, within body
bounds, are continuously being altered by encounters
8 Experiments of unconsciously influenced decision making (e.g., Damasio, 1999, p. 301).
9 Not only our rich explicit fantasy life, but much else besides.
10 Memory, for example, is procedural as well as factual and personal;
but procedural memory (e.g., learning to ride a bicycle) is not rendered
by conscious conceptual thought.
with objects or events in its environment, or, for that
matter, by thoughts and by internal adjustments of the
life process" (p. 30).
The crucial move Damasio makes is to locate
consciousness as an organismic biological component.11 What that component does, in principle, is portray the encounter between organism and world
(including itself as "inner" world). It is a story of
organismic engagement. So Damasio takes on point 4
directly. Thus he disengages the intuition of consciousness as the viewed scene of the observer, Dennett's Cartesian Theater. Moreover, and more true to
the phenomenology, he does not insist on Dennett's
unconvincing essential tie of consciousness and the
control of behavior.
Damasio takes as a central fact the findings of
Libet, Curtis, Wright, and Pearl (1983). In the last
two decades, Libet's location of the delay between
the nonconscious beginnings of voluntary action and
awareness of the same has carried the implication that
consciousness (and thus what we experience) cannot
be a state that has its own powers of initiation or
control of action. 12 For consciousness seems to come
at the end of the brain (and nervous system) processing
cycle, when action (for example) has already been initiated; the delay is between 350 and 500 msec.
Damasio's original project (1994) along these
lines was to rehabilitate emotion into an account of
the reasoning processes of the human organism.
Broader now, as the title of this book indicates, he
looks to link the facts of experiencing and the facts of
neurophysiology, thus to position consciousness in the
biology of the organism. The following summarizes
various elements of Damasio's position:
The idea of consciousness as a feeling of knowing is
consistent with the important fact I adduced regarding
the brain structures most closely related to consciousness: such structures, from those that support the
proto-self to those that support second-order map-

11 One reason for this is that Damasio does not come from, or endorse, the computational view.
12 For example, "The brain evidently 'decides' to initiate, or, at least,
prepares to initiate the act at a time before there is any reportable subjective awareness that such a decision has taken place. It is concluded that
cerebral activity even of a spontaneously voluntary act can and usually
does begin unconsciously" (Libet et al., 1983, p. 640). However, Libet's
own explanation of this, that consciousness acts potentially to veto unconsciously initiated action, is dualistic and implausible, since any veto consciousness imposes must itself be unconsciously initiated 350 to 500 msec
previously, as commentators have pointed out.
Libet's findings actually support our common experience. In fast
sporting action "decisions" are taken before awareness of them. And we
duck before hearing the thunder, when it is overhead.
terms) for the organism?13 While neural states function, we are simply adding in another stratum of apparentness, that of the mind where all this
representational occurrence is taking place. And why
should this bring evolutionary benefits? That there are
occurrences does not explain why those occurrences
function as claimed. Specifically, a feeling of pain
does not explain its function, since in Descartes' account, the mind has to be assumed for the function to
operate, and so far no fundamental explanation of the
mind exists.
Moreover, as the Libet data suggest, everything
that is involved in the causal process of seeing the
roses and deciding to buy occurs unconsciously. We
are left purely with the results of the process (the neural portrayal) as what is conscious. Consciously I decide nothing. My consciousness simply follows along
behind what my brain is deciding nonconsciously.
"The idea that consciousness is tardy, relative to the
entity that initiates the process of consciousness, is
supported by Benjamin Libet's pioneering experiments," Damasio says (p. 287).
Thus Damasio's account remains on parallel
tracks. The nonreconciliation is caused, we diagnose,
by the soul notion; because, in the principle of the
soul, whatever is going on in the organism has to be
in and for the organism qua organism; both neural
states and their conscious portrayal: isolated, solipsistic, self-operative.
Damasio correctly begins by establishing the precedence of point 1 (organism) over 2 (consciousness),
and attempts an explanation of point 4 (biology).
But because point 3 (psychology) is still his operative
understanding, he lapses in his further explanation to
a position where points 1 and 2 are equal and irreconcilable.
Indeed, it has been said truly that neuroscience
has not yet done its job: It has no explanatory domain
for the neuro- but simply grafts psychological concepts onto brain locations and their interconnections.
In the words of J. Graham Beaumont (1999): "Neuropsychology is in a conceptual morass. Neuropsychologists seek to study the relationship between brain and
mind, but without ever really addressing the status
of these two constructs, or what potential forms the
relationship between them might take" (p. 527).
The wretched ghost still haunts the feast.
Some writers, on Libet et al.'s findings, have come to the conclusion that there is something mysterious going on about what is happening as consciousness.14

13 That feelings (or sensations) are motivational is a standard premise in neuroscience and neuropsychology.

One such is Guy Claxton. In his paper,
"Whodunnit? Unpicking the 'Seems' of Free Will"
(1999), Claxton attempts to reconcile Libet with the
sense, in consciousness, that we have free will. The
notion of free will (of the mind, self, soul, person,
etc.), it has been long argued, does not coincide with
the determinism that neurophysiology implies.
The sense of free will, the seems of it in Claxton's
terms, is exactly knowledge occurring as consciousness to which Damasio refers. But this "knowledge"
is clearly not factual or justified (in philosophers'
terms). It is simply the that that the process of acting
can incorporate as a portrayal of the requisite neural
states. When I buy the red roses, I seem to be (I have
a sense that I am) acting under my own volition. Dennett's heterophenomenology is apposite because, although I may say I act voluntarily, my subjective experience does not describe the reality in the physical universe; yet that I describe myself in this manner from my experience is an important fact about how neural states can portray themselves. Claxton gives an example this way:
Conscious premonitions are attached precisely to actions that look as if they are going to happen, and then sometimes don't. What folk psychology construes as "will power," with its notorious "fallibility," turns out to be a property not of some imperfect agent but a design feature of the biocomputer. ... An updating of prediction [as when we appear to "change our mind"] is reframed as an inner battle between conflicting intentions, and as further evidence of the existence of the instigatory self [p. 109].
Claxton recruits Libet's findings to mark the differentiation between what the organism does in its modus operandi and what is portrayed as, for example, the "self-as-instigator."15 This is undoubtedly in the organismic camp.16 The features raised, put together
with others in this section, build an emerging picture
which we will address explicitly later in this paper.
But still, of his account, we must ask: what exactly is the point of there being both a biocomputer and a conscious experience which is deceptive of its actual powers? For it must be the biocomputer that causes the conscious experience, and if the conscious experience is, in some sense, a fake or an illusion or deceptive17 (all these words are used in the literature to describe our experience as opposed to some other, e.g., neural, actuality): (1) What is the point of our experience in the first place? (2) Why is experiencing's illusory nature, in this account, simply (in Claxton's words) "comforting"? (To what, the soul?)
Claxton does not address either of these points.
A similar problem arises with the views of Peter
Halligan and David Oakley in their New Scientist article (2000), which refers to (and endorses) Claxton's
paper. Halligan and Oakley confuse a mentalist model of the conscious-unconscious with the model conscious-neural, their "unconscious parts of the brain." There is an assumption that whatever neural
processing is doing, which is of course inaccessible
per se to consciousness, it supports the still justifiable
distinction, which Freud's work exemplifies, between
an unconscious mind full of mechanisms, schemes,
and plans, including popping things into consciousness, and a consciousness ("us"), which gullibly accepts all these pulled strings of the unconscious. They
finish their article thus:
Perhaps by now you will have begun to think of yourself differently, to realise that "you" are not really
in control. Nevertheless, it will be virtually impossible to let go of the myth that the self and free will
are integral functions of consciousness. The myth is
something we are strongly adapted to maintain, and
almost impossible to escape from. Maybe that's not
15 He also seems to endorse Libet's own account of the significance
of consciousness, which we reject.
16 Although, for similar reasons to Dennett, the computer analogy is
not helpful.
17 For example, The User Illusion is the title of Tor Norretranders'
book on consciousness (1998).
bly doing things is redundant, an unhelpful façon de
parler. For example, the notion that the mind associates memories or thoughts with the image of the tree
offers nothing over and above the brain binding its
structure to give whatever occurs. Suppose I think of
another tree I saw, or a girl I once knew under a similar
tree, or a past tree struck by lightning: All that needs
be said is that the brain has bound whatever is apposite
to its operation now. It is occurrent.
Consider further this line of argument. Every neural state that is bound as "consciousness" expresses
(portrays) the neural status of the brain. Therefore we
need not suppose there is a neural connection between,
for example, memories qua memories. With the image
of the tree and the memory of a girl under a similar
tree, my "consciousness" does not thus remember the
girl by (causal) mental association. She is simply a
bound neural state. Of course, the girl may occur, associated with bindings of other neural elements that bespeak "this is the past girl under a similar tree," but she may not, explicitly or immediately or ever. The girl-binding may be a neural process resting on some kind of neural association, that is, neural connections laid down when the original experiences occurred as neural states, or linked to others for brain-biological functioning. As they occur, part of the way they are expressed as "consciousness" may involve the sense that there is a mentalist association. But as in Claxton's example of the sense
of free will as causal, we would consider this an aspect
of the portrayal process, not indicative of mental powers. The girl may occur as a mere unrecognized
fragment.
Thus the argument would conclude that "consciousness" is not causal upon what happens in the
organism. Neural states cause each other; they cause
behavior; they also cause what is "conscious." But a
"consciousness" does not cause another "consciousness." Nor does it cause nonconscious activity of the
brain: that is, the brain does not read its own conscious
states on the supposition that otherwise it would not
know them (as is the case in, for example, Baars's
Work Space Theory of Consciousness [1996]).
This denial of conscious causality is most dramatically understood by pain not being a motivator. 19
However the neural states causing the experience of
pain achieve their result, the pain itself causes nothing
in the organism. For what the organism does, in tak-

19 That the brain in fact regulates pain felt in appropriate contexts (i.e., that the experience is not a necessity of physically damaged states), is described by Patrick Wall in his book Pain: The Science of Suffering (1999).
operate. For this is what they have separately but in
common, and need common command of.
And there is a requirement, too, that coexisting,
or correlated elements of what is to be conveyed in the
communication should be copresented, or integral, as
the communication. Significant interrelation of the elements also needs to be represented. 22
Now to develop what is involved in this proposal,
we build up the steps:
1. For what the organism takes in and manipulates of the environment and conspecifics, what is
needed is a fast, task-specific yet widely comparative/
significance-balancing processing structure, to facilitate behavioral response to a complex and changing
environment.
2. For communicating to conspecifics, what is
required is a reciprocal, apposite, and consistent codification between the conspecifics.
3. Thus in the communication of A behaving in
the environment which B is to receive, A must assume
in B that what it (A) is communicating can be received
by B, and then reciprocated by some appropriate response.
4. If A is behaving in an environment which will influence B's behavior (for example, if they are hunting lions with a gazelle in view), A must suppose implicitly that B can represent "A's behavior in the environment" and respond to it.
5. Now consider what happens in B as a result
of receiving the communication of A. B processes the
scene by its complex silent method that does not require representational structures that have the form or
content of the environment. What is involved in its
output to A? Well, B also cannot communicate the
environment in which it responds to A; it also must
assume (implicitly) that A is able to represent "B's
behavior in the environment" in A.
6. So, without any notion of the environment or
behavior or the gazelle or the conspecific, communicative success between the lions must assume all aspects
are reciprocally represented within them by some
means.
7. Thus there are two representational domains
involved in communication, input and output. And
there are two relevant factors: the actual behavior and
gestures of the organisms, and the environment of
their actions.
22 The obvious distinction being drawn is between the massive neural network, parallel processing of the brain, and the (brain) images of "consciousness."
8. B processes input from A by the complex silent method. The "conscious" state in B, as a result, then entails A's movement, A's gestures, the gazelle, and the environment. That very "consciousness," it is concluded here, is the response of B's organism to A's input.
In other words, from the input processing in B,
the "consciousness" of the scene in B becomes the
analogical output to A. It is in a representational form
that fits with A's requirement for adequate communication. This is because it is the analogical recreation
in B of what A is doing in the environment, together with B's actual reaction: thus, output.
To lay it out further: If A moves to the right, B's silent neural states analyze A's action, and then A's move to A's right is created as B's "consciousness." For A (implicitly), this is B's response (i.e., output) to A's move. But in addition, if, for hunting success, B also should move in a direction in accord with A's move, B's silent processing of A's move to A's right also causes B's move. Though this is a move per se, it is also output to A. Thus the input processing locks
each to the other in the hunt program, and the output
signifying that interlock is the reciprocal "conscious"
state (which the other must assume) together with the
behavioral change.
What is explicitly communicated is movement, gesture, and vocalization: what we call the reactive element. What is implicitly communicated is the environment, including the conspecific and gazelle: what we call the contextual element.23 Of course the commonality of context is not precisely the same; it has, as it were, a plasticity within its mutually defined operative locale.
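Steps 1 through 8 amount, in effect, to a small protocol, and a toy simulation can make the reciprocal structure concrete. The sketch below is purely illustrative and not the author's model; every name in it (Organism, silent_process, portrayal, the coordination table) is hypothetical. It shows "silent" input processing yielding both a behavioral change and a bound portrayal of the scene, the portrayal doubling as the analogical output to the conspecific.

```python
# Toy sketch (not the author's model) of steps 1-8: a partner's move is
# processed "silently," producing (a) a coordinated behavioral response
# and (b) a bound portrayal that serves as output back to the partner.
from dataclasses import dataclass, field


@dataclass
class Organism:
    name: str
    portrayal: str = ""                      # the bound "conscious" state
    behavior: list = field(default_factory=list)

    def silent_process(self, partner_move: str) -> str:
        # Nonconscious analysis: pick a move coordinated with the partner
        # (a stand-in for the "hunt program"; the table is hypothetical).
        coordinated = {"right": "flank-left", "left": "flank-right"}
        my_move = coordinated.get(partner_move, "hold")
        self.behavior.append(my_move)
        # The portrayal entails the partner's move, the scene, and the
        # reaction together: it is the analogical output to the partner.
        self.portrayal = (f"{self.name} registers partner moving {partner_move}; "
                          f"gazelle in view; responding {my_move}")
        return my_move


b = Organism("B")
response = b.silent_process("right")  # A moves right; B's silent states respond
print(response)                       # B's behavioral change
print(b.portrayal)                    # B's "conscious" state, i.e., output to A
```

Note that nothing in the sketch "reads" the portrayal to decide behavior: the behavioral response and the portrayal are both products of the silent processing, which is the point the steps are making.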
9. Put another way, when you and I are talking
(which lions cannot) we do not have to say to each
other constantly: There is a wall, there is a chair, there
is a table, there is a window. Our organisms are already communicating this (nonconceptual) commonality that we assume by our common image as our
separate "consciousnesses." When my words (the input) are processed by your silent neural states and then
become "conscious," you take them as coming from
me (without considering it), for I am represented as
your "consciousness" too. But when they are your
"consciousness":

Already they represent your (organismic) output to me, for already they have associated the reaction you (organ-
68).

23 Here we are treating of the "directly successful" proper function (Millikan, 1984) of "consciousness." Output continues whether there is a conspecific around or not.
Now a riposte to this might be: But what if I'm
alone? I'm not communicating with anyone, so on this
principle my "consciousness" is redundant. But this is not so. The biological proper function of "consciousness," in Millikan's sense, is to be the portrayal of neural status, and this will happen regardless of whether anyone else is a participant in the communicative process (Millikan, 1984). The riposte, again, presupposes our experience is going on "for us," as
mind, as soul.
Suppose one is thinking and has, alone, a new
idea or realization. The organism, at some later time,
may well reuse this idea or realization. But it will
not remember and use that "conscious" state. Silent neural states will repeat and the idea will reoccur (including possibly the bound "fact of past occurrence") and may be useful in some communicative context.
Even if one is only "using the idea" again in some
isolated situation, that the idea occurs does not guide
what the organism does, but is simply available to
be the communication of whatever the organism is
operating as, those silent neural states. To say (as some do) that a verbally expressed thought feeds back per se into neural states is to fail to grasp the mechanisms,
for it preserves the notion that mentality is in the brain.
Indeed, as "conscious" appearance, no mental state
is ever exactly the same, for the brain's states will
never be the same.
A riposte to the banishment of "mind," and all
its terminology, which is so familiar to us, might be
that if one is describing what neural states are doing,
and the construct "mind" is such a description, why
is it not adequate?29 There are three responses. (1)
Mind concepts were invented before more insightful
organismic understanding occurred. De facto, starting from here, we would not presuppose the mind, and
therefore owe it no historical allegiance. (2) The brain
does things that have rendered the mind story in the
first place-and we need to understand what they are
explicitly, which the current mind story obscures. (3)
We simply will not grasp an organismic understanding, which is biologically of great significance, if we
perpetuate a fiction in the midst of it.
Most disturbing is the understanding that writers
have reached, from Freud onwards, that our existence
is planned and controlled without any conscious
awareness until whatever the brain' 'decides" to make
conscious. But we now conclude that, even when
"conscious" awareness arrives, it is not awareness in
29 Many writers might take this view, including such diverse figures
as the philosopher Donald Davidson and cognitive scientist Bernard Baars.
Since the approach here is novel, any research program requires a fresh start in both understanding and
describing what is going on in the organism, and developing an adequate descriptive vocabulary. There
may be interest in associating aspects of mentalist terminology with the new, but finding the new is the
prime aim. For space reasons, it is not possible here to lay out formally the structure and content of such a program. The following is an indication.
The Brain
states can identify objects and perform logical operations without working on any kind of problematic
symbolic content of those entities or processes. This
is all well known.
I can only raise two key issues.
1. The death of phosphorescence. The first is a
mistake about what consciousness does, which, despite material here being widely available, has not
been discarded. This is the supposed difference of processing between the conscious and nonconscious; that
when we are conscious "we" are active in a way
that is not the case when nonconscious. For example:
"PET studies, published in the 1990s, were stunning
for the sheer dynamism of the change.... When a task
was novel, wide areas of the brain were lighting
up.... But with just a few minutes' practice, the same
results could be produced with hardly any visible effort at all. A skill that had been learned or explored
in the full limelight of consciousness had become
downloaded to create a low-level action template"
(McCrone, 1999, p. 192).
This description by McCrone of Richard Haier's
experiment on students undergoing a computer game
test (typically) confuses "conscious" apparentness
with causal brain activity. It is reasonable to assume
that the brain must modify itself for the task, and to
begin with processing is wide ranging and complex.
Once modified, it can be replayed without the modification process. But this does not mean that "consciousness" pours its "full limelight"30 on the learning process, whatever that may mean. "Consciousness" is the (selected, regulated) portrayal of what the brain is doing; the brain does its work by neural processes which are not, nor can be, conscious. While
McCrone's glamorizing misdescription is so inscribed
in our vocabulary, a grasp both of "consciousness"
(the brain's communication mechanism), and the neural causation, will remain elusive.
2. Biological translation categories. But clearly,
what causes the organism to act, although describable
in neural terms, must also be describable in terms of
biological categories. For neurophysiology implements biological aims. Freud understood that human
activity realized biological aims, which themselves
were unavailable to consciousness and generalized,
and became specific in the process of realization. This
is how one object could substitute for another in satisfying
some primary biological goal. With a different intent,
but resonant insight, Lakoff (1987) and Johnson
(1987) have found within mental activity and its verbal
Phenomenology
There are various aspects to reinterpreting phenomenology upon biological and neural understanding;
some of this has happened. The brain's operation may
account for aspects of our experience: Brain states occur for a limited time as does attention; immediate
memory can hold up to only seven items on average,
which presumably is a physiological constraint; emotional states coincide with the presence of certain hormones and peptides. These physical-experiential
coincidences (with the brain deficit and imaging analyses) indicate that "conscious experiencing" is physically locatable.
More important for the approach here, however,
is to analyze "consciousness" on the understanding
that what we are dealing with is a biological interpretation of physical states. "Consciousness" portrays
the operation of neurophysiological currency, upon
which already developed structures and states are influential. It does this in terms of how it finds the world
(or its own states as memories, concerns, etc.), and
how it is reacting to that finding. It is a twin-poled
expressiveness, as Damasio, and Freud before him,
understood (cf. Freud's two [inner, outer] representational surfaces, as discussed by Mark Solms [1997]).
It is remarkable that evolution should have taken
this path. The hungry infant screams with a "conscious" state of presumably almost complete indeterminacy. The adult has a fine communicable sense of
his or her pangs of hunger together with discriminatory
premonitions of cuisine, and an inventory of appropriate restaurants. "Consciousness" is, as it were, a
biological fabric of interpreted neurophysiology that
during life is woven into communicative possibilities,
particularly with the acquisition of language. It works
because brains are evolutionarily constructed so. In
this sense we are not separate individuals. Although
we may explore the functional details in lower animals
without language-where, for example, it begins in
evolution-we appreciate that language (with little
communicative content) can function because of its
effect on the massive neural structure it activates. 31
This mostly goes unnoticed, being mostly not part of
"consciousness" in the communicators.
Many characteristics of the phenomenology, looked at this way, have simply been passed over by investigation. As in the previous section, only two key issues
can be raised.
1. The biological "sense of." "Consciousness" brings with it its own biological function, the "sense of." Claxton dwells on the sense of free will. But in fact this "sense of" underlies the whole fabric of "conscious" experience, indicating not some actuality of the person's grasp of the world (a mentalist belief category), but merely a preparedness of the organism to continue to the next moment on this indicator (a biological status or action category).
In the case of blindsight sufferers it is said, for
example, that lack of conscious experience limits their
actions. Under experiment, though they are capable of
guessing what object is in their blind field from a pre-given list with above-average success, which implies
their brains have access to some visual information,
they would never act voluntarily in relation to that
object because they assert they cannot see it. This
seems to confirm that it is the visual experience itself
that allows a normal individual to act voluntarily
31 Euan MacPhail (1998), for example, is doubtful that our conscious
fabric can be anything like that of other animals despite our sentiments,
for only language, he thinks, enables consciousness of self, which he deems
prerequisite. But his view of consciousness is not as communication.
(Marcel, 1988). But the conclusion is not so well established.
Blindsight sufferers have brain damage that prevents their organism from registering objects in a way
that, interpreted from the neurophysiology, will appear
"consciously." But this does not mean that it is the "conscious" experience that enables voluntary action. It is what the brain registers, and how it registers it (which then appears as "consciousness"), that enables voluntary action. With "conscious" experience of seeing comes
the sense that what is seen is there. But this sense
belongs to what "consciousness" delivers, not what
the organism grasps of the object that makes it prepared to act. And the sense arises so that, within the communication mechanism that "consciousness" is, the assurance that the object is there can be expressed.
If the object is uncertain, through fogginess or obscurity, the organism aims to grasp the object and cannot,
and what is apparent, being ill-defined, may refuse to
be one thing or another. What is expressed as "consciousness" is the obscure and involuntary coming
and going of the nonobject, until suddenly the object
appears with a sense of certainty. "Ah yes, it's an X.
Definitely an X." For the X is "seen." And one would
act on it. But this sense of certainty can be shattered
at the next moment because the brain has gone on
working, quite unbeknownst to what is registered as "consciousness," until there appears before one not an X but a Y, about which there is again certainty.
There is a clear distinction between what is there
and the biological function "what one is certain of."
What normal subjects confirm over blindsight sufferers is thus not that experiencing establishes a certainty about the world (the power of consciousness), but that a neurophysiological trigger that will enable the organism's behavior is portrayed. This is, of course, exactly what one would expect if one does not suppose that "consciousness" has a transcendental reality-acquiring capacity (cf. Millikan's attack on meaning rationalism [1984]).
The illusion is, therefore, contra Claxton, not our
experience, but that our experience is itself causal. The
biocomputer is not creating an illusion by the sense
of certainty of a perception: That would only be the
case if "consciousness" aspired to the causality of
the silent neural states. It does not. The sense of certainty is (in this case) to convey that the organism
would act on what is apparent in the perception. The category of belief misses this biological distinction, as indicated.
2. The death of mentalist reflexivity. Another factor in mentalist phenomenology that misleads is reflexivity. For the way the brain presents its
interpretation gives the sense that, out of our conscious states, we can and do reflect on, reconsider, or
probe the contents of our own minds. The philosophers' introspection. We do not.
If I have forgotten a name and probe the depths
of my memory, I experience: the state of having forgotten,
the state of realizing that I will have to recall because
I have forgotten, the state of attempted recall, and a
vaguer state of trying to "let my mind go blank" so
that the name will emerge. These states are sustained,
rather than being in an exclusive sequence. More accurately, the states not immediate to the moment will
seem to be on a kind of periphery (as vision has a
foveated area, with the periphery blurred or unfocused). And this tells us something about how, in the brain, the assembly of neural states, bound as "consciousness," presents itself: that there is a time/
task-localized-concurrency function (called, to different purpose in the literature, working memory) which
serves to copresent different material that contextualizes extended operation. In thinking of a name, I do
not forget that it is a name I am trying to think of. But
this does not mean that each of these, as experienced,
causally interacts with the others. They are just the
portrayal of the brain's operation, which here is a
whole made up of overlapping segments.
A zombie presumably would not need all this
elaborate presentation for, since "consciousness"
cannot be its means of communication, it (presumably) does not need communicative contextualization
of an extended operation. But a human presents the
context of its state as well as an individual segment
because only thus is it explanatory. Indeed, this very
biological function may become a disadvantage, for
"letting the mind go blank" is the attempt, by the
brain, to prevent the context presentation getting in
the way of the recall task itself.
Copresentation of the segments gives the strong
impression, ensconced in our ideas and language, that
it is our experience that is causal in the interrogation
of our experience or minds. It seems to indicate, for
example, that realizing I have forgotten the name, I
"look inward" for it. But obviously the brain does not "look inward." It tries to engage the right location
in its search method, which it presents as "looking
inward." Perhaps more than any other topic, analyzing reflexivity will disabuse us of our mentalist interpretation of the nature of "consciousness."
Summary
The aim here has been to identify and outline an understanding of the human organism that can be turned
References
Baars, B. J. (1996), In the Theatre of Consciousness: The
Workspace of the Mind. New York: Oxford University
Press.
Beaumont, J. G. (1999), Neuropsychology. In: The Blackwell Dictionary of Neuropsychology, ed. J. G. Beaumont,
P. M. Kenealy, & M. J. C. Rogers. Oxford: Blackwell.
Brentano, F. (1874), Psychology from an Empirical Standpoint, tr. A. Rancurello, D. Terrell, & L. McAllister.
London: Routledge.
Chalmers, D. J. (1996), The Conscious Mind. Oxford: Oxford University Press.
Churchland, P. M. (1981), Eliminative materialism and the
propositional attitudes. In: The Nature of Mind, ed. D.
M. Rosenthal. Oxford: Oxford University Press, 1991,
pp. 601-612.
Clark, A. (1997), Being There: Putting Brain, Body and
World Together Again. Cambridge, MA: MIT Press.
Claxton, G. (1999), Whodunnit? Unpicking the "seems"
of free will. In: The Volitional Brain, ed. B. Libet, A.
Freeman, & K. Sutherland. Exeter, U.K.: Academic,
pp. 99-114.
Damasio, A. (1994), Descartes' Error. New York: Putnam.
- - - (1999), The Feeling of What Happens. London:
Heinemann.
Dennett, D. C. (1987), The Intentional Stance. Cambridge,
MA: MIT Press.
- - - (1991), Consciousness Explained. New York: Little, Brown.
- - - (1994), Dennett. In: A Companion to the Philosophy
of Mind, ed. S. Guttenplan. Oxford: Blackwell, pp.
236-244.
- - - (1996), Kinds of Minds. London: Weidenfeld &
Nicolson.
Descartes, R. (1985), The Philosophical Writings of Descartes, Vols. 1 & 2, tr. J. Cottingham, R. Stoothoff, &
D. Murdoch. Cambridge, U.K.: Cambridge University
Press.
Fichte, J. G. (1994), Introductions to the Wissenschaftslehre
(1797-1800), tr. D. Breazeale. Indianapolis: Hackett.
Freud, S. (1895), Project for a Scientific Psychology. Standard Edition, 1:281-391. London: Hogarth Press, 1966.
Halligan, P., & Oakley, D. (2000), Greatest myth of all.
New Scientist, November 18, p. 34.
Hume, D. (1739-1740), A Treatise of Human Nature. London: Fontana.
Humphrey, N. (1983), Consciousness Regained. Oxford:
Oxford University Press.
- - - (1992), A History of the Mind. London: Chatto &
Windus.
Johnson, M. (1987), The Body in the Mind. Chicago: University of Chicago Press.
Kant, I. (1781), Critique of Pure Reason, tr. N. Kemp Smith.
London: Macmillan, 1929.
Lakoff, G. (1987), Women, Fire and Dangerous Things.
Chicago: University of Chicago Press.
Laplanche, J., & Pontalis, J.-B. (1973), The Language of
Psychoanalysis. London: Hogarth Press.
LeDoux, J. (1998), The Emotional Brain. New York: Simon & Schuster.
Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K.
(1983), Time of conscious intention to act in relation
to onset of cerebral activity (readiness potential). The
unconscious initiation of a freely voluntary act. Brain,
106:623-642.
MacPhail, E. M. (1998), The Evolution of Consciousness.
Oxford: Oxford University Press.
Marcel, A. J. (1988), Phenomenal experience and functionalism. In: Consciousness and Contemporary Science, ed.
A. J. Marcel & E. Bisiach. Oxford: Clarendon Press,
pp. 121-158.
McCrone, J. (1999), Going Inside. London: Faber & Faber.
Millikan, R. G. (1984), Language, Thought and Other Biological Categories. Cambridge, MA: MIT Press.
Munz, P. (1993), Philosophical Darwinism. London:
Routledge.
Nisbett, R. E., & Ross, L. (1980), Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs,
NJ: Prentice Hall.
Norretranders, T. (1998), The User Illusion, tr. J. Sydenham. New York: Penguin-Putnam.
Ryle, G. (1949), The Concept of Mind. London: Hutchinson.
Seager, W. (1999), Theories of Consciousness. New
York: Routledge.
Searle, J. (1992), The Rediscovery of the Mind. Cambridge,
MA: MIT Press.
Schopenhauer, A. (1818), The World as Will and Representation, Vols. 1 & 2. New York: Dover, 1966.
Solms, M. (1997), What is consciousness? J. Amer. Psychoanal. Assn., 45:681-703.
Sulloway, F. (1983), Freud: Biologist of the Mind. Cambridge, MA: Harvard University Press.
Varela, F. J., Thompson, E., & Rosch, E. (1991), The Embodied Mind. Cambridge, MA: MIT Press.
Wall, P. (1999), Pain: The Science of Suffering. London:
Weidenfeld & Nicolson.
Wittgenstein, L. (1976), Philosophical Investigations, tr. G.
E. M. Anscombe. Oxford: Basil Blackwell.
Philip Clapson
P. O. Box 38225
London NW3 5XT
United Kingdom
e-mail: philipclapson@yahoo.co.uk