
The Unity of Mind, Brain and World

Issues concerning the unity of minds, bodies and the world have often
recurred in the history of philosophy and, more recently, in scientific
models. Taking into account both the philosophical and scientific knowl-
edge about consciousness, this book presents and discusses some the-
oretical guiding ideas for the science of consciousness. The authors
argue that, within this interdisciplinary context, a consensus appears
to be emerging that the conscious mind and the functioning
brain are two aspects of a complex system that interacts with the world.
How can this concept of reality, one that includes the existence of
consciousness, be approached both philosophically and scientifically?
The Unity of Mind, Brain and World is the result of a three-year online
discussion between the authors, who present a diversity of perspectives
that tend towards a theoretical synthesis, aiming to contribute to the
insertion of this field of knowledge into the academic curriculum.

Alfredo Pereira Jr. is Adjunct Professor in the Department of
Education at the Institute of Biosciences, São Paulo State University
(UNESP).
Dietrich Lehmann is Professor Emeritus of Clinical Neurophysiol-
ogy at the University of Zurich and a Member of The KEY Institute for
Brain-Mind Research at the University Hospital of Psychiatry, Zurich.

The Unity of Mind, Brain
and World
Current Perspectives on a Science
of Consciousness

Edited by
Alfredo Pereira Jr. and Dietrich Lehmann
University Printing House, Cambridge CB2 8BS, United Kingdom

Published in the United States of America by Cambridge University Press,
New York
Cambridge University Press is part of the University of Cambridge.

It furthers the University's mission by disseminating knowledge in the pursuit of
education, learning and research at the highest international levels of excellence.
www.cambridge.org
Information on this title: www.cambridge.org/9781107026292

© Cambridge University Press 2013

This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2013
Printed in the United Kingdom by MPG Printgroup Ltd, Cambridge
A catalogue record for this publication is available from the British Library
Library of Congress Cataloguing in Publication data
The unity of mind, brain, and world : current perspectives on a science of
consciousness / edited by Alfredo Pereira Jr. and Dietrich Lehmann.
pages cm
Includes bibliographical references and index.
ISBN 978-1-107-61729-2
1. Consciousness. I. Pereira, Alfredo, Jr., editor of compilation.
BF311.U58 2013
153–dc23 2013009531
ISBN 978-1-107-02629-2 Hardback
Cambridge University Press has no responsibility for the persistence or
accuracy of URLs for external or third-party internet websites referred to
in this publication, and does not guarantee that any content on such
websites is, or will remain, accurate or appropriate.
Contents

List of figures page vii
List of tables ix
List of contributors x

Introduction 1
Alfredo Pereira Jr. and Dietrich Lehmann
1 Body and world as phenomenal contents of the brain's
reality model 7
Bjorn Merker
2 Homing in on the brain mechanisms linked to
consciousness: The buffer of the perception-and-action
interface 43
Christine A. Godwin, Adam Gazzaley, and
Ezequiel Morsella
3 A biosemiotic view on consciousness derived from
system hierarchy 77
Ron Cottam and Willy Ranson
4 A conceptual framework embedding conscious
experience in physical processes 113
Wolfgang Baer
5 Emergence in dual-aspect monism 149
Ram L. P. Vimal
6 Consciousness: Microstates of the brain's electric
field as atoms of thought and emotion 191
Dietrich Lehmann
7 A foundation for the scientific study of consciousness 219
Arnold Trehub


8 The proemial synapse: Consciousness-generating
glial-neuronal units 233
Bernhard J. Mitterauer
9 A cognitive model of language and
conscious processes 265
Leonid Perlovsky
10 Triple-aspect monism: A conceptual framework
for the science of human consciousness 299
Alfredo Pereira Jr.

Index 338
Figures

1.1 A minimal sketch of the orienting domain. page 15
1.2 Two constituents of the decision domain embedded in the
schematism of the orienting domain of Fig. 1.1. 21
1.3 Ernst Mach's classical rendition of the view through his
left eye. 28
1.4 The full ontology of the consciousness paradigm
introduced in the text. 33
2.1 Buffer of the Perception-and-Action Interface (BPAI). 65
3.1 Limitation in the perceptional bandwidth of differently
sized perceptional structures within the same environment
causes them to be partially isolated from each other. 86
3.2 The representation of a multi-scalar hierarchy. 89
3.3 Hierarchical complex layers and scaled-model/ecosystem
pairings. 93
3.4 Unifications of the first and second hyperscales. 103
3.5 Mutual observation between the model hyperscale and its
ecosystem hyperscale. 105
4.1 A first-person cognitive cycle with a naive model of
physical reality. 120
4.2 Cognitive loops with a reality belief. 124
4.3 Architecture of a human thought process that creates the
feeling of permanent objects in our environment. 131
4.4 Mapping quantum theory to the architecture of a
cognitive cycle. 140
4.5 Reality model of space and content. 142
6.1 Power spectra of EEG recordings during times when
subjects signaled experiencing visual hallucinations or
body image disturbances. 202
6.2 Sequence of maps of momentary potential distribution on
the head surface during no-task resting, at intervals of 7.8
ms (there were 128 maps per second). 204


6.3 Maps of momentary potential distribution on the head
surface during no-task resting. 205
6.4 Maps of the potential distribution on the head surface of
the four standard microstate classes during no-task
resting, obtained from 496 healthy 6 to 80-year-old
subjects (data of Koenig et al. 2002). 206
6.5 Glass brain views of brain sources that were active during
microstates associated with spontaneous or induced
visual-concrete imagery and during microstates associated
with spontaneous or induced abstract thought. 208
7.1 Dual-aspect monism. 221
7.2 The retinoid system. 224
7.3 Non-conscious creatures and conscious creatures. 226
7.4 Illusory experience of a central surface sliding over the
background. 227
7.5 Perspective illusion of size reflected in fMRI. 228
7.6 Rotated table illusion. 230
8.1 Schematic diagram of possible glial-neuronal interactions
at the glutamatergic tripartite synapse. 238
8.2 Basic pathways of information processing in a
glial-neuronal synapse. 241
8.3 Outline of an astrocyte domain organization. 245
8.4 Tritogrammatic tree. 247
8.5 Outline of an astrocytic syncytium. 251
8.6 Negations operate on a cyclic proemial relationship. 257
9.1 An example of DL perception of smile and frown
objects in noise. 271
9.2 A hierarchy of cognition. 272
9.3 The dual hierarchy of language and cognition. 275
10.1 The apparent mind and nature paradox. 306
10.2 The TAM tree. 312
10.3 Kinds of temporal relations between and within aspects of
a conscious dynamic system. 322
10.4 The conscious continuum of an episode. 327
Tables

5.1 Status of the three steps of self-as-knower under various
conditions; see also (Damasio 2010, pp. 225–240)
and our endnotes. page 163
8.1 Tritostructure. 248
8.2 Quadrivalent permutation system arranged in a
lexicographic order. 252
8.3 Example of a Hamilton loop generated by a sequence of
negation operators. 253
8.4 Guenther matrix consisting of 24 Hamilton loops. 256
8.5 Hamilton loop generated by a sequence of negation
operators. 258

Contributors

Wolfgang Baer, Ph.D., Associate Research Professor of
Information Sciences, Naval Postgraduate School, Monterey (retired)
and Research Director, Nascent Systems Inc., USA.
Ron Cottam, Ph.D., researcher at the Vrije Universiteit Brussel,
Belgium.
Adam Gazzaley, M.D., Ph.D., Associate Professor of Neurology,
Physiology, and Psychiatry at the University of California, San
Francisco, USA.
Christine A. Godwin, Master's degree student at the Department
of Psychology at San Francisco State University, USA.
Dietrich Lehmann, M.D., M.D. (Hon), Professor Emeritus of
Clinical Neurophysiology, University of Zurich. Member of The
KEY Institute for Brain-Mind Research, University Hospital of
Psychiatry, Zurich, Switzerland.
Bjorn Merker, Ph.D., independent scholar residing in Kristianstad,
Sweden.
Bernhard J. Mitterauer, M.D., Professor of Neuropsychiatry
(Emeritus) at the University of Salzburg; Volitronics-Institute for
Basic Research, Psychopathology and Brainphilosophy, Austria.
Ezequiel Morsella, Ph.D., Associate Professor of Psychology at
San Francisco State University and Assistant Adjunct Professor at the
Department of Neurology at the University of California, San
Francisco, USA.
Alfredo Pereira Jr., Ph.D., Professor at São Paulo State University
(UNESP); researcher of the National Research Council (CNPq),
and sub-coordinator of a Thematic Project of the Foundation for
Research Support of the State of São Paulo (FAPESP), Brazil.


Leonid Perlovsky, Ph.D., Visiting Scholar, Harvard University,
Athinoula A. Martinos Center for Biomedical Imaging; Principal
Research Physicist and Technical Advisor, Air Force Research
Laboratory, USA.
Willy Ranson, Dr., Ir., researcher at the Vrije Universiteit Brussel,
Belgium.
Arnold Trehub, Ph.D., Adjunct Professor of Psychology at the
University of Massachusetts at Amherst, USA.
Ram L. P. Vimal, Ph.D., Amaravati-Hiramani Professor (Research)
at Vision Research Institute, 25 Rita St., Lowell, MA 01854, USA,
and Dristi Anusandhana Sansthana at: (1) Ahmedabad, India;
(2) Pendra, C.G., India; and (3) Betiahata, Gorakhpur, India.
Introduction

Alfredo Pereira Jr. and Dietrich Lehmann

In this book we present and discuss, in ten chapters, a range of views,
theories, and scientific approaches that concern the phenomenon of con-
sciousness. The chapters draw a broad panorama of the diversity of
thoughts that characterize this field of studies. While preserving the
variety of ideas, this book also makes progress towards a systematic
approach that aims to support consciousness science as a general disci-
pline in undergraduate courses, and as the subject of specialized graduate
courses dedicated to completing the training of professionals from different
fields such as neuroscience, sociology, economics, physics, psychology,
philosophy, and medicine.
A consensus appears to be emerging that the conscious
mind and the functioning brain are two aspects of a complex system that
interacts with the world. How could this concept of reality, one that
includes the existence of consciousness, be approached philosophically
and scientifically? Contrary to a majority of publications in this field,
this book takes into account both philosophical and interdisciplinary
scientific knowledge about consciousness. Our ten chapters, resulting
from a three-year online discussion between the authors, present a
diversity of perspectives that tend towards a theoretical synthesis.
Issues concerning the unity of minds, bodies, and the world have often
recurred in the history of philosophy and, more recently, in scientific
models. For instance, in classical Greek philosophy Plato proposed a
dualistic view of reality, as composed of a world of ideas and a material
world of appearances. The connection between the two worlds was made
by the Demiurge. Plato's disciple Aristotle criticized such a dualism and
proposed a monistic view, Hylomorphism, according to which ideas do not
exist in a separate world. He conceived ideas as embodied in a material
substrate in nature, composing the form of things and processes. Form
and matter would work together to shape reality: forms are responsible
for the determination of kinds of beings (e.g., biological species), while
matter is the principle of individuation (e.g., what makes an individual
different from others belonging to the same species).

In occidental philosophy and culture, until quite recently the most
influential theory about the relationship of mind and body has been Sub-
stance Dualism, proposed by Descartes. The human being, according
to this concept, is composed of an immaterial thinking substance and
a material body, putatively connected by the pineal gland. Interestingly,
one of Descartes' followers, Spinoza, repeated Aristotle's move towards
Monism, conceiving of nature as the totality of all that exists with two
different but inseparable aspects: the mental and the physical. One of the
consequences of this move concerns the status of feelings and emotions:
instead of mere perturbations to the flow of clear and distinct ideas, they
become central aspects of human personality. This view has re-emerged
particularly in the work of Antonio Damasio (2003) who explicitly rec-
ognized the influence of Spinoza.
Towards the end of the twentieth century, the appearance of cogni-
tive sciences supported several approaches to understanding minds as
physical systems. Many of these approaches assumed that minds are
computational functions that could be instantiated not only in brains but
also in other material systems such as mechanical robots. Set against this
reductionist approach, it was argued that the conscious mind is more
than computation, including experiences of qualitative features (Jackson
1986) and a first-person perspective (the lived experience of what it's
like to be in a given condition; Nagel 1974).
Defenders of reductionist views were then confronted with what
Chalmers (1995, 1996) called the "hard problem of consciousness",
here summarized in two statements: (1) Conscious processes supervene
on (derive from) physical processes, but (2) conscious experiences can-
not be causally explained (or deduced) from currently known physical
processes, laws, and principles. The hard problem builds on the work
of a generation of philosophers including Nagel and Jackson who
addressed the mind-brain problem. Part (1) above was extensively devel-
oped by Kim (1993), while part (2) was deeply discussed by Levine
(1983).
Since Chalmers' classical paper of 1995, many attempts to solve the
problem have been made. Emphasis on conscious phenomenology as
advanced by Varela et al. (1991) has revealed the richness of our experi-
ences, thus undermining the reductionist approach characteristic of Western
sciences in the domain of consciousness studies. Moving one step
further, from phenomenology to ontology, we claim that in a monist
perspective it is not necessary to look for causal explanations of how the
physical could generate the mental, since both mental and physical are two
aspects of the same underlying reality. Beyond the hard problem, other
questions about consciousness can be posed which are more amenable
to a scientific approach. Both the physical and mental aspects of reality
are fundamental, but in evolution the appearances of mentality and con-
sciousness require the operation of specific mechanisms (Lehmann 1990,
2004). What are these mechanisms? Which aspects of phenomenology
are they expected to explain? How could the inherently different first-
and third-person perspectives be about the same world? These are the
main questions raised in the book.
The group of authors contributing to this volume has been interacting
in a variety of ways, in a common effort to systematize the philosophical
foundations of current scientific approaches to consciousness. Some of us
have participated in meetings aimed at promoting a scientific approach
to consciousness, such as the series Towards a Science of Consciousness,
and the annual meetings of the Association for the Scientific Study of
Consciousness. A public discussion in Nature Network's forum on Brain
Physiology, Cognition and Consciousness in 2008 was an opportunity to
elaborate the very definition of consciousness (Pereira Jr. and Ricke
2009). A series of private discussions in 2009 led to a collective publi-
cation about current theoretical approaches (Pereira Jr. et al. 2010). In
2010, the Consciousness Researchers Forum was formed as a private group
in the Nature Network to support the organization of the present book.
Its chapters reflect the rapport of philosophers and scientists from dif-
ferent disciplines, collectively discussing philosophical issues crucial for
the establishment of a theoretical framework for the emerging field of
Consciousness Science.
The chapters cover varied perspectives on what a science of conscious-
ness would be. What they have in common is a search for unity, in terms
of similarities: some authors look for kinds of brain activity that are ana-
logous to mental activities, while others look for perception-action cycles
by which mental activity reflects the structure of the world we live in.
Philosophically, these views share the conception of Dual-Aspect Monism
proposed by Max Velmans (2009), the idea that mind and brain activities
derive from a common ground, as well as Reflexive Monism (proposed
by the same author), the idea that the mind and the world reflect each
other.
Our guiding line is that the phenomenon of consciousness is a com-
plex result of the interplay of series of events, which can be organized
on two dimensions, corresponding respectively to Reflexive and Dual-
Aspect Monism: a horizontal, mirroring interaction of brains, bod-
ies and their world, and a vertical, complementary, non-reductive
combination of micro and macro processes to form conscious experi-
ences. In some of our chapters the expressions "Double-Aspect" or
"Triple-Aspect" Monism are used to stress this complementar-
ity of aspects, because the term "Dual" has the unwanted connotation of
Dualism.

According to this line of reasoning, our Chapter 1, written by Bjorn
Merker, presents an ontological framework for the interaction of brains,
bodies and their world, based on the idea of a dynamical interface con-
taining three selection processes. This dynamical activity both generates
consciousness and makes the process of generation opaque to introspec-
tion.
In Chapter 2, Christine Godwin, Adam Gazzaley, and Ezequiel
Morsella report and discuss empirical findings in support of the the-
sis that consciousness is necessary for the execution of coherent actions
by animals that interact with their world in complex modalities (such as those
described by Merker).
Chapter 3, by Ron Cottam and Willy Ranson, starts from the view
that the interplay of events that generate consciousness involves multiple
scales of description. An approach to meaningful signaling would require
the tools of biosemiotics, a discipline derived from the work of Charles
Sanders Peirce.
In Chapter 4, Wolfgang Baer presents the view that conscious episodes
are contained in physical processes and analyzes the metaphysics of quan-
tum theory to show how cognitive operations can be accomplished within
a single physical framework.
In Chapter 5, Ram Vimal argues that the construction of conscious
episodes requires specific mechanisms for the matching of patterns which
are found in biological systems (and possibly replicable in other kinds of
systems), having the function of transformation of micro potentialities
into macro actualities.
In Chapter 6, Dietrich Lehmann reviews the state dependency of con-
sciousness. He reports that brain electrical activity is organized in brief,
split-second packages, the functional microstates that are concatenated
by rapid transitions, and that these temporal microstates of the brain elec-
tric field incorporate the conscious experiences as atoms of thought and
emotions. In his framework, consciousness is the inner aspect while the
electric field is the outer aspect of the brains momentary functional state.
Different microstate field configurations incorporate different modes of
thought and emotion; different microstate sequences incorporate differ-
ent strategies of thinking.
In Chapter 7, Arnold Trehub formulates theoretical foundations for
a science of consciousness, based on a working definition of the term
and a basic methodological principle. In addition, Trehub proposes
that neuronal activity within a specialized system of brain mechanisms,
called the retinoid system, can explain consciousness and its phenomenal
content.
In Chapter 8, Bernhard Mitterauer argues, based on the philosophy of
Gotthard Guenther, that the dialogic structure of subjectivity requires a
polymodal mechanism that can be modeled in terms of neuro-astroglial
interactions.
In Chapter 9, Leonid Perlovsky claims that conscious processes tran-
scend formal logics, thus explaining the act of decision-making (free
will), and elaborates on a model aimed at covering the main dimensions
and activities of a human mind.
In Chapter 10, Alfredo Pereira Jr. argues for Triple-Aspect Monism
(TAM), a philosophical position holding that reality continuously unfolds
itself as the physical world, the informational world, and the conscious
world. TAM is claimed to be an adequate theoretical framework for the
science of consciousness.
These chapters are the rewarding result of a fruitful cooperation that
was a pleasure to experience. We as editors are very thankful to the
authors for the creative interaction. We would like to express our grat-
itude to Cambridge University Press, in particular to Hetty Marx and
Carrie Parkinson, to our kind reviewers who displayed the wisdom of
criticizing the weak points while calling attention to the strengths, to
all who collaborated to make the project a reality, especially Chris
Nunn and Kieko Kochi, to Erich Blaich's family for the permission
to use his painting on the cover of the book, and last but not least to
Maria Alice Ornellas Pereira and Martha Koukkou for their continued
support.

REFERENCES
Chalmers D. J. (1995). Facing up to the problem of consciousness. J Consciousness
Stud 2:200–219.
Chalmers D. (1996). The Conscious Mind. New York: Oxford University Press.
Damasio A. (2003). Looking for Spinoza: Joy, Sorrow, and the Feeling Brain.
Orlando, FL: Harcourt.
Jackson F. (1986). What Mary didn't know. J Philos 83:291–295.
Kim J. (1993). Supervenience and Mind. Cambridge University Press.
Lehmann D. (1990). Brain electric microstates and cognition: The atoms of
thought. In John E. R. (ed.) Machinery of the Mind. Boston: Birkhauser, pp.
209–224.
Lehmann D. (2004). Brain electric microstates as building blocks of mental
activity. In Oliveira A. M., Teixeira M. P., Borges G. F., and Ferro M. J.
(eds.) Fechner Day 2004. Coimbra: International Society for Psychophysics,
pp. 140–145.
Levine J. (1983). Materialism and qualia: The explanatory gap. Pac Philos Quart
64:354–361.
Nagel T. (1974). What is it like to be a bat? Philos Rev 83(4):435–450.
Pereira Jr. A. and Ricke H. (2009). What is consciousness? Towards a preliminary
definition. J Consciousness Stud 16:28–45.
Pereira Jr. A., Edwards J., Lehmann D., Nunn C., Trehub A., and Vel-
mans M. (2010). Understanding consciousness: A collaborative attempt
to elucidate contemporary theories. J Consciousness Stud 17(5–6):213–219.
Varela F. J., Rosch E., and Thompson E. (1991) The Embodied Mind: Cognitive
Science and Human Experience. Cambridge, MA: MIT Press.
Velmans M. (2009). Understanding Consciousness, 2nd Edn. London: Routledge.
1 Body and world as phenomenal contents
of the brain's reality model

Bjorn Merker

1.1 Introduction 7
1.2 Stratagems of solitary confinement 9
1.3 A dual-purpose neural model 11
1.3.1 The orienting domain: A nested remedy for the liabilities
of mobility 11
1.3.2 The decision domain: Triple play in the behavioral final
common path 14
1.3.3 A forum for the brains final labors 18
1.3.4 A curious consequence of combined implementation 22
1.4 Inadvertently conscious 24
1.5 Conclusion: A lone but real stumbling block on the road to a science
of consciousness 32

I am indebted to Louise Kennedy for helpful suggestions regarding style and presentation, and also to Wolfgang Baer, Henrik Malmgren, and Ezequiel Morsella for helpful comments on matters of content.

1.1 Introduction
The fact that we find ourselves surrounded by a world of complex objects
and events directly accessible to our inspection and manipulation might
seem too trivial or commonplace to merit scientific attention. Yet here,
as elsewhere, familiarity may mask underlying complexities, as we dis-
cover when we try to unravel the appearances of our experience in causal
terms. Consider, for example, that the visual impression of our surround-
ings originates in the pattern of light and darkness projected from the
world through our pupils onto the light sensitive tissue at the back of our
eyes. On the retina a given sudden displacement of that projected image
behaves the same whether caused by a voluntary eye movement, a passive
displacement of the eye by external impact, or an actual displacement
of the world before the still eye. Yet only in the latter two cases do we
experience any movement of the world at all. In the first case the world
remains perfectly still and stable before us, though the retinal image has
undergone the selfsame sudden displacement in all three cases. But that
means that somewhere between our retina and our experience, the facts
of self-motion have been brought to bear on retinal information to deter-


mine our experience. That in turn implies that the reality we experience
is more of a derivative and synthetic product than we ordinarily take it
to be.
That implication only grows as we pursue the fate of retinal patterns
into the brain. There, visual neuroscience discloses not only a diverse
set of subcortical destinations of the optic tract, but an elaborate cortical
system for visual analysis and synthesis. Its hierarchical multi-map orga-
nization for scene analysis and visuospatial orientation features functional
specialization by area (Lennie 1998) and functional integration through a
pattern of topographically organized bidirectional connectivity that typi-
cally links each area directly with a dozen others or more (Felleman and
Van Essen 1991).
From the point of view of our experience, a remarkable fact about this
elaborate system is the extent to which we are oblivious to much of its
busy traffic. As we go about our affairs in a complex environment we
never face half-analyzed objects at partial way stations of the system, and
we never have to wait even for a moment while a scene segmentation
is being finished for us. We have no awareness of the multiple partial
operations that allow us to see the world we inhabit. Instead it is only
the final, finished products of those operations that make their way into
our consciousness. They do so as fully formed objects and events, in the
aggregate making up the interpreted and typically well-understood visual
scene we happen to find ourselves in.
So compelling is this finishedness of the visual world we inhabit that
we tend to take it to be the physical universe itself, though everything
we know about the processes of vision tells us that what we confront in
visual experience cannot be the physical world itself but, rather, must
be an image of it. That image conveys veridical information about the
world and presents some of the world's properties to us in striking and
vivid forms, but only to the extent that those properties are reflected in
that tiny sliver of the electromagnetic spectrum to which our photore-
ceptors are sensitive, and which we therefore call visible light. The fact
that this tiny part of the spectrum serves as medium for the entirety of
our visual world suggests that somehow the experienced world lies on our
side of the photoreceptors. That would mean that what we experience
directly is an image of the world built up as an irremediably indirect
and synthetic internal occurrence in the brain. But where then is that
brain itself, inside of which our experienced world is supposedly syn-
thesized on this account of things? And indeed, does not the location
of our retina appear to lie inside this world we experience rather than
beyond it?

These legitimate questions bring us face-to-face with the problem of
consciousness in its full scope. That problem, they remind us, is not
confined to accounting for things like the stirrings of thoughts in our
heads and feelings in our breasts, what we might call our "inner life". It
extends, rather, to everything that fills our experience, among which this
rich and lively world that surrounds us is not the least. In fact, as we shall
see, there are attributes of this experienced world that provide essential
clues to the nature of consciousness itself. It may even be that short of
coming to grips in these terms with the problem of the experienced world
that surrounds us, the key to the facts of our inner life will elude us.

1.2 Stratagems of solitary confinement


Our visual system not only provides us with robust conscious percepts
such as the sight of a chair or of storm clouds gathering on the horizon,
it presents them to us in a magnificently organized macro-structure,
the format of our ordinary conscious waking reality. Our mobile body
is its ever-central object, surrounded by the stable world on all sides.
We look out upon that world from a point inside our body through a
cyclopean aperture in the upper part of the face region of our head. This
truly remarkable nested geometry, in three dimensions around a central
perspective point, is a fundamental organizing principle of adult human
consciousness (Hering 1879; Mach 1897; Roelofs 1959; Merker 2007a,
pp. 72–73). It requires explanation despite, or rather exactly because,
of its ever-present familiarity as the framework or format of our sensory
experience. As such it provides unique opportunities for analysis, because
it offers specificities of structure whose arrangement simply cries out for
functional interpretation.
The key to that interpretation, I suggest, is the predicament of a brain
charged with guiding a physical body through a complex physical world
from a position of solitary confinement inside an opaque and sturdy skull.
There it has no direct access to either body or world. From inside its bony
prison, the brain can inform itself about surrounding objects and events
only indirectly, by remote sensing of the surface distribution of the world's
impact on a variety of receptor arrays built into the body wall. Being
fixed to the body, those sensors move with it, occasioning the already
mentioned contamination of sensory information about the world by the
sensory consequences of self-motion. But even under stable, stationary
circumstances, primary sensory information is not uniquely determined
by its causes in the world. Thus an ellipsoid retinal image may reflect an
oval object seen head-on, or a circular one tilted with respect to our line
of sight, to give but one example of many such problems occasioned by
the brain's indirect access to the world (Helmholtz 1867; Witkin 1981,
pp. 29–36).
Nor is the brain's control of its body any more direct than is its access
to circumstances in the world on which that control must be based.
Between the brain and the body movements it must control lie sets of
linked skeletal joints, each supplied by many muscles to be squirted
with acetylcholine through motor nerves in a sequence and in amounts
requisite to match the resultant movement to targets in the world. In
such multi-joint systems, degrees of freedom accumulate across linked
joints (not to mention muscles). A given desired targeting movement
accordingly does not have a unique specification either in terms of the
joint kinematics or the muscle dynamics to be employed in its execution
(Bernstein 1967; Gallistel 1999, pp. 6–7).
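
A minimal worked example can make this under-determination concrete. The sketch below assumes a hypothetical two-link planar arm with arbitrarily chosen link lengths (none of this is drawn from the chapter); it shows that even this simplest of linkages admits two distinct joint configurations, "elbow-up" and "elbow-down", for one and the same hand position, so the inverse mapping from target to joint angles is not unique, and every added joint enlarges the family of solutions further.

```python
import math

L1, L2 = 0.3, 0.25          # assumed link lengths in meters (illustrative only)

def two_link_ik(x, y):
    """Return both joint solutions (shoulder, elbow) reaching hand position (x, y).

    Illustrates Bernstein's degrees-of-freedom point: the mapping from joint
    angles to hand position is many-to-one, so its inverse is not unique.
    """
    d2 = x * x + y * y
    cos_elbow = (d2 - L1 ** 2 - L2 ** 2) / (2 * L1 * L2)
    if abs(cos_elbow) > 1:
        raise ValueError("target out of reach")
    solutions = []
    for elbow in (math.acos(cos_elbow), -math.acos(cos_elbow)):  # elbow-down / elbow-up
        shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                                 L1 + L2 * math.cos(elbow))
        solutions.append((shoulder, elbow))
    return solutions

for shoulder, elbow in two_link_ik(0.35, 0.20):
    # forward-kinematics check: both configurations land on the same target
    hx = L1 * math.cos(shoulder) + L2 * math.cos(shoulder + elbow)
    hy = L1 * math.sin(shoulder) + L2 * math.sin(shoulder + elbow)
    print(f"shoulder={shoulder:+.3f} rad, elbow={elbow:+.3f} rad -> hand=({hx:.3f}, {hy:.3f})")
```
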
On both the sensory and motor sides of its operations the brain is faced,
in other words, with under-specified or ill-posed problems in the sensing
and control tasks it must discharge. We know, nevertheless, that somehow
it has managed to finesse these so-called inverse problems, because we
are manifestly able to function and get about competently even in quite
complex circumstances. The brain has in fact mastered its problems in
this regard to such an extent that it allows us to remain oblivious to the
difficulties, to proceed with our daily activities in a habitual stance of
naive realism. We look, and appear to confront the objects of the world
directly. We decide to reach for one or another of them, and our arm
moves as if by magic to land our hand and fingers on the target. Much
must be happening behind the scenes of our awareness to make such
apparent magic possible.
Reliable performance in inherently underconstrained circumstances is
only possible on the basis of the kind of inferential, probabilistic, and
optimization approaches to which engineers resort when faced with sim-
ilar problems in building remote controllers for power grids or plant
automation (McFarland 1977). In such approaches a prominent role is
played by so-called forward and inverse models of the problem domain to
be sensed or controlled, and they have been proposed to play a number of
roles in the brain as well (Kawato et al. 1993; Wolpert et al. 1995; Kawato
1999). In effect they move the problem domain inside the brain (note:
this does not mean "into our inner life") in the form of a neural model,
in keeping with a theorem from the heyday of cybernetics stating that an
optimal controller must model the system it controls (Conant and Ashby
1967).
There is every reason to believe that a number of these neural models
contribute crucially to shaping the contents of our experience. They may
be involved, for example, in the cancellation of sensory consequences of
self-produced movement to give us the stability of our experienced world
despite movements of our eyes, head, and body (Dean et al. 2004). At
the same time, there is no a priori reason to think that these neural
models themselves are organized in a conscious manner. The proposal
that they are has been made on the basis of their essential contribution
to sophisticated neural processes, rather than by reference to some prin-
ciple that would make these and not other sophisticated neural processes
conscious (Kawato 1997). There is, however, a potential functional niche
for casting one such neural modelling device in a format yielding a con-
scious mode of operation, partial versions of which have been sketched
in previous publications of mine (Merker 2005, 2007a).
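
A minimal computational sketch may help fix ideas here. The fragment below is purely schematic and not drawn from any of the cited models: it reduces the sensory array to a single scalar of "retinal slip" and the forward model to an identity gain, and shows how subtracting the predicted consequences of an efference copy from the actual reafference leaves only externally caused motion to be attributed to the world.

```python
def forward_model(efference_copy: float) -> float:
    """Predicted retinal slip caused by one's own gaze command.

    Modeled, purely for illustration, as an identity mapping: a 10-degree
    rightward eye rotation predicts a 10-degree leftward image displacement.
    """
    return -efference_copy

def perceived_world_motion(retinal_slip: float, efference_copy: float) -> float:
    """Exafference = reafference minus the forward model's prediction."""
    return retinal_slip - forward_model(efference_copy)

# Case 1: voluntary saccade, stationary world -> no perceived world motion.
print(perceived_world_motion(retinal_slip=-10.0, efference_copy=10.0))   # 0.0

# Case 2: the same retinal slip with no motor command issued (eye passively
# displaced, or the world itself moved) -> the displacement is attributed
# to the world, matching the three cases described in the opening section.
print(perceived_world_motion(retinal_slip=-10.0, efference_copy=0.0))    # -10.0
```
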

1.3 A dual-purpose neural model


Two different functional constructs are joined in the present proposal
regarding the role and organization of the conscious state. One intro-
duces a comprehensive central solution to the captive brains sensors-in-
motion problems in the form of a dedicated orienting domain. The
other achieves savings in behavioral resource expenditure by exploiting
dependencies among the brains three principal task clusters (namely tar-
get selection, action selection, and motivational ranking) through con-
straint satisfaction among them in a decision domain. Each of the
functional problem constellations addressed by these two domains may
be amenable to a variety of piecemeal neural solutions, in which case
neither of them requires a conscious mode of organization. The central
claim of this paper, and the key defining concept of the approach to
the conscious state it is proposing, is that when comprehensive spatially
mapped solutions to both domains are combined in a single neural mechanism,
an arrangement results which defines a conscious mode of operation. The two
functional domains have not been treated explicitly as such in the tech-
nical literature, so I begin by giving a thumbnail sketch of each before
outlining the prospects and consequences of combining the two. Both
concern movement and the immediate run-up to the brain's decision
about the very next overt action to take, but they relate to it in very
different ways.

1.3.1 The orienting domain: A nested remedy for the liabilities of mobility
The already mentioned contamination of information about the world by
the sensory consequences of self-motion is not the brains only problem
caused by bodily mobility. The body moves not only with respect to the
world, but relative to itself as well. The brain's sensor arrays come in
several modalities differently distributed on the body and move with its
movements. Its twisting and flexing cause sensors to move with respect to
one another, bringing the spatial information they convey out of mutual
alignment. In the typical case of gaze shifts employing combined eye and
head movements, vision and audition are displaced with respect to the
somatosensory representation of the rest of the body. To this is added
misalignment between vision and audition when the eyes deviate in their
orbits (Sparks 1999). The brains sensors-in-motion problem combines,
in other words, aspects of sensor fusion (Mitchell 2007) with those of
movement contamination of sensor output (von Holst and Mittlestaedt
1950).
A number of local solutions or piecemeal remedies for one or another
part of this problem complex are conceivable. Insects, for example,
rely on a variety of mechanisms of local feedback, gating, efference
copy, inter-modal coordinate transformations, and perhaps even for-
ward models to this end (see examples reviewed by Webb 2004; also
Altman and Kien 1989). More centralized brains than those of insects
offer the possibility of re-casting the entire sensors-in-motion problem
in the form of a comprehensive, multi-modal solution. In so doing,
the fundamental role of gaze displacements in the orchestration of
behavior can be exploited to simplify the design of the requisite neural
mechanism.
The first sign of evolving action in the logistics of the brain's con-
trol of behavior is typically a gaze movement. Peripheral vision suffices
for many purposes of ambient orientation and obstacle avoidance (Tre-
varthen 1968; Zettel et al. 2005; Marigold 2008). However, when loco-
motion is initiated or redirected towards new targets or planned while
traversing complex terrain, the gaze leads the rest of the body by fixating
strategic locations ahead (Marigold and Patla 2007). This is even more
so for reaching and manipulative activity, down to their finely staged
details. Fine-grained behavioral monitoring of eyes, hand, and fingers
during reaching and manipulation in the laboratory has disclosed the
lead given by the gaze in such behavior (Johansson et al. 2001). Arm and
fingers follow the gaze as if attached to it by an elastic band. In fact the
coupling of arm or hand to the gaze appears to be the brain's default
mode of sensorimotor operation (Gorbet and Sergio 2009; Chang et al.
2010).
The centrality of gaze shifts, also called orienting movements, in the
orchestration of behavior makes them the brains primary and ubiquitous
output. The gaze moves through combinations of eye and head move-
ments and these can be modelled to a first approximation as rotatory
displacements of eyes in orbit and head on its cervical pivot, using a con-
venient rotation-based geometry (see Masino 1992; Smith 1997; Merker
2007a, p. 72).¹ This opens the possibility of simplifying the transforma-
tions needed to manage a good portion of movement-induced sensory
displacement and misalignment of sensory maps during movement by
casting them in a spatial format adapted to such a rotation-based geom-
etry. The orienting domain is a term introduced here for a hypothetical
format that does so by actually nesting a map of the body within a map
of the world, concentrically around an egocentric conjoint origin.
Let the brain, then, equip itself with a multi-modal central mech-
anism in the form of neural coordinate space which nests a spatially
mapped model of the body within a spatially mapped model of the entire
enclosing world surround. Within this spatial framework, let all global
sensory displacement be reflected as body map displacements (of one kind
or another) relative to the model world surround, which remains station-
ary (Brandt et al. 1973). Also, let all global mismatches between sensory
maps caused by body movement be reflected in displacement relative
to one another of sensor-bearing parts of the model body (such as eyes
relative to head/ears). This artificial allocation of movement between
model world and model body has its ultimate rationale in the clustering
of correlated sensory variances during random effector movement (for
which see Dean et al. 2004; Philipona et al. 2004). It presupposes a com-
mon geometric space for body and world within which their separation
is defined, but not necessarily a metrical implementation of its mode of
operation (Thaler and Goodale 2010).
To exploit the simplifying geometry of nested rotation-based transfor-
mations, the origin of the geometric space shared by body and world
must be lodged inside the model body's head representation, that is,
the space must be egocentric.² During the ubiquitous gaze shifts of
orienting, this egocentric origin remains fixed with respect to the world,
while the head map turns around it, interposed between egocenter and sta-
tionary model world surround. Translatory and other locomotion-related
sensory effects would be registered in the model space as continuous
replacement of the mapped contents of the world map as new portions
of the physical world come within range of receptor arrays during loco-
motion. Note the purely geometric terms in which these map operations
are introduced. They imply no commitment regarding the manner in
which they might be implemented neurally, whether through gain-fields
or other means. Figure 1.1 illustrates the principle of the orienting
domain in minimal outline.

¹ Rotation-based geometry is a non-committal shorthand for a geometry of spatial reference that implements a nested system relating the rotational kinematics of eyes and head to target positions in the world (see Smith 1997, for a useful introduction to rotational kinematics related to the eye and head movements of the vestibulo-ocular reflex, and Thaler and Goodale 2010, for alternative geometries that might implement spatial reference). Much remains to be learned about the neural logistics of movement-related spatial reference, and the role of, say, gain fields in its management (Andersen and Mountcastle 1983; Chang et al. 2009; see also Cavanagh et al. 2010).

² This basic egocentricity does not exclude an adjunct oculocentric specialization for the eyes, as briefly touched upon later in the text. The center of rotation of the eye does not coincide with that of the head. Therefore, the empirically determined visual egocenter (single despite the physical fact of the two eyes) lies in front of that of the auditory egocenter (far more cumbersome to chart empirically than the visual one; see Cox 1999; Neelon et al. 2004). The ears, of course, are fixed to the head and move with it. The egocenter in Figs. 1.1, 1.2, and 1.4 therefore lies closer to that of audition than to that of vision, a placement motivated by the fact that the limits of the visual aperture are determined largely by the bony orbit of the eyes, which is fixed to the head. Thus, a 45° rightward eye deviation extends the visual field by far less than 45° to the right.
So far this model scheme is only an attempt to manage the sensors-
in-motion problem by segregating its natural clusters of correlated vari-
ances into the separate zones of mobile and deformable body on the one
hand and enclosing movement-stabilized world surround on the other.
Needless to say this model sketch employing rotation-based nesting is
a bare-bones minimum only. To accommodate realistic features such as
limb movements it must be extended through means such as supplemen-
tal reference frames centered on, say, shoulder or trunk (see McGuire
and Sabes 2009). These might be implemented by yoking so-called gain-
fields associated with limbs to those of the gaze (Chang et al. 2009), which
would directly exploit the leading behavioral role of the gaze emphasized
in the foregoing. However implemented, the centrality and ubiquity of
gaze movements in the orchestration of behavior means that a simplifi-
cation of the brain's sensors-in-motion problem is available in the nested
rotation-based format proposed for the orienting domain.
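
As a purely geometric illustration of this nesting (making no commitment about neural implementation, in keeping with the caveats above), the sketch below tracks a single world target across a hypothetical gaze shift: the head map rotates about the egocenter while the target's entry in the stationary world zone stays put, and the head-relative direction of the target is recovered by applying the inverse of the current head rotation. All coordinates and angles are invented for illustration.

```python
import numpy as np

def rot_z(angle_rad: float) -> np.ndarray:
    """Rotation about the vertical axis through the egocenter."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# A target ahead and slightly to the left, entered in the stationary world zone.
target_world = np.array([2.0, 0.5, 0.0])

head_orientation = 0.0                      # body-zone state: head rotation about egocenter
for gaze_shift in (0.0, np.deg2rad(20)):    # then a 20-degree leftward orienting movement
    head_orientation += gaze_shift
    # The sensory (head-relative) direction of the target changes with the gaze shift...
    target_head = rot_z(-head_orientation) @ target_world
    print("head-relative:", np.round(target_head, 3))
    # ...but its entry in the world zone is left untouched: the movement has been
    # allocated entirely to the body map, so the modeled world remains stationary.
    print("world-zone entry:", target_world)
```
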

1.3.2 The decision domain: Triple play in the behavioral final
common path
Given a workable solution to the sensors-in-motion problem, the brain
as controller faces the over-arching task of ensuring that behavior (the
time series of bodily locations and conformations driven by skeletal mus-
cle contractions) comes to serve the many and fluctuating needs of the
body inhabited by the brain. In so doing it must engage the causal struc-
ture of a world whose branching probability trees guarantee that certain
options will come to exclude others (Shafer 1996). With branches trailing
off into an unknown future there is, moreover, no sure way of determining
the ultimate consequences of choosing one option over another. Poten-
tial pay-offs and costs, and the various trade-offs involved in alternate
courses of action, are therefore encumbered with a measure of inherent
uncertainty. Yet metabolic facts alone dictate that action must be taken,
and choices therefore made, necessarily on a probabilistic basis.

[Figure 1.1: nested zones labeled WORLD and BODY around an egocenter, with a visual aperture; see the caption below.]

Fig. 1.1 A minimal sketch of the orienting domain. A centrally located
neural space organized as a rotation-based geometry is partitioned into
three nested zones around the egocentric origin: egocenter, body zone
(here represented by the head alone), and world zone. The latter two
house spatial maps supplied with veridical content reflecting circum-
stances pertaining to the physical body and its surrounding world,
respectively. In this mapping, global sensory motion is reflected as move-
ment of the body alone relative to the world, indicated by curved arrows
(such as in this case might map, and hence compensate for, sen-
sory effects of a gaze movement). The rotation-based transformations
that supply the means for such stabilization of the sensory world dur-
ing body movement require the geometric space to be anchored to an
origin inside the head representation of the body zone. The device
"visual aperture" marks the cyclopean aperture discussed in the penulti-
mate section of the text. Auditory space, in contrast to that of vision,
encompasses the entire world zone, including its shaded portion. Image
by Bjorn Merker licensed under a Creative Commons Attribution-
NonCommercial-NoDerivs 3.0 Unported License.

The world we inhabit is not only spatially heterogeneous in the sense


that things like shelter, food, and mates are often not to be found in
the same place, it is temporally lively such that the opportunities it
affords come and go, often unpredictably so. Since the needs to be filled
are diverse and cannot all be met continuously, they change in relative
strength and therefore priority over time, and compete with one another
for control over behavior (McFarland and Sibly 1975). Few situations
are entirely devoid of opportunities for meeting alternate needs, and one
or more alternatives may present themselves at any time in the course of
progress toward a prior target. The utility of switching depends in part
on when in that progress an option makes its appearance. Close to a goal,
it often makes sense to discount the utility of switching (McFarland and
Sibly 1975), unless, of course, a windfall is on offer. Keeping options
open can pay, but the capacity for doing so must not result in dithering.
The liveliness of the world sets the pace of the controllers need to
update assessments, and saddles it with a perpetual moment-to-moment
decision process regarding what to do next. In the pioneering analysis
just cited, McFarland and Sibly (1975) introduced the term behavioral
final common path for a hypothetical interface between perceptual, motor,
and motivational systems engaged in a final competitive decision process
determining moment-to-moment behavioral expression. In a previous
publication I sketched out how savings in behavioral resource expenditure
are available by exploiting inherent functional dependencies among the
brains three principal task clusters (Merker 2007a), and to do so the
brain needs such an interface, as we shall see.
The three task clusters consist of selection of targets for action in
the world (target selection), the selection of the appropriate action for a
given situation and purpose (action selection), and the ranking of needs
by motivational priority (motivational ranking). Though typically treated
as separate functional problems in neuroscience and robotics, the three
are in fact bound together by intimate mutual dependencies, such that
a decision regarding any one of them is seldom independent of the state
of the others (Merker 2007a, p. 70). As an obvious instance, consider
prevailing needs in their bearing on target selection. More generally,
bodily action is the mediator between bodily needs and opportunities in
the world. This introduces the on-going position, trajectory, and energy
reserves of the body and its parts as factors bearing not only on target
selection (see Kording and Wolpert 2006) but also on the ranking of
needs. Thus the three task clusters are locked into mutual dependencies.
The existence of these dependencies means that savings are available
by subjecting them to an optimizing regime. To do so, they must be
brought together in a joint decision space in which to settle trade-offs,
conflicts, and synergies among them through a process amounting to
multiple constraint satisfaction in a multi-objective optimization frame-
work (for which, see Pearl 1988; Tsang 1993). Each of the three task
clusters is multi-variate in its own right and must be interfaced with
the others without compromising the functional specificities on which
their mutual dependencies turn. Those specificities include, for sensory
systems, the need to be represented at full resolution of sensory detail,
since on occasion subtle sensory cues harbor momentous implications
for the very next action (say, a hairline crack in one of the bars of a cage
housing a hungry carnivore).
Moreover, constraint-settling interactions among the three must occur
with swiftness in real time. It is over the ever-shifting combination of states
of the world with the time series of bodily locations and conformations
under ranked and changing needs that efficiency gains are achievable.
Hence the need for a high-level interface late in the run-up to behavioral
expression, that is, McFarland and Sibly's (1975) behavioral final common
path.
In the aggregate these diverse requirements for a constraint satisfac-
tion interface between target selection, action selection, and motivational
ranking may appear daunting, but they need not in principle be beyond
the possibility of neural implementation. As first suggested by Geoffrey
Hinton (1977), a large class of artificial so-called neural networks are
in effect performing multiple constraint satisfaction (Rumelhart et al.
1986). Algorithmically considered, procedures for constraint satisfaction
that rely on local exchange of information between variables and con-
straints (such as "survey propagation") are the ones that excel on the
most difficult problems (Achlioptas et al. 2005). They are accordingly
amenable to parallel implementation (Mezard and Mora 2009), suggest-
ing that the problem we are considering is neurally tractable.
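
A toy enumeration can convey what such a joint decision space computes, though not how a parallel neural implementation would compute it. In the sketch below every target, action, need, utility value, and hard constraint is invented for illustration; a serious implementation would use an iterative message-passing scheme such as the survey propagation mentioned above rather than brute-force scoring.

```python
import itertools

targets = {"water_hole": 0.7, "fruit_tree": 0.5, "shelter": 0.3}   # salience (assumed)
actions = {"approach": 0.9, "wait": 0.2}                           # feasibility (assumed)
needs   = {"thirst": 0.8, "hunger": 0.4, "safety": 0.1}            # motivational rank (assumed)

# Hard constraints: need-target pairings foreclosed by "settled fact" in the model.
incompatible = {("thirst", "fruit_tree"), ("thirst", "shelter"),
                ("hunger", "water_hole"), ("hunger", "shelter"),
                ("safety", "water_hole"), ("safety", "fruit_tree")}

def utility(target, action, need):
    """Toy multiplicative utility of one target-action-motivation combination."""
    return targets[target] * actions[action] * needs[need]

# Score all combinations "in parallel", keep those satisfying the constraints,
# and let the maximum win the behavioral final common path.
feasible = [(utility(t, a, n), t, a, n)
            for t, a, n in itertools.product(targets, actions, needs)
            if (n, t) not in incompatible]
score, target, action, need = max(feasible)
print(f"next act: {action} {target} (serving {need}, utility {score:.2f})")
```
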
A number of circumstances conspire to make the geometry of the
orienting domain a promising framework for parallel implementation
of constraint satisfaction among the brain's three principal task clusters.
Two of these, target selection and action selection, are directly matched
by the world and body zones of the orienting domain, already cast
in parallel, spatially mapped formats. Even on their own, their nested
arrangement must satisfy a number of mutual constraints to set contents
of the body map properly turning and translating inside those of the
stabilized world map. To that end it must utilize information derived from
cerebellar decorrelation (Dean et al. 2004); vestibular head movement
signals, eye movements, and visual flow patterns (Brandt et al. 1973).
Moreover, implemented in the setting of the orienting domain, constraint
satisfaction for the decision domain would be spared the need to address
discrepant sensory alignment and data formats already managed in the
arrangement of the orienting domain.
Add to this the circumstance, already noted, that the gaze leads the
rest of the body in the execution of behavior (Johansson et al. 2001). This
means that the decision domain's most immediate and primary output,
the very next action, is typically a gaze shift, that is, the very same
combined eye and head movements that furnish the rationale for casting
the orienting domain in nested rotation-based format. Taken together
these indications suggest that the two domains lend themselves to joint
implementation in a single unitary neural mechanism.

1.3.3 A forum for the brain's final labors


The fact that both the decision domain and the orienting domain find
themselves perpetually perched on the verge of a gaze movement means
that there is no time to conduct constraint satisfaction by serial assess-
ment of the relative utility of alternative target-action-motivation com-
binations. To be accomplished between gaze shifts, constraints must be
settled by a dynamic process implemented in parallel fashion, as already
mentioned. In this process, the late position of both domains in the
run-up to overt behavior (i.e., the behavioral final common path) is a
major asset: at that point all the brains interpretive and inferential work
has been carried as far as it will go before an eye movement introduces yet
another change in its assumptions. Image segmentation, object identity,
and scene analysis are as complete as they will be before a new move-
ment is due. Since the neural labor has been done, there is every reason
to supply its results in update form to the orienting domain mechanism
involved in preparing that movement.
The orienting domain would thus host a real-time global synthetic
summary of the brain's interpretive labors, and is ideally disposed for
the purpose. The spatial nesting of neural maps of body and world in
itself imposes a requirement for three spatial dimensions on their shared
geometric space. Extending such a space to accommodate arbitrarily rich
three-dimensional object detail is a matter of expanding the resolution
of this three-dimensional space but involves no new principle. While the
brain's many probabilistic preliminaries require vast neural resources, a
space hosting a global estimate of their outcomes does not (see further
Merker 2012). About a million neurons presumably suffice to compactly
represent our visual world at full resolution in its seven basic dimensions
of variation (Rojer and Schwartz 1990, p. 284; Adelson and Bergen 1991;
Lennie 1998, pp. 900–901; see also Watson 1987). Vision is by far our
most demanding modality in this regard, so a full multi-modal sensory
synthesis might be realizable in a neural mechanism composed of no
more than a few million neurons (Merker 2012).
In and of itself such a mechanism would only supply a format for
whatever facts or estimates the rest of the brain extracts from its many
sources of information by inferential, probabilistic means. Within its con-
fines these results would be concretely instantiated in spatially mapped
fashion through updates to a fully elaborated multi-modal neural model.
Its contents would reflect, in other words, the brain's best estimate of its
current situation in three-dimensional space with full object-constellation
spatial detail. It would provide the brain with a stand-in for direct access
to body and world, denied it by its solitary confinement inside its skull.3
Every veridical detail entered into the neural reality space from the
rest of the brain would have the effect of fixing parameters of decision
domain variables, foreclosing some action options, and opening others.
The decision domain's options are then those that remain unforeclosed
by the model's typically massive supply of veridical constraints. Real-
time constraint satisfaction accordingly would be concerned only with
residual free parameters in this rich setting of settled concrete fact. Think
of the latter as "reality", of the action possibilities latent in residual free
parameters as "options" within it, and of the neural mechanism as a
whole as the brain's "reality model". Its principal task is essentially no
more than to determine how to continue efficiently, given the rich set
of constraints already at hand, determined by convergent input from the
rest of the brain. In the language of the gestalt psychologist we might say
that the task of the reality model is to educe global good continuation given
current states of world, body, and needs. The preliminaries are conducted
elsewhere and the reality model settles on the global best estimate.
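The flavor of such parallel settling between gaze shifts can be conveyed by a toy relaxation network in the spirit of the constraint satisfaction literature cited in this chapter (Hinton 1977; Rumelhart et al. 1986). The variables, weights, and update rule below are generic illustrations, not the neural implementation proposed here; the point is merely that a clamped veridical fact forecloses the options that conflict with it, while the residual free parameters settle around it:

```python
import random

# Toy parallel constraint satisfaction by relaxation. A clamped "veridical"
# variable forecloses the options that conflict with it; the remaining free
# variables settle into a globally consistent configuration. Weights and
# update rule are illustrative assumptions, not the chapter's proposal.

weights = {          # symmetric compatibilities among four option variables
    (0, 1): +1.0,    # options 0 and 1 support one another
    (0, 2): -1.0,    # options 0 and 2 conflict
    (1, 3): -1.0,
    (2, 3): +1.0,
}

def w(i, j):
    return weights.get((i, j), weights.get((j, i), 0.0))

state = [0.5, 0.5, 0.5, 0.5]   # graded activations of the option variables
clamped = {0: 1.0}             # a veridical input fixes variable 0
state[0] = clamped[0]

for _ in range(100):                      # settle between two gaze shifts
    for i in random.sample(range(4), 4):  # parallel in spirit, random order
        if i in clamped:
            continue                      # settled facts are not free
        net = sum(w(i, j) * state[j] for j in range(4))
        state[i] = min(1.0, max(0.0, state[i] + 0.1 * net))

print([round(s, 2) for s in state])       # settles to [1.0, 1.0, 0.0, 0.0]
```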
At this penultimate stage of the brain's final labors, the options reside in
whatever combinations of targets in the world and bodily actions are still
available for filling motivational needs. The process of selecting among
alternate such combinations saddles a potential decision mechanism with
a set of functional requirements, the most basic of which is global and
parallel access to both world and body zones. Much remains to be learned
by formal modeling about how these requirements might be fulfilled in
the proposed format, but one of its features is bound to figure in any
plausible solution. This is the fact that the orienting domain contains
a location with equal and ubiquitous access to both body and world
zones. That location is the egocenter serving as origin for the geometry
that holds the two zones together. As such it maintains a perpetually
central position vis-à-vis the ever-shifting constellations of world and
body, and accordingly offers an ideal nexus for implementing decision
making regarding their combined states.

3 As such it would be in receipt of signals from any system in the brain, cortical or
subcortical, relevant to decision making aimed at the very next action (typically a targeted
gaze movement, as we have seen). The decision making in question therefore should by
no means be identified with deliberative processes or prefrontal executive activity. Such
activities serve as inputs among many others converging on the more basic and late
decision process envisaged here (see Merker 2007b, pp. 114, 118).
Let its place in the system be taken, then, by a miniature analog map,
compactly housing an intrinsic connectivity (presumably inhibitory)
dedicated to competitive decision making (Merker 2007a, p. 76; see also
Richards et al. 2006). This central map would be connected with both
world and body zones, in parallel and in tandem as far as its afferents
go, but with principal efference directed to eye and head movement
control circuitry associated with the body zone (cf. Deubel and Schneider
1996). This competitive mechanism lodged at the system's egocentric
origin might be regarded as either a decision maker or a monitoring
function, depending on which stage of its dynamic progress towards its
principal output (triggering the next gaze shift) is under consideration.
Its monitoring aspect is most in evidence at low levels of situational
decision-making pressure. It is marked e in Fig. 1.2.
So far, this decision nexus lacks one of the three principal sources of
afference it needs in order to settle residual options among the brain's
three task clusters, namely afference from the composite realm of moti-
vational variables. Again, the orienting domain offers a convenient topo-
logical space, so far unused, through which a variety of biasing signals
of this kind can be brought to bear on the decision mechanism. While
the world zone must extend up to the very surface of the body map,
there is nothing so far occupying the space between that surface and the
decision nexus occupying the egocenter inside its head representation.
That space can be rendered functional through the introduction of a vari-
ety of biasing signals, motivational ones among them, along and across
the connectivity by which the decision nexus is interfaced with the body
and world zones. This would embed the miniature analog decision map
in a system of afferents of diverse origin injecting bias into its decision
process, as depicted schematically in Fig. 1.2.
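For concreteness, here is one generic way such a biased competition could run: a winner-take-all network with global mutual inhibition of the kind just gestured at. The constants, and the motivational bias favoring one target, are illustrative assumptions only, not commitments of the present proposal:

```python
import numpy as np

# Winner-take-all sketch of the central decision map: candidate gaze
# targets compete via global mutual inhibition, with a motivational bias
# added to their afference. All constants are illustrative assumptions.

rng = np.random.default_rng(0)

salience = np.array([0.55, 0.60, 0.40])   # sensory evidence for 3 targets
bias     = np.array([0.00, 0.00, 0.35])   # e.g., hunger favoring target 2

x = rng.uniform(0.0, 0.01, 3)             # activations on the decision map
INHIBITION, DT = 1.2, 0.1

for _ in range(300):
    drive = salience + bias               # biased afference to the map
    # each unit is driven by its input and inhibited by all the others
    dx = drive - x - INHIBITION * (x.sum() - x)
    x = np.clip(x + DT * dx, 0.0, None)

print("next gaze target:", int(np.argmax(x)))   # bias tips the competition
```

Without the bias term the same network would select the most salient target; the motivational signal tips the competition without itself contributing any spatially articulated content, in keeping with its role as described above.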
Interposed between the model body surface and the egocentric deci-
sion nexus, this multi-faceted system of extrinsically derived signals
would interact with those derived from body and world zones in their
influence on the central decision nexus. Current values of motivational
state variables would thus be introduced into the constraint satisfaction
regime as a whole (see Sibly and McFarland 1974 for a state-space treat-
ment of motivation). In keeping with their biasing function they would
not assume the spatially articulated forms characterizing the contents of
world and body zones, but something more akin to forces, fields, and ten-
sional states. Even then, each would have some specificity reflecting the
source it represents, implemented by means of differential localization
within the space enclosed by the model body surface. There they would
in effect supply the neural body with what amounts to agency vectors,
animating it from within, as it were. Motivational needs undoubtedly
represent the principal source of such signals, but the functional logic
is not limited to these. Other signals of global import, such as the out-
come of a range of cognitive and memory-based operations conducted
elsewhere, could enter the scheme as biasing signals in this manner.

Fig. 1.2 Two constituents of the decision domain embedded in the
schematism of the orienting domain of Fig. 1.1. The egocenter zone
(e) has been equipped with a decision-making mechanism based on
global mutual inhibition or other suitable connectivity. It is connectively
interfaced with both the world and body maps, as before, but in addition
with a zone of afferents interposed between the spatially mapped body
surface and the decision mechanism. This zone introduces biasing sig-
nals derived from a set of motivational systems and other sources into the
operation of the central decision mechanism. These biases are depicted
as sliding annular sectors, each representing a different motivational
system (hunger, fear, etc.). Each has a default position from which it
deviates to an extent reflecting the urgency of need signalled by a cor-
responding motivational system outside of the orienting domain itself
(collectively depicted in the lower portion of the figure). The central
decision mechanism e is assumed to direct its output first and fore-
most to circuitry for eye and head movement control associated with the
body zone. Image by Bjorn Merker licensed under a Creative Commons
Attribution-NonCommercial-NoDerivs 3.0 Unported License.
The introduction of the biasing mechanism into the body interior of
the reality model completes in outline the mechanism as a whole. Each of
its components (three separate content zones hosting bias, body, and
world contents, nested and arrayed in tandem around a central egocen-
trically positioned decision nexus) is essential to its mode of operation.
The mechanism is unlikely to serve any useful purpose in the absence
of any one of them. It is thus a functionally unitary mechanism which
allows the highly diverse signals reflecting needs, bodily conformations,
and world circumstances to interact directly. Dynamic interactions across
the three within their shared coordinate space supply a kind of func-
tional "common currency" (cf. McFarland and Sibly 1975; Cabanac
1992) that allows the brain to harvest in real time the savings hidden
among their multiple mutual dependencies (Merker 2007).
The substantial and mutually reinforcing advantages of implementing
constraint satisfaction among the brain's principal task clusters in the
setting of the orienting domain suggest that the brain may in fact have
equipped itself with an optimizing mechanism of this kind.4 Whether it
actually has done so can only be determined by empirically canvassing
the brain for a candidate instantiation of a neural system that matches
the functional properties of the proposed mechanism point for point.
Preliminary results of pursuing such a search into the functional anatomy
of the brain are contained in a recent publication of mine (Merker 2012).
Here, however, we are concerned only with the functional implications
and consequences of such a hypothetical arrangement, and turn to one
of those consequences next.

4 The present account violates two programmatic commitments of the subsumption
architecture of behavior-based robotics introduced by Brooks (1986), namely the
stipulations "little sensor fusion" and "no central models". It would therefore seem to
have to forego some of the advantages claimed for subsumption architectures, but this
is only apparently so. The reality model of the present account is assumed to occupy
the highest functional level (which is not necessarily cortical; see Merker 2007a) of
such an architecture without being the sole input to behavior control. The initial phase
of limb withdrawal on sudden noxious stimulation and the vestibulo-ocular reflex are
examples of behaviors which access motor control independently of the reality model.
See Merker (2007a, pp. 69, 70, and 116) for further details.

1.3.4 A curious consequence of combined implementation

All three content categories featured in the joint mechanism, whether as
biases, body conformations, or world constellations, have this in common:
that their specific momentary states and contents are determined by
brain mechanisms outside the confines of the reality model itself. Its own
structural arrangement, shorn of its externally supplied contents, is no
more than a neural space connectively structured for interactions across
its nested subspaces, so as to provide as efficient a format as
possible for hosting a running synthetic summary of the interpretive work
of the rest of the brain, a global best estimate of its current situation
(Merker 2012). In its own terms this mechanism is, in other words, a
pure functional format. It is designed for decision-making over all possible
combinations of motivated engagement between body and world, and
not for any specific such combination. It is a format, in other words, of
ubiquitously open options.
This aspect of the mechanism follows directly from its postulated
implementation of a running constraint satisfaction regime. To serve
in this capacity the reality model must be capable of representing every
possible motivated body-world combination in order to settle optimality
among them. We can call the mechanism that hosts such a process a
relational plenum. As we saw in the previous section, the entry of veridical
content from the rest of the brain into this framework forecloses open
option status for the corresponding portions of this plenum, leaving the
remainder causally open. In the end, what remains of these open causal
options at the point where the next action is imminent is a matter only
for the decision mechanism lodged at its core. These are the residual
options with which its decision process engages to settle on a choice that
triggers the next gaze movement. We might figuratively picture this in
terms of these residual options being interposed, functionally speak-
ing, between the decision nexus and the currently specified contents of
the mechanism as a whole.
The role of residual options as mediators between decision nexus and
contents is played out in a decidedly asymmetric context: the egocentric
format of the mechanism as a whole ensures that the decision nexus itself
is categorically excluded from the mechanism's contents. It occupies the
central location from which all contents are accessed in such a geometry
and its implementing connectivity. Taken together these circumstances
amount to a global functional bipartition of the model space into a cen-
tral decision or monitoring nexus on the one hand and, on the other, a
relational plenum (of biases, body, and world) whose given contents are
interfaced with the centrally placed decision nexus across intervening por-
tions of the plenum that remain in open option status. Such a partitioning
of a coherent connective space into a decision nexus or monitor on the
one hand, and monitored content on the other, with causal options inter-
vening, in fact supplies the most fundamental condition for a conscious
mode of operation, to be considered more fully in the section that follows.
1.4 Inadvertently conscious


It is time to step back from this conjectural model construction to see
what has been done. A number of neural mechanisms with particular
design features have been proposed as solutions to problems encoun-
tered by a brain charged with acting as body controller from a position
of solitary confinement inside its skull. The design features were intro-
duced not as means to make the brain conscious, but rather to provide
solutions for functional problems the brain inevitably faces in informing
itself about the world and controlling behavior. The logistical advan-
tages of implementing a mechanism of multiple constraint satisfaction
for optimal task fusion within a mechanism serving sensor fusion
suggested an integrated solution in the form of a reality model. Within
its nested format the brain's interpretive labors would receive a perpet-
ually updated synthetic summary reflecting the current veridical state of
its surroundings, of its body, and of its needs. The core of its egocentric
frame of reference would host a decision nexus spatially interfaced with
a world map from inside a body map, and subject to action dynam-
ics driven, ultimately, by motivational needs and serving behavioral
optimization.
A conservative version of the foundational conjecture for the approach
to the conscious state proposed here can be formulated as follows. A
comprehensive regime of constraint satisfaction among the brain's principal
task clusters implemented within the framework of an egocentrically mapped
body-in-world solution to its sensors-in-motion problem operates consciously
by virtue of this functional format alone. This claim rests, ultimately,
on the nature of a fundamental functional asymmetry that inheres in
such a reality model's mode of operation. That asymmetry is geo-
metric as well as causal. In geometric terms, the decision nexus, by
virtue of its location at the origin of the model's egocentric geome-
try, stands in an inherently asymmetric (perspectival) relation to any
content defined by means of that geometry. This geometric asymmetry
defines a first-person perspective as inherent to the operation of the reality
model.
In causal terms, open options intervene between the decision nexus
and the models veridically specified contents. Options are inherently
asymmetric: they are options only for that which has them, which in this
case is the decision mechanism at the egocentric core of the reality model.
Causally speaking, the options are that about which the decision nexus
makes its decisions, and geometrically speaking the decision nexus is that
for which the veridically specified contents of the rest of the mechanism
are in fact contents. This causal asymmetry invests the decision nexus,
as the mechanism that settles upon one or another of the open options,
with agency relative to the contents of the reality model.
In the setting of the model's spatial organization, this places the deci-
sion nexus in the presence of those contents in the sense that its egocen-
trically defined "here" is separated from the "there" of any and all con-
tents, and relates to them via relationally unidirectional causal options.
Relative to the vantage point of the decision nexus such a state of affairs
has all the attributes of a first-person perspective imbued with agency,
and accordingly defines a conscious mode of operation for the reality
model it serves (see Merker 1997 and Merker forthcoming for additional
detail).
To be clear on this crucial point of attribution of conscious status: the
crux of the matter is neither decision making itself nor its occurrence
in a neural medium. Decisions are continually being made in numerous
subsystems of the brain (in fact wherever outcomes are settled compet-
itively) without for that reason implying anything regarding conscious
status. It is only in the setting of the orienting domain, on account of the
interactions mandated by global constraint satisfaction within its egocen-
tric and spatially nested format, that decision making of the particular
kind we have considered has this consequence. This is because it entails
a global partitioning of the decision space into an asymmetric relation
between monitor and monitored, marked by an intervening set of causal
options. It is on account of this functional asymmetry that an inherently
perspectival relation between an agent, the decision-making vantage
point (egocenter), and the tri-partite contents of the reality space (bias,
body, world) is established. Such a relation is the defining feature of
the first-person perspective of consciousness, and of it alone, rendering
the mechanism that implements it conscious. It operates consciously,
that is, by virtue of this functional format alone, and not by virtue of
anything that has been or needs to be added to it in order to make it
conscious.
The only natural setting in which such a format is likely to arise would
seem to be the evolution of centralized brains, given the numerous spe-
cific and interlocking functional requirements that must conspire in order
to fashion such a format. Its functional utility is predicated solely on the
predicament of a brain captive in a skull and under pressure to opti-
mize its efficiency as controller. Since the proposed mechanism would
generate savings in behavioral resource expenditure, it would hardly be
surprising if some lineages of animals, our own included, had in fact
evolved such a mechanism. If, therefore, the claim that such a functional
format defines a conscious mode of operation is sound, it would be worth
examining the thesis that it is the so-far hypothetical presence of such a
mechanism in our own brains that accounts for our status as conscious
beings. For that to be the case we ourselves would have to be a part of,
and a quite specific part of, that mechanism. This follows from the fact
that the functional asymmetry at the heart of the mechanism ensures
that the only way to attain to consciousness on its terms is to occupy the
position of egocentrically placed decision maker within it. Let us exam-
ine, therefore, the fit of that possibility with some of what we know about
our own conscious functioning.
Consider, first, the curious format of our visual waking experience: that,
namely, by which we face, from a position inside our head, a surrounding
panoramic visual world through an empty, open, cyclopean aperture in
our upper face region. Anyone can perform the Hering triangulation to
convince themselves that the egocentric perspective point of their visual
access to the world is actually located inside their head, some 4 cm right
behind the bridge of the nose (Roelofs 1959; Howard and Templeton
1966). That, in other words, is the point from where we look. But that
point, biology tells us, is occupied and surrounded on all sides by biological
tissues rather than by empty space. How then can it be that looking from
that point we do not see body tissues, but rather the vacant space through
which in fact we confront our visual world in experience?
From the present perspective such an arrangement would be the brain's
elegant solution to the problem of implementing egocentric nesting of
body and world, given that the body in fact is visually opaque, but the
egocenter must lodge inside the model body's head, for simplicity of
compensatory coordinate transformations between body and world. The
problem the brain faces in this regard is the following: the body must
be included in the visual mapping of things accessible from the egocen-
tric origin inside the bodys head. Such inclusion is unproblematic for
distal parts of the body, which can be mapped into the reality space as
any other visual objects in the world. However, in the vicinity of the
egocenter itself, persistence in the veridical representation of the body
as visually opaque would block the egocenter from visual access to the
world, given the egocenter's location inside the head representation of the
body.
In this mapping quandary the brain has an option regarding the design
of a model neural body that is not realizable in a physical body. That
option is to introduce a neural fiction into the reality model. This is the
cyclopean aperture through which the egocenter is interfaced with the
visual world from its position inside the head (Merker 2007a, p. 73). But
this is exactly the format in which our visual experience demonstrably
comes to us, namely that format of direct confrontation of the surround-
ing world from inside the body which naive realism erroneously interprets
as a direct confrontation of the physical universe.
We actually do find ourselves looking out at the world from inside our
heads through an oval empty aperture. This view, though for one eye
only, is captured in Ernst Mach's classical drawing, reproduced here as
Fig. 1.3 (Mach 1897). When both eyes are open the aperture is a symmet-
rical ovoid within which the nose typically disappears from view. What
then is this paramount fact of our conscious visual macrostructure other
than direct, prima facie, evidence that our brain in fact is equipped with
a mechanism along the lines proposed here, and that we do in fact form
part of this mechanism by supplying its egocentric perspectival origin?
This body, which we can see and touch and which is always present
wherever we are and obediently follows our every whim, would accord-
ingly be the model neural body contrived as part of the brain's reality
model. And this rich and complex world that extends all around us and
stays perfectly still, even when we engage in acrobatic contortions, would
be the brain model's synthesized world, stabilized at considerable neural
expense. How else could that world remain unaffected by those contor-
tions, given that visual information about it comes to us through signals
from our retinae, organs which flail along with the body during those
contortions?
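The compensatory bookkeeping this requires is, at bottom, a coordinate transformation. The following two-dimensional sketch is purely illustrative (no claim about neural implementation is intended): a landmark registered in rotating, gaze-centered coordinates is recovered in a stable egocentric frame by applying the inverse of the current gaze rotation:

```python
import numpy as np

# Why the model world can stay still while the eyes move: a landmark
# registered in gaze-centered (retinal) coordinates is mapped back into a
# stable egocentric frame by the inverse of the current gaze rotation.
# Two-dimensional and purely illustrative; the nested body-world geometry
# of the text is of course far richer.

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

landmark = np.array([1.0, 0.0])               # fixed point, egocentric frame

for gaze in np.radians([0.0, 15.0, -30.0]):   # three successive fixations
    retinal = rot(-gaze) @ landmark           # what the rotated eye registers
    stable = rot(gaze) @ retinal              # compensatory transformation
    print(np.round(stable, 3))                # identical on every fixation
```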
From this point of view a number of phenomena pertaining to the
nature and contents of our consciousness can be interpreted as products
of the workings of the proposed reality model and of our suggested place
as decision-making egocenter within its logistics. Recall the suggestion
that the representationally unutilized space between the model neural
body wall and the egocenter lodged inside of it could be used to introduce
a variety of biasing signals from motivational and other systems. This is
where in fact we do experience a variety of impulses and tensional states
impelling us to a variety of actions in accordance with their location and
qualitative attributes. Motivational signals such as hunger, fear, bladder
distension, and so forth, do in fact enter our consciousness as occurrences
in various parts of our body interior (such as our chest region). Each of
these variously distributed biases feels a bit different and makes us want
to do different things (Sachs 1967; Izard 1991, 2007). Thus bladder dis-
tension is not only experienced in a different body location than is hunger
or anger, it feels different from them, and each impels us to do different
things. Common to them all is their general, if vague, phenomenological
localization to the body interior, in keeping with what was proposed in
the section devoted to the decision domain.
Far from all bodily systems and physiological mechanisms are thus
able to intrude on our consciousness, or have any reason to do so. An
analysis by Morsella and colleagues shows that those among them that
do so involve, in one way or another, action on the environment (or
on the body itself) by musculoskeletal means (Morsella 2005; Morsella
and Bargh 2007; Morsella et al. 2010). This principle fits well with the
present perspective, which traces the very existence and nature of con-
sciousness to functional attributes of a mechanism designed to optimize
exactly such behavioral deployment. Thus, the regulation of respiration
is normally automatic and unconscious, but when blood titres of oxygen
and carbon dioxide go out of bounds it intrudes on consciousness in the
form of an overwhelming feeling of suffocation (Liotti et al. 2001; see also
Merker 2007a, p. 73). Correcting the cause of such blood gas deviations
may require urgent action on the environment (say, to remove a physical
obstruction from the airways or to get out of a carbon dioxide-filled pit).
The critical nature of the objective matches the urgency of the feeling
that invades our consciousness in such emergencies. For additional con-
siderations and examples bearing on the relation between motivational
factors and consciousness, see Cabanac (1992, 1996) and Denton et al.
(2009).

Fig. 1.3 Ernst Mach's classical rendition of the view through his left
eye. Inspection of the drawing discloses the dark fringe of his eyebrow
beneath the shading in the upper part of the figure, the edge of his
moustache at the bottom, and the silhouette of his nose at the right-
hand edge of the drawing. These close-range details framing his view
are available to our visual experience, particularly with one eye closed,
though not as crisply defined as in Mach's drawing. In a full cyclo-
pean view with both eyes open the scene is framed by an ovoid within
which the nose typically disappears from view (see Harding, 1961, for a
detailed first-person account). Apparently, Mach was a smoker, as indi-
cated by the cigarette extending forward beneath his nose. The original
drawing appears as Figure 1 in Mach (1897, p. 15). It is in the pub-
lic domain, and is reproduced here in a digitally retouched version,
courtesy of Wikimedia (http://commons.wikimedia.org/wiki/File:Ernst_
Mach_Innenperspektive.png, accessed March 1, 2013).
Just as many bodily processes lack grounds for being represented in
consciousness, so do many neural ones. As noted in the introduction,
the busy neural traffic that animates the many way stations along our
sensory hierarchies is not accessible to consciousness in its own right.
Only its final result, a synthetic product of many such sources con-
jointly, enters our awareness: the rich and multi-modal world that sur-
rounds us (Merker 2012). There is no dearth of evidence regarding neural
activity unfolding implicitly without entering consciousness (for vision
alone, see Rees 2007 and references therein). This includes activity at
advanced stages of semantic interpretation, motor preparation at the
cortical level, and even instances of prefrontal executive activity (Luck
et al. 1996; Dehaene et al. 1998; Eimer and Schlaghecken 1998; Frith
et al. 1999; Gaillard et al. 2006; Lau and Passingham 2007; van Gaal et al.
2008).
One straightforward interpretation of this kind of evidence assigns
conscious contents to a separate and dedicated neural mechanism, as
proposed under the name of the "conscious awareness system" by Daniel
Schacter (1989, 1990). The present conception of a dedicated reality
model is in good agreement with that proposal in its postulation of a
unitary neural mechanism hosting conscious contents. In fact, on the
present account, the reality model must exclude much of the brain's
ongoing activity, sensory as well as motor, in order to protect the
integrity of its constraint satisfaction operations. To serve their purpose
those operations must range solely over the model's internal contents
with respect to one another, and they should occur exclusively within
the nested format that hosts them in the reality space. Such functional
independence is presumably most readily achieved through local and
compact implementation of the model in a dedicated neural mechanism
of its own.5

5 The cerebral cortex appears to offer a most inhospitable environment for such an
arrangement. The profuse, bidirectional, and exclusively excitatory nature of cortical
inter-areal connectivity poses a formidable obstacle to any design requiring a modicum
of functional independence (see Merker 2004). There is also no known cortical area (or
combination of areas) whose loss will render a patient unconscious (cf. Merker 2007a).
On the present account, the cerebral cortex serves, rather, as a source of much of the
sophisticated information utilized by the model's reality synthesis, supplied to it by
remote and convergent cortical projections. Candidate loci of multi-system convergence
are of course available in a number of subcortical locations. See Merker (2012) for
further details.
This need to keep the constraint satisfaction operations of the real-
ity space free of external interference has a crucial consequence bear-
ing on the operation of our consciousness. The need to update the
model's contents has been repeatedly mentioned, but never specified.
As we have seen, the reality space arrives at the brain's best estimates
of current states of world, body, and needs on the basis of convergent
inputs as a means to deciding, through constraint satisfaction among
them, on an efficient next move. With pressure on decision making at
times extending down to subsecond levels (think of fencing, for example)
constraint settling would typically fill the entire interval from one deci-
sion to the next. Externally imposed parameter changes in the course of
this might prolong constraint settling indefinitely. This means that con-
tents must not be updated until a decision has been reached, and then
updating must occur wholesale and precipitously. Wholesale replace-
ment of contents is feasible, because the sources that delivered prior
content are available at any time an update is opportune. The ideal time
to conduct such a "refresh" or "reset" operation is at the time of the
gaze shift (Singer et al. 1977; Henderson et al. 2008) or body move-
ment (Svarverud et al. 2010) that issues from a completed decision pro-
cess. As already noted, such movements in and of themselves alter the
presuppositions of decision making, rendering prior reality space content
obsolete.
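In schematic form, the cycle just described might be rendered as follows. The function names and stubs are mine, purely for illustration: settling runs to completion on a frozen snapshot of contents, and only the gaze shift issuing from a completed decision triggers their wholesale replacement:

```python
import itertools

# The update logic in sketch form: constraint settling runs on a frozen
# snapshot of model contents; only the gaze shift issuing from a completed
# decision triggers a wholesale replacement ("reset") of those contents.
# All names and stubs here are illustrative, not the chapter's mechanism.

def reality_model_cycles(sense_snapshot, settle, execute_gaze_shift, n_cycles):
    contents = sense_snapshot()          # frozen best-estimate contents
    for _ in range(n_cycles):
        decision = settle(contents)      # settling ranges only over the
                                         # snapshot; no mid-run updates
        execute_gaze_shift(decision)     # the completed decision acts...
        contents = sense_snapshot()      # ...then prior contents are
                                         # replaced wholesale

frames = itertools.count()
reality_model_cycles(
    sense_snapshot=lambda: {"frame": next(frames)},  # stub sensory synthesis
    settle=lambda c: f"gaze target settled on frame {c['frame']}",
    execute_gaze_shift=print,
    n_cycles=3,
)
```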
The same logic applies to instances in which sudden change is detected,
signalled by a transient that attracts attention and typically (but not
necessarily) a gaze shift to the change. Such a change also alters the
presuppositions of the models current operation, again favoring whole-
sale resetting of its contents. When transients are eliminated by stim-
ulus manipulation in the laboratory, a change that otherwise would be
conspicuous goes undetected (Turatto et al. 2003). Assuming, then, that
the reality model's contents are subject to repeated wholesale replace-
ment, the reality it hosts has no other definition than the constellation of
its current content alone, maintained as such only till the next reset. Since
in present terms reality space contents are the contents of our sensory
consciousness, this means that we should be oblivious to veridical sen-
sory changes introduced at the exact time of a reset. So we are, indeed, as
demonstrated by the well-documented phenomenon of change blindness
(Simons and Ambinder 2005).
The only conscious contents that appear to reliably escape oblivion
in the reset are those held in focal attention (Rensink et al. 1997), a
circumstance most readily interpretable as indicating a privileged rela-
tion between the contents of focal attention and memory (Turatto et al.
2003; see also Wolfe 1999, and Merker 2007a, p. 77), allowing pre- and
post-reset focal content to be compared. Focal attention and its contents
accordingly would be the key factor maintaining continuity of conscious-
ness across frequent resets of model content, as might be expected from
the central role of a competitive decision mechanism of limited capac-
ity at its operational core. The more intense and demanding the focal
attentional engagement, the higher the barrier against its capture by an
alternate content, as dramatically demonstrated in inattentional blindness
(Mack 2003; Most et al. 2005; see also Cavanagh et al. 2010 for further
considerations relevant to update operations).
More generally, our limited capacity to keep in mind or track inde-
pendent items or objects simultaneously (Miller 1956; Mandler 1975;
Cowan 2001) presumably reflects the late position of the reality model
(consciousness) in the brain's functional economy. As emphasized pre-
viously, the decision nexus engages only the final (residual) options to
be settled in order to trigger the next gaze movement (and reset), and
as such forms the ultimate informational bottleneck of the brain as a
whole. It receives convergent afference from the more extensive (though
still compact) world, body, and bias maps of the reality space, and they
in turn are convergently supplied by the rest of the brain. Some such
arrangement is necessary if the brains distributed labors are to issue,
as they must, in unitary coherent behavioral acts (McFarland and Sibly
1975). Moreover, in its capacity as final convergence zone, the decision
nexus brings the informational advantages of the quite general bottleneck
principle to bear on the model's global optimization task (Damasio 1989;
Moll and Miikkulainen 1997; Kirby 2002).
The crucial position of the decision nexus at the core of the reality space
may account for a further, quite general, aspect of our consciousness as
well: our sense of facing the world as an arena of possibilities within
which we exercise choice among alternative actions. As we have seen, the
models operational logic interposes causal options between the decision
nexus and the veridical contents of the reality space. Our sense of having
options and making choices, a sense that presumably underlies the
concept of free will, thus corresponds, on this account, to a reality. It
follows as a direct corollary of our own hypothesized status as decision
nexus at the core of the brain's reality model, determining what to do
next under the joint influence of a rich set of veridical constraints and
a few residual causal options. In our capacity of decision mechanism,
its deterministic decision procedure is our choice among options. This
sense of free choice among options, moreover, would be expected to vary
inversely with the pressure of events on decision-making, as indeed it
appears to do on intuitive grounds.
Finally, to conclude this brief survey of aspects of the proposed mech-
anism that can be directly related to the nature of our conscious func-
tioning, consider the fact that the objects of our sensory awareness have
"sidedness" and "handedness". This fact cannot be captured by any set
of measurements on the objects themselves and was noted by William
James as a puzzle requiring explanation (James 1890, vol. II, p. 150). In
present terms, it follows from the fact that in our position at the egocen-
tric core of the nested geometry framing our conscious contents we have
no choice but to relate to those contents perspectivally. It is a direct con-
sequence of the egocentric arrangement of the reality space as a whole.
Though we cannot see the back side of objects, we can of course know
that they have one. Allocentric representations are accordingly cognitive
rather than perceptual ones.
Figure 1.4 provides a summary diagram of the present conception
as a whole, depicting the nested phenomenal space of the neural reality
model in its nesting within a larger noumenal setting (to use Kant's precise
terminology) of physical brain, physical body, and physical world. All our
experience moves within the confines of the phenomenal space alone,
and this phenomenal space in its entirety (featuring egocenter, body, and
world) is nested inside a physical (noumenal) brain, body, and world, to
result in a doubly nested ontology of the whole.

1.5 Conclusion: A lone but real stumbling block on the road to a
science of consciousness
As an exercise intended to bring out the ontological implications of the
approach to the conscious state introduced here, consider being asked to
indicate the approximate location of the brain which on present terms
hosts the neural model that furnishes you with the reality you are at that
moment experiencing. Consider doing so in the experienced space it
makes available to you, extending from inside your body, to its surface
and beyond it through the world all the way to the horizon (Trehub 1991;
Lehar 2003). The task is, in other words, to indicate, in first-person terms,
within the space accessible to experience from a first-person perspective,
the location of the brain within which, on present terms, that space is
realized. Where in that space of your conscious reality is the brain that I
claim synthesizes that conscious reality located? The answer provided by
the present account is, of course, that there is no such location because
that entire perceived space is a neural artifice contrived for control purposes
in a dedicated neural mechanism inside the brain you are asked to localize.
To ask you to localize the containing brain inside the space it contains is to
ask for the impossible, obviously.

Fig. 1.4 The full ontology of the consciousness paradigm introduced
in the text. The joint orienting and decision mechanism illustrated in
Fig. 1.2 supplies a neural format defining a conscious mode of operation
by virtue of its internal organization, as explicated in the text. It forms
a unitary and dedicated subsystem set off from the rest of the brain
by the heavy black line, beyond which neural traffic is assumed to take
place without consciousness. Broad white arrows mark interfaces across
which neural information may pass without entering consciousness.
The physical brain, which (except for its phenomenal subsystem) is
devoid of consciousness, is part of the physical body, in turn part of the
physical world, both depicted in black in the figure. See text for further
detail. Image by Bjorn Merker licensed under a Creative Commons
Attribution-NonCommercial-NoDerivs 3.0 Unported License.
Strictly speaking there is no unique location to point to, but if one
nevertheless were to insist on trying to point, all directions in which one
might do so would be equally valid. In particular, pointing to one's head
would be no more valid than pointing in any random direction. That
head, visually perceptible to the first person only at the margin of the
cyclopean aperture through which we look out at the world, is but a part
of the model body inside the model world of the brain's reality space.
This was already recognized by Schopenhauer when he declared this
familiar body, which we can see and move, to be "a picture in the brain"
(Schopenhauer 1844, vol. II, p. 271).
I hasten to offer my assurances that I am no less a slave to the inerad-
icable illusion of naive realism than anyone else. I cannot shake the
impression that in perception I am directly confronting physical reality
itself in the form of a mind-independent material universe, rather than
a neurally generated model of it. Far from constituting counter-evidence
to the perspective I have outlined, this unshakeable sense of the reality
of experienced body and world supports it, because it is exactly what the
brains reality model would have to produce in order to work, or rather,
that is how it works. It defines the kind of reality we can experience, and
its format is that of naive realism: through the cyclopean aperture in our
head we find ourselves directly confronting the world that surrounds
us, taking it to be the physical universe itself. In fact it is a neural model
of it, serving as our reality by default. This elaborate neural contrivance
repays, or rather generates, our trust in it by working exceedingly well:
the brain's model world is veridical (Frith 2007). It faithfully reflects
those aspects of the physical universe that matter to our fortunes within
it, while sparing us the distraction of having to deal with the innumerable
irrelevant forms of matter and energy that the physical universe actually
has on offer, just as it spares us distraction by the busy neural traffic of
most of the brains activity.
Thus, for all practical purposes the deliverances of the reality model
provide a reliable guide to the physical world beyond the skull of the
physical body, in a manner similar to that of a situation room serving
a general staff during wartime, from which operations on faraway bat-
tlefields are conducted by remote control on the basis of the veridical
model assembled in the situation room itself (Lehar 2003). The corre-
sponding neural model's usefulness in that regard is the reason it exists,
if my account has any merit. And that usefulness extends beyond our
everyday life to our endeavors in every area of science so far devel-
oped, because those endeavors have been concerned with defining in
increasingly precise and deep ways the causal relations behind the sur-
face phenomena of our world. Our reality model is an asset in these as
in our other dealings with the world, because typically it reflects circum-
stances in the physical universe faithfully. For aspects of the world that
lie beyond our sensory capacities, suitable measuring instruments have
been devised to bring them within range of those capacities. Accordingly,
even a conceptual commitment to naive realism is perfectly compatible
with these scientific endeavors. Conceptions of the ontological status of
our experience of the world do not affect their outcomes, because they are
not concerned with the nature of our experience but with explaining the
world, regarding which our experience supplies reliable guidance in most
respects.
This is no longer the case, however, the moment the scientific searchlight
is turned to the nature of experience itself, as in a prospective science
of consciousness. Here, the ontological status of experience itself is the
principal question under investigation, along with its various character-
istics, such as the scope and organization of its potential contents, its
genesis, and its relation to the rest of the picture of reality science has
pieced together for us with its help. Now, suddenly and uniquely, adher-
ence to naive realism as a conceptual commitment, even in the form of a
lingering tacit influence, becomes an impediment and a stumbling block.
By its lights the world we experience is the physical universe itself rather
than a model of it. Such a stance seriously misconstrues the scope and
nature of the problem of consciousness, most directly by excluding from
its compass the presence of the world we see around us. When the lat-
ter is taken for granted as mind-independent physical reality rather than
recognized as a principal content of consciousness requiring explanation,
the problem of consciousness contracts to less than its full scope. Under
those circumstances, inquiry tends to identify consciousness with some
of its subdomains, contents, or aspects, such as thinking, subjectivity,
self-consciousness, an inner life, qualia, and the like.
Any such narrowing of the scope of the problem of consciousness
allows the primary task of accounting for the very existence of experience
itself to be bypassed and promotes attempts to account for the world's
experienced qualities (hence qualia) even before addressing the prior
question of why there is a world present in our experience at all, or
indeed why experience itself exists. Experienced qualities can be referred
to our inner life, the stirrings of thoughts in our heads and feelings in
our breasts, and so might seem exempt from the problem of the external
world. Yet our experience is not thus confined to our inner life. It extends
beyond it to encompass a wide and varied external world, bounded by the
horizon, the dome of the sky, and the ground on which we walk (Trehub
1991; Lehar 2003; Lopez 2006; Revonsuo 2006; Frith 2007; Velmans
2008). The objects and events of the world, whose attributes provide
many an occasion for the events of our inner life, are no less contents of
consciousness than the stirrings those attributes may occasion in us.
When consciousness is identified with our inner life the concept of
simulation or modeling tends to be used in the sense of the manifest
power of our imagination to create scenarios in the mind's eye for
purposes of planning or fantasy. This is not the sense in which the concept
of modelling has been used in this article, of course. The concept as used
here refers to neural synthesis of the entire framework and content of
our reality, including the rich and detailed world that surrounds us when
we open our eyes in the morning and which stays with us till we close
them at night. From the present point of view, our additional capacity
for imaginative simulation is derivative of this prior, massive, and more
basic reality modelling, as are the scenes enacted in our dreams.
The distinctions of the past few paragraphs are made by way of clar-
ification of the present perspective and should not be taken to imply
that these inner life topics lack interest or validity as objects of scien-
tific scrutiny. As contents of consciousness they provide worthy topics of
study in their own right, and we have already indicated reasons for their
experienced location inside our skin in the foregoing. A full account
of this inner domain is accordingly intimately tied up with the larger
issue of accounting for the fact and existence of experience itself, the
circumstances under which it arises, and the manner of its arrangement
where it is present, as in our own case.
In this chapter I have presented my approach to such questions, ques-
tions I take to define the primary subject matter of a prospective science
of consciousness. It may seem ironic that pursuing those questions with-
out heeding the alarms rung by naive realist predispositions should issue
in an account of sensory consciousness according to which its global for-
mat necessarily bears the stamp of naive realism. That, however, should
be a reason to take my account seriously, because that is one of the
basic attributes of our sensory consciousness that any valid account of its
nature must, in the end, explain. By the same token, the neural fiction
that provides our sensory experience with its naive realist format must
not be allowed to deceive us into thinking that the world that appears to
us in direct sensory experience is the physical universe itself rather than
a model of it.
A sound theory of consciousness therefore must abandon, in its turn
and on this point uniquely, trust in the deliverances of consciousness as
a guide to the realities we wish to understand. It must affirm, in other
words, the soundness of the fundamental tenet of philosophical idealism,
namely that the first person has direct access to contents of consciousness
alone, and to nothing but contents of consciousness. That tenet mandates
that the world we experience around us be included among the contents
of consciousness. Doing so keeps our conception of consciousness from
being confined to less than its actual scope, and so saves us from fatal
error.

REFERENCES
Achlioptas D., Naor A., and Peres Y. (2005). Rigorous location of phase transi-
tions in hard optimization problems. Nature 435:759764.
Adelson E.H. and Bergen J.R. (1991). The plenoptic function and the elements
of early vision. In Landy M. S. and Movshon J. A. (eds.) Computational
Models of Visual Processing. Cambridge, MA: MIT Press, pp. 120.
Altman J. S. and Kien J. (1989). New models for motor control. Neural Comput
1:173183.
Andersen R. A. and Mountcastle V. B. (1983). The influence of the angle of gaze
upon the excitability of the light-sensitive neurons of the posterior parietal
cortex. J Neurosci 3:532548.
Bernstein N. A. (1967). The Co-ordination and Regulation of Movements. Oxford:
Pergamon.
Brandt T., Dichgans J., and Koenig E. (1973). Differential effects of central
versus peripheral vision on egocentric and exocentric motion perception.
Exp Brain Res 16:476491.
Brooks, R.A. (1986) A robust layered control system for a mobile robot. IEEE J
Robotic Autom 2:1423.
Cabanac M. (1992). Pleasure: The common currency. J Theor Biol 155:173200.
Cabanac M. (1996). The place of behavior in physiology. In Fregly M. J. and
Blatteis C. (eds.) Handbook of Physiology, Section IV: Environmental Physiol-
ogy. Oxford University Press, pp. 15231536.
Cavanagh P., Hunt A. R., Afraz A., and Rolfs M. (2010). Visual stability based
on remapping of attention pointers. Trends Cogn Sci 14:147153.
Chang S. W. C., Papadimitriou C., and Snyder L. H. (2009). Using a compound
gain field to compute a reach plan. Neuron 64:744755.
Conant R. C. and Ashby W. R. (1970). Every good regulator of a system must
be a model of that system. Int J Syst Sci 1:8997.
Cowan N. (2001). The magical number 4 in short-term memory: A reconsider-
ation of mental storage capacity. Behav Brain Sci 24:87185.
38 Bjorn Merker

Cox P. H. (1999). An Initial Investigation of the Auditory Egocenter: Evidence for


a Cyclopean Ear. Doctoral Dissertation, North Carolina State University,
Raleigh, NC.
Damasio A. R. (1989).The brain binds entities and events by multiregional
activation from convergence zones. Neural Comput 1:123132.
Dean P., Porrill J., and Stone J. V. (2004). Visual awareness and the cere-
bellum: possible role of decorrelation control. Prog Brain Res 144:61
75.
Dehaene S., Naccache L., Le Clec H. G., Koechlin E., Mueller M., Dehaene-
Lambertz G., et al. (1998). Imaging unconscious semantic priming. Nature
395:597600.
Denton D. A., McKinley M. J., Farrell M., and Egan G. F. (2009). The role of
primordial emotions in the evolutionary origin of consciousness. Conscious
Cogn 18:500514.
Deubel H. and Schneider W. X. (1996). Saccade target selection and object
recognition: Evidence for a common attentional mechanism. Vision Res
36:18271837.
Eimer M. and Schlaghecken F. (1998). Effects of masked stimuli on motor acti-
vation: behavioral and electrophysiological evidence. J Exp Psychol Human
24:17371747.
Felleman D. J. and Van Essen D. C. (1991). Distributed hierarchical processing
in the primate cerebral cortex. Cereb Cortex 1:147.
Frith C. (2007). Making up the Mind: How the Brain Creates our Mental World.
London: Blackwell.
Frith C. D., Perry R., and Lumer E. (1999). The neural correlates of conscious
experience: An experimental framework. Trends Cogn Sci 3:105114.
Gaillard R., Del Cul A., Naccache L., Vinckier F., Cohen L., and Dehaene S.
(2006). Nonconscious semantic processing of emotional words modulates
conscious access. Proc Natl Acad Sci USA 103:75247529.
Gallistel C. B. (1999). Coordinate transformations in the genesis of directed
action. In Bly B. M. and Rumelhart D. E. (eds.) Cognitive Science. New
York: Academic Press, pp. 142.
Gorbet D. J. and Sergio L. E. (2009). The behavioural consequences of dissociat-
ing the spatial directions of eye and arm movements. Brain Res 1284:7788.
Harding D. E. (1961). On Having No Head: Zen and the Re-discovery of the
Obvious. London: Arkana (Penguin).
Helmholtz H. (1867). Handbuch der physiologischen Optik, Vol. 3. Leipzig: Leopold
Voss.
Henderson J. M., Brockmole J. R., and Gajewski D. A. (2008). Differential
detection of global luminance and contrast changes across saccades and
flickers during active scene perception. Vision Res 48:1629.
Hering E. (1879). Spatial Sense and Movements of the Eye. Trans. C. A. Radde,
1942. Baltimore, MD: American Academy of Optometry.
Hinton G. E. (1977). Relaxation and Its Role in Vision. Ph.D. thesis, University of
Edinburgh.
Howard I. P. and Templeton W. B. (1966). Human Spatial Orientation. New York:
John Wiley & Sons, Inc.
Body-world: Phenomenal contents of the reality model 39

Izard C. E. (1991). The Psychology of Emotions. New York: Plenum.


Izard C. E. (2007). Levels of emotion and levels of consciousness. Behav Brain
Sci 30:9698.
James W. (1890). Principles of Psychology. London: Macmillan.
Johansson R. S., Westling G., Backstrom R., and Flanagan J. R. (2001). Eye-hand
coordination in object manipulation. J Neurosci 21:69176932.
Kawato M. (1997). Bi-directional theory approach to consciousness. In Ito M.
(ed.) Cognition, Computation and Consciousness. Oxford: Clarendon Press,
pp. 233248.
Kawato M. (1999). Internal models for motor control and trajectory planning.
Curr Opin Neurobiol 9:718727.
Kawato M., Hayakawa H., and Inui T. (1993). A forward-inverse optics model of
reciprocal connections between visual cortical areas. Network-Comp Neural
4:415422.
Kirby S. (2002) Learning, bottlenecks and the evolution of recursive syntax. In
Briscoe T. (ed.) Linguistic Evolution Through Language Acquisition: Formal
and computational models. Cambridge University Press, pp. 173204.
Kording K. P. and Wolpert D. M. (2006) Bayesian decision theory in sensorimo-
tor control. Trends Cogn Sci 10:319326.
Lau H. C. and Passingham R. E. (2007). Unconscious activation of the cognitive
control system in the human prefrontal cortex. J Neurosci 27:58055811.
Lehar S. (2003). The World in Your Head: A Gestalt View of the Mechanism of
Conscious Experience. Mahwah, NJ: Lawrence Erlbaum.
Lennie P. (1998). Single units and visual cortical organization. Perception 27:889
935.
Liotti M., Brannan S., Egan G., Shade R., Madden L., Abplanalp B., et al.
(2001). Brain responses associated with consciousness of breathlessness (air
hunger). P Natl Acad Sci USA 98:20352040.
Lopez L. C. S. (2006). The phenomenal world inside the noumenal head of
the giant: Linking the biological evolution of consciousness with the virtual
reality metaphor. Revista Eletronica Informacao e Cognicao 5:204228.
Luck S. J., Vogel, E. K., and Shapiro K. L. (1996). Word meanings can be
accessed but not reported during the attentional blink. Nature 383:616
618.
Mach E. (1897). Contributions to the Analysis of the Sensations. La Salle, IL: Open
Court.
Mack A. (2003) Inattentional blindness: Looking without seeing. Curr Dir Psychol
Sci 12:180184.
Mandler G. (1975). Memory storage and retrieval: Some limits on the reach
of attention and consciousness. In Rabbitt P. M. A. and Dornic S. (eds.)
Attention and Performance V. New York: Academic Press, pp. 499516.
Marigold D. S. and Patla A. E. (2007). Gaze fixation patterns for negotiating
complex ground terrain. Neuroscience 144:302313.
Marigold D. S. (2008). Role of peripheral visual cues in online visual guidance
of locomotion. Exerc Sport Sci R 36:145151.
Masino T. (1992). Brainstem control of orienting movements: Intrinsic coordi-
nate system and underlying circuitry. Brain Behav Evolut 40:98111.
40 Bjorn Merker

McFarland D. J. and Sibly R. M. (1975). The behavioural final common path.


Phil Trans Roy Soc Lon B 270:265293.
McFarland D. J. (1977). Decision making in animals. Nature 269:1521.
McGuire L. M. M. and Sabes N. (2009). Sensory transformations and the
use of multiple reference frames for reach planning. Nat Neurosci 12:1056
1061.
Merker B. (1997). The common denominator of conscious states: Implications for the biology of consciousness. URL: http://cogprints.org/179/1/COGCONSC.TXT (accessed March 1, 2013).
Merker B. (2004). Cortex, countercurrent context, and dimensional integration of lifetime memory. Cortex 40:559–576.
Merker B. (2005). The liabilities of mobility: A selection pressure for the transition to consciousness in animal evolution. Conscious Cogn 14:89–114.
Merker B. (2007a). Consciousness without a cerebral cortex: A challenge for neuroscience and medicine. Target article and peer commentary. Behav Brain Sci 30:63–110.
Merker B. (2007b). Grounding consciousness: The mesodiencephalon as thalamocortical base. Author's response. Behav Brain Sci 30:110–134.
Merker B. (2012). From probabilities to percepts: A subcortical "global best estimate buffer" as locus of phenomenal experience. In Edelman S., Fekete T., and Sachs N. (eds.) Being in Time: Dynamical Models of Phenomenal Experience. Amsterdam: John Benjamins.
Merker B. (2012). Naturalizing the first person perspective founds a paradigm for the conscious state. In Harnad S. (ed.) Alan Turing Memorial Summer Institute on the Evolution and Function of Consciousness, Montreal, June 29–July 11, 2012.
Mézard M. and Mora T. (2009). Constraint satisfaction problems and neural networks: A statistical physics perspective. J Physiology-Paris 103:107–113.
Miller G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychol Rev 63:81–97.
Mitchell H. B. (2007). Multi-sensor Data Fusion: An Introduction. Berlin: Springer-Verlag.
Moll M. and Miikkulainen R. (1997). Convergence-zone episodic memory: Analysis and simulations. Neural Networks 10:1017–1036.
Morsella E. (2005). The function of phenomenal states: Supramodular interaction theory. Psychol Rev 112:1000–1021.
Morsella E. and Bargh J. A. (2007). Supracortical consciousness: Insight from temporal dynamics, processing-content, and olfaction. Behav Brain Sci 30:100.
Morsella E., Berger C. C., and Krieger S. C. (2010). Cognitive and neural components of the phenomenology of agency. Neurocase 15:1–22.
Most S., Scholl B., Clifford E., and Simons D. (2005). What you see is what you set: Sustained inattentional blindness and the capture of awareness. Psychol Rev 112:217–242.
Neelon M. F., Brungart D. S., and Simpson B. D. (2004). The isoazimuthal perception of sounds across distance: A preliminary investigation into the location of the audio egocenter. J Neurosci 24:7640–7647.
Pearl J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, 2nd edn. San Francisco, CA: Morgan Kaufmann.
Philipona D., O'Regan J. K., Nadal J.-P., and Coenen O. J.-M. D. (2004). Perception of the structure of the physical world using multimodal unknown sensors and effectors. Adv Neur Inf 16:945–952.
Rees G. (2007). Neural correlates of the contents of visual awareness in humans. Phil Trans Roy Soc Lon B 362:877–886.
Rensink R. A., O'Regan J. K., and Clark J. J. (1997). To see or not to see: The need for attention to perceive changes in scenes. Psychol Sci 8:368–373.
Revonsuo A. (2006). Inner Presence: Consciousness as a Biological Phenomenon. Cambridge, MA: MIT Press.
Richards W., Seung H. S., and Pickard G. (2006). Neural voting machines. Neural Networks 19:1161–1167.
Roelofs C. O. (1959). Considerations on the visual egocenter. Acta Psychol 16:226–234.
Rojer A. S. and Schwartz E. L. (1990). Design considerations for a space-variant visual sensor with complex logarithmic geometry. In Proceedings of the 10th International Conference on Pattern Recognition. Washington, DC: IEEE Computer Society Press, pp. 278–285.
Rumelhart D. E., Smolensky P., McClelland J. L., and Hinton G. E. (1986). Schemata and sequential thought processes in PDP models. In Rumelhart D. E. and McClelland J. L. (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 2: Psychological and Biological Models. Computational Models of Cognition and Perception Series. Cambridge, MA: MIT Press, pp. 7–57.
Sachs E. (1967). Dissociation of learning in rats and its similarities to dissociative states in man. In Zubin J. and Hunt H. (eds.) Comparative Psychopathology: Animal and Human. New York: Grune and Stratton, pp. 249–304.
Schacter D. L. (1989). On the relations between memory and consciousness: Dissociable interactions and conscious experience. In Roediger III H. L. and Craik F. I. M. (eds.) Varieties of Memory and Consciousness: Essays in Honour of Endel Tulving. Mahwah, NJ: Lawrence Erlbaum, pp. 356–390.
Schacter D. L. (1990). Toward a cognitive neuropsychology of awareness: Implicit knowledge and anosognosia. J Clin Exp Neuropsyc 12:155–178.
Schopenhauer A. (1844/1958). The World as Will and Representation, 2nd edn. Trans. E. F. J. Payne. New York: Dover.
Shafer G. (1996). The Art of Causal Conjecture. Cambridge, MA: MIT Press.
Sibly R. M. and McFarland D. J. (1974). A state-space approach to motivation. In McFarland D. J. (ed.) Motivational Control Systems Analysis. New York: Academic Press, pp. 213–250.
Simons D. J. and Ambinder M. S. (2005). Change blindness: Theory and consequences. Curr Dir Psychol Sci 14:44–48.
Singer W., Zihl J., and Pöppel E. (1977). Subcortical control of visual thresholds in humans: Evidence for modality specific and retinotopically organized mechanisms of selective attention. Exp Brain Res 29:173–190.
Smith M. A. (1997). Simulating Multiplicative Neural Processes in Non-orthogonal Coordinate Systems: A 3-D Tensor Model of the VOR. Master of Arts Thesis, Graduate Program in Psychology, York University, North York, Ontario, Canada.
Sparks D. L. (1999). Conceptual issues related to the role of the superior colliculus in the control of gaze. Curr Opin Neurobiol 9:698–707.
Svarverud E., Gilson S. J., and Glennerster A. (2010). Cue combination for 3D location judgements. J Vision 10:1–13.
Thaler L. and Goodale M. A. (2010). Beyond distance and direction: The brain represents target locations non-metrically. J Vision 10:1–27.
Trehub A. (1991). The Cognitive Brain. Cambridge, MA: MIT Press.
Trevarthen C. (1968). Two mechanisms of vision in primates. Psychol Forsch 31:229–337.
Tsang E. P. K. (1993). Foundations of Constraint Satisfaction. London: Academic Press.
Turatto M., Bettella S., Umiltà C., and Bridgeman B. (2003). Perceptual conditions necessary to induce change blindness. Vis Cogn 10:233–255.
Velmans M. (2008). Reflexive monism. J Consciousness Stud 15:5–50.
van Gaal S., Ridderinkhof K. R., Fahrenfort J. J., Scholte H. S., and Lamme V. A. F. (2008). Frontal cortex mediates unconsciously triggered inhibitory control. J Neurosci 28:8053–8062.
von Holst E. and Mittelstaedt H. (1950). Das Reafferenzprincip (Wechselwirkungen zwischen Zentralnervensystem und Peripherie). Naturwissenschaften 37:464–476.
Watson A. B. (1987). Efficiency of a model human image code. J Opt Soc Am A 4:2401–2417.
Webb B. (2004). Neural mechanisms for prediction: Do insects have forward models? Trends Neurosci 27:278–282.
Witkin A. P. (1981). Recovering surface shape and orientation from texture. Artif Intel 17:17–45.
Wolfe J. M. (1999). Inattentional amnesia. In Coltheart V. (ed.) Fleeting Memories. Cambridge, MA: MIT Press, pp. 71–94.
Wolpert D. M., Ghahramani Z., and Jordan M. I. (1995). An internal model for sensorimotor integration. Science 269:1880–1882.
Zettel J. L., Holbeche A., McIlroy W. E., and Maki B. E. (2005). Redirection of gaze and switching of attention during rapid stepping reactions evoked by unpredictable postural perturbation. Exp Brain Res 165:392–401.
2 Homing in on the brain mechanisms linked to consciousness: The buffer of the perception-and-action interface

Christine A. Godwin, Adam Gazzaley, and Ezequiel Morsella

We gratefully acknowledge the continued assistance of the neurologist Stephen Krieger.
2.1 Introduction
2.2 Homing in on the neuroanatomical loci constituting conscious states
2.3 Homing in on the component mental processes associated with conscious states
2.3.1 Supramodular interaction theory
2.3.2 The monogenesis hypothesis
2.3.3 Consciousness is associated with limited direct cognitive control
2.4 Homing in on the mental representations associated with conscious states
2.5 A new synthesis: The buffer of the perception-and-action interface (BPAI)
2.6 Conclusion

2.1 Introduction
Discovering the events in the nervous system that are responsible for the
instantiation of conscious states remains one of the most daunting chal-
lenges in science (Crick 1995; Koch 2004). This puzzle is often ranked
as one of the top unanswered scientific questions (e.g., Roach 2005).
To the detriment of the scientist, the problem is unfortunately far more
difficult than what non-experts may surmise: Investigators focusing on
the problem are not only incapable of having an inkling regarding how
something like consciousness could arise from something like the brain,
they cannot even begin to fathom how something like consciousness
could emerge from any set of real or hypothetical circumstances what-
soever. When speaking about conscious states, we are referring to the
most basic form of consciousness, the kind falling under the rubrics of
subjective experience, qualia, sentience, basic awareness, and
phenomenal state. This basic form of consciousness has been defined
by Nagel (1974), who claimed that an organism has basic consciousness

We gratefully acknowledge the continued assistance of the neurologist Stephen Krieger.

43
44 Christine A. Godwin, Adam Gazzaley, and Ezequiel Morsella

if there is something it is like to be that organism something it is like, for


example, to be human and experience pain, love, or breathlessness.1 To
date, we do not have a single clue regarding how something unconscious could be turned into something that is conscious, that is, into something for which there is something it is like to be that thing.
Despite these challenges, some progress regarding this "hard problem" (Chalmers 1996) has been made from approaches that are brutally reductionistic (Morsella et al. 2010). For example, based on the
empirical and theoretical developments of the last four decades, there is a
growing consensus in neuroscience, neurology, and psychology that con-
scious states are associated with only a subset of all of the processes/events
and regions of the nervous system (Penfield and Jasper 1954; Logothetis
and Schall 1989; Weiskrantz 1992; Crick 1995; Gray 1995; Grossberg
1999; Zeki and Bartels 1999; Dehaene and Naccache 2001; Crick and
Koch 2003; Gray 2004; Koch 2004; Koch and Greenfield 2007; Merker
2007) and that much of nervous function is unconscious2 (reviewed next
and also in Morsella and Bargh 2011). It seems that consciousness is
associated with nervous events that are qualitatively different from their
unconscious counterparts in the nervous system, in terms of their func-
tion, physical make-up/organization, or mode of activity (Ojemann 1986;
Coenen 1998; Llinás et al. 1998; Edelman and Tononi 2000; Goodale
and Milner 2004; Gray 2004; Morsella 2005; Merker 2007; Morsella and
Bargh 2010a). It should be noted that though many researchers today
believe that no single anatomical region is necessary for all kinds of con-
sciousness, there is some evidence that there may be a single region/zone
that is necessary for all kinds of consciousness; see Arendes (1994) and
Merker (2007).
In the spirit of this reductionistic approach, one can home in on both
the unique functions and nervous events/organizations (e.g., circuits)
associated with conscious states (e.g., Morsella et al. 2010). This is the
primary burden of the current chapter. Specifically, our aim is to
(1) home in on the neuroanatomical loci constituting conscious states (Section 2.2), (2) home in on the basic component mental processes associated with conscious states (Section 2.3), and (3) home in on the mental representations (the tokens of mental operations) associated with conscious states (Section 2.4). In a funneled approach, each section attempts to home in on the correlates of consciousness at a more micro level than the previous section. As is evident later, the literature reviewed in the three sections reveals that conscious states are restricted to only a subset of nervous and mental processes. We conclude our chapter with a simplified framework, the Buffer of the Perception-and-Action Interface (BPAI), that attempts to present in schematic form the modal findings (but not the exhaustive findings) and conclusions about the nature of conscious states in the nervous system.

1 Similarly, Block (1995) claimed, "the phenomenally conscious aspect of a state is what it is like to be in that state" (p. 227). For good reason, some theoreticians argue that the term should be "conscious experience" instead of "conscious state." We will continue to use the latter only because it will make the terminology consistent with that of previous publications.
2 The unconscious mind comprises information-processing events in the nervous system that, though capable of systematically influencing behavior, cognition, motivation, and emotion, do not influence the organism's subjective experience in such a way that the organism can directly detect, understand, or report the occurrence or nature of these events (Morsella and Bargh 2010b).

2.2 Homing in on the neuroanatomical loci constituting conscious states
In order to home in on the neural circuit(s) of consciousness, one poten-
tial first step is to identify regions whose nonparticipation (because of
lesions, ablation, extirpation, or other forms of inactivation such as deac-
tivation by transcranial magnetic stimulation) does not render the ner-
vous system incapable of still exhibiting an identifiable form of basic
consciousness. As reviewed in Morsella et al. (2010), substantial evi-
dence reveals that conscious states of some kind persist with the non-
participation of several regions in the nervous system: Cerebellum (see
Schmahmann 1998), amygdala (see LeDoux 1996; Anderson and Phelps
2002), and hippocampus (Milner 1966). In addition, investigations of
split-brain patients (Wolford et al. 2004), binocular rivalry3 (Logo-
thetis and Schall 1989), and split-brain patients experiencing rivalry dur-
ing a binocular rivalry experiment (OShea and Corballis 2005) suggest
that an identifiable conscious state of some kind will survive following
the nonparticipation of the non-dominant (usually right) cerebral cortex
or of the commissures linking the two cortices. Less definitive evidence
suggests that a conscious state of some sort can persist with the nonpar-
ticipation of the basal ganglia (Ho et al. 1993; Bellebaum et al. 2008),
mammillary bodies (Tanaka et al. 1997; Duprez et al. 2005), and insula.

3 In binocular rivalry (Logothetis and Schall 1989), an observer is presented with different
visual stimuli to each eye (e.g., an image of a house in one eye and of a face in the
other). It might seem reasonable that, faced with such stimuli, one would perceive an
image combining both objects (a house overlapping a face). Surprisingly, however, an
observer experiences seeing only one object at a time (a house and then a face), even
though both images are always present. At any moment, the observer is unaware of
the computational processes leading to this outcome; the perceptual conflict and its
resolution are unconscious.
Regarding the insula, when delivering a speech at the 2011 Association for Psychological Science Convention, Antonio Damasio reported that a patient with a void in his insular regions was found to be "as conscious as anybody in this room" (Damasio 2011, as cited in Voss 2011; see also Damasio 2010).
Controversy continues to surround the hypothesis that cortical mat-
ter is necessary for consciousness. Some researchers have gone so far
as to propose that, while the cortex may elaborate the contents of con-
sciousness, consciousness is primarily a function of subcortical structures
(Penfield and Jasper 1954; Merker 2007). Penfield and Jasper (1954)
based this hypothesis in part on their extensive studies involving both the
direct stimulation of, and ablation of, cortical regions. Based on these
and other findings (e.g., observations of patients with anencephaly), it has
been suggested that consciousness is associated with subcortical, mesen-
cephalic areas (e.g., the zona incerta; Merker 2007). This has led to the
cortical-subcortical controversy (see Morsella et al. 2011). The role of sub-
cortical structures in the production of consciousness, and the amount
of cortex that may be necessary for the production of consciousness,
remains to be elucidated. Data from studies on patients with profound
disorders of awareness (e.g., vegetative state) suggest that signals from
frontal cortex may be critical for the instantiation of any conscious state
(Boly et al. 2011). However, the psychophysiology of dream conscious-
ness, which involves prefrontal deactivations (Muzur et al. 2002), suggests
that, although the prefrontal lobes are involved in cognitive control (see
review in Miller 2007), they may not be essential for the generation of
basic consciousness.
There is also evidence implicating not frontal but parietal areas as
the primary region responsible for conscious states. (Relevant to this
hypothesis is research on the phenomenon of sensory neglect; see Heilman
et al. 2003.) For example, direct electrical stimulation of parietal areas
of the brain gives rise to the subjectively experienced will to perform an
action, and increased activation makes subjects believe that they actually
executed the corresponding action, even though no action was performed
(Desmurget et al. 2009; Desmurget and Sirigu 2010). Activating motor
areas (e.g., in premotor areas) can lead to the actual action, but subjects
will believe that they did not perform any action (see also Fried et al.
1991). (These parietal-related findings are consistent with the Sensorium
Hypothesis presented later.)
To illuminate these controversial issues and further eliminate brain
regions not necessary for consciousness, and following up on Morsella
et al. (2010), we have focused our attention on the olfactory system
(see also Keller 2011), a phylogenetically old system whose circuitry
appears to be more tractable and less widespread in the brain than that
of, say, vision or higher-level processing such as music perception. Unlike
other sensory modalities, afference from the olfactory sensory system
bypasses the thalamic first-order relay neurons and, after processing
in the olfactory bulb, directly influences the olfactory (piriform) cortex
(Haberly 1998). Specifically, afferents from the olfactory sensory system
bypass the thalamus and directly target regions of the ipsilateral cortex
(Shepherd and Greer 1998; Tham et al. 2009). Importantly, this is not to say that a conscious brain experiencing only olfaction does not
require the thalamus: in post-cortical stages of processing, the mediodor-
sal nucleus of the thalamus does receive inputs from cortical regions that
are involved in olfactory processing (Haberly 1998).
Because olfactory afferents bypass the relay thalamus, one can con-
clude that, at least for olfactory experiences and under the assumptions
described in the following, a conscious state of some sort need not include
the first-order thalamic nuclei (Morsella et al. 2010). Accordingly, pre-
vious findings suggest that the olfactory bulb, which has been described
as being functionally equivalent to the first-order relay of the thalamus
(Kay and Sherman 2007), is not required for endogenic olfactory con-
sciousness (Mizobuchi et al. 1999; Henkin et al. 2000). Specifically,
knowledge regarding the neural correlates of conscious olfactory percep-
tions, imagery, and hallucinations (Markert et al. 1993; Mizobuchi et al.
1999; Leopold 2002), as revealed by direct stimulation of the brain (Pen-
field and Jasper 1954), neuroimaging (Henkin et al. 2000), and lesion
studies (Mizobuchi et al. 1999), suggests that endogenic, olfactory con-
sciousness does not require the olfactory bulb. In addition, it seems that
patients can still experience explicit, olfactory memories following bilat-
eral olfactory bulbectomies, though the literature is in want of systematic
studies regarding this important observation.
Regarding the mediodorsal thalamic nucleus (MDNT), although it
likely plays a significant role in olfactory discrimination (Eichenbaum
et al. 1980; Slotnick and Risser 1990; Tham et al. 2011), identification,
and hedonics (Sela et al. 2009), as well as in more general cognitive pro-
cesses, including memory (Markowitsch 1982), learning (Mitchell et al.
2007), and attentional processes (Tham et al. 2009; Tham et al. 2011),
no study we are aware of has found a lack of basic conscious olfactory
experience resulting from lesions of the MDNT (but see theorizing in
Plailly et al. 2008). Regarding second-order thalamic relays such as the
MDNT, one must keep in mind that they seem to be similar, with respect to their internal circuitry, to first-order relays (Sherman and Guillery
2006), which are quite simplistic compared to, say, a cortical column.
Nevertheless, because to date there is no strong theorizing regarding the
kind of circuitry that a conscious state would require, and because so
little is known about all aspects of thalamic processing, at this time one
cannot rule out that thalamic processes can constitute a conscious state
(see strong evidence in Merker 2007).
If the cortex is required for consciousness, then lesions of the orbitofrontal cortex (OFC)
should result in a lack of conscious olfactory experience. Indeed,
Cicerone and Tanenbaum (1997) observed complete anosmia in a patient
with a lesion to the left orbital gyrus of the frontal lobe. Additionally,
Li et al. (2010) reported a case study of a patient with a right OFC
lesion who experienced complete anosmia. Despite the patient's complete lack of conscious olfactory experience, neural activity and autonomic responses showed a robust sense of "blind smell" (unconscious olfac-
tory processing that influences behavior; Sobel et al. 1999). This sug-
gests that, while many aspects of olfaction can occur unconsciously, the
OFC is necessary for conscious olfactory experience. Consistent with this
cortical account of consciousness, conscious aspects of odor discrimina-
tion depend primarily on the activities of the frontal and orbitofrontal
cortices (Buck 2000); according to Barr and Kiernan (1993), olfactory
consciousness depends on the piriform cortex. However, not all lesions
of the OFC have resulted in anosmia: Zatorre and Jones-Gotman (1991)
reported a study in which orbitofrontal lesions resulted in severe deficits,
yet all patients demonstrated normal detection thresholds. Investigations
on the neural correlates of phantosmias and explicit versus implicit olfac-
tory processing may further illuminate the circuits required for olfactory
consciousness. Regarding the former, it has proven difficult to identify
the minimal region(s) whose stimulation is sufficient to induce olfactory
hallucinations (Mizobuchi et al. 1999).
A critical empirical question that should not be ignored is whether
the olfactory system can generate some form of consciousness by itself
(a "microconsciousness"; Zeki and Bartels 1999) or whether olfac-
tory consciousness requires interactions with other, traditionally non-
olfactory regions of the brain (Cooney and Gazzaniga 2003). For
instance, perhaps one becomes conscious of olfactory percepts only when
the representations cross-talk with other systems (Morsella 2005) or
when they influence processes that are motor (Mainland and Sobel 2006)
or semantic-linguistic (Herz 2003). More generally, it may be that, to
instantiate a conscious state, the mode of interaction among regions is as
important as the nature and loci of the regions activated (Buzsáki 2006).
For example, the presence or lack of interregional synchrony leads to
different cognitive, behavioral, and subjectively experienced outcomes
(Hummel and Gerloff 2005). (See review of neuronal communication
through coherence in Fries 2005.) Regarding conscious states, during
binocular rivalry, the neural processing of the conscious percept seems
to require interactive activations between both perceptual brain regions
and motor-related processes in frontal cortex (Doesburg et al. 2009).
Perhaps the instantiation of conscious states requires certain kinds of reentrant processes to take place in the brain (cf. Llinás et al. 1998;
Grossberg 1999; Di Lollo et al. 2000; Tong 2003).
In conclusion, it is evident that the field has yet to reach a consensus regard-
ing the neuroanatomical loci of the nervous events constituting con-
sciousness. In some ways, more progress has been made when attempting
to home in on the component processes associated with conscious states.
This is the topic of our next section.

2.3 Homing in on the component mental processes associated with conscious states
The integration consensus (Tononi and Edelman 1988; Damasio 1989;
Freeman 1991; Baars 1998; Zeki and Bartels 1999; Dehaene and Nac-
cache 2001; Llinás and Ribary 2001; Varela et al. 2001; Clark 2002;
Ortinski and Meador 2004; Sergent and Dehaene 2004; Del Cul et al.
2007; Doesburg et al. 2009; Uhlhaas et al. 2009; Boly et al. 2011) pro-
poses that, in the service of adaptive action, conscious states integrate
neural activities and information-processing structures that would oth-
erwise be independent (see review in Baars 2005). For example, when
actions are decoupled from consciousness (e.g., in neurological disor-
ders), the actions often appear impulsive or inappropriate, as if they
are not adequately influenced by the kinds of information by which they
should be influenced (Morsella and Bargh 2011). Most theoretical frame-
works in the integration consensus speak of conscious information as
being available "globally" in some kind of "workspace" (Baars 2002). Con-
sistent with the integration consensus, conscious actions involve more
widespread activations in the brain than their unconscious counterparts
(Ortinski and Meador 2004; Morsella and Bargh 2011).
One limitation of the integration consensus is that it fails to spec-
ify which kinds of integration require consciousness and which kinds
do not. For example, conscious processing seems unnecessary for inte-
grations across different sensory modalities (e.g., as in feature bind-
ing, intersensory conflicts, and multi-modal integration) or integrations
involving smooth muscle effectors (e.g., integrations in the pupillary
reflex; Morsella et al. 2009a). These integrations/conflicts can transpire
unconsciously. In contrast, people tend to be aware of some conflicts, as
when one holds one's breath or experiences an approach-approach con-
flict (Lewin 1935; Miller 1959). Such conscious conflicts (Morsella 2005)
involve competition for control of the skeletal muscle output system and
are triggered by incompatible skeletomotor plans, as when one holds
one's breath while underwater, suppresses uttering something, or inhibits
a prepotent response in a laboratory response interference paradigm (e.g.,
the Stroop and flanker tasks; Stroop 1935; Eriksen and Eriksen 1974).
(In the Stroop task, one must name the color in which a word is written.
When the word and color are incongruous [e.g., RED in blue], response
conflict leads to interference [e.g., increased response times]. When the
color matches the word [e.g., RED in red] or is presented on a neutral
stimulus [e.g., XXXX], there is little or no interference.)
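The competition logic behind such interference effects can be conveyed with a toy computational sketch. The following Python fragment is our illustration only, loosely in the spirit of connectionist accounts of the Stroop task (e.g., Cohen et al. 1990, cited later in this chapter); the pathway rates, noise level, and threshold are arbitrary assumptions rather than parameters of any published model.

    import random

    def stroop_trial(congruent: bool, threshold: float = 1.0) -> int:
        """Simulate one color-naming trial; return RT in arbitrary time steps."""
        evidence = 0.0      # accumulated evidence for the color-naming response
        color_rate = 0.04   # task-relevant pathway (slower)
        word_rate = 0.06    # word-reading pathway (faster, uninstructed)
        t = 0
        while evidence < threshold:
            t += 1
            evidence += color_rate + random.gauss(0.0, 0.01)
            # The word pathway helps on congruent trials and competes
            # (drains evidence) on incongruent trials.
            evidence += word_rate if congruent else -0.5 * word_rate
        return t

    for label, congruent in [("congruent", True), ("incongruent", False)]:
        mean_rt = sum(stroop_trial(congruent) for _ in range(500)) / 500
        print(f"mean RT, {label}: {mean_rt:.1f} steps")

On this scheme, interference arises not at a perceptual stage but from two response tendencies converging on a single output channel, which is the point of the skeletomotor examples above.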

2.3.1 Supramodular interaction theory


On the basis of this and other evidence, Supramodular Interaction The-
ory (SIT) proposes that the primary function of conscious states is to
integrate information, but only certain kinds of information: the kinds
involving incompatible skeletomotor intentions for adaptive action (e.g.,
holding ones breath). From this standpoint, these states are necessary,
not to integrate perceptual-level processes (as in feature binding or inter-
sensory conflicts), but to integrate conflicting action-goal inclinations
toward the skeletal muscle system, as captured by the principle of Parallel
Responses into Skeletal Muscle (PRISM; Morsella 2005). SIT proposes
that, in the nervous system, there are three distinct kinds of integra-
tion or binding (Morsella and Bargh 2011). Perceptual binding (or
afference binding) is the binding of perceptual processes and representa-
tions. This occurs in feature binding (e.g., the binding of shape to color;
Zeki and Bartels 1999) and intersensory binding, as in the McGurk
effect (McGurk and MacDonald 1976). (This effect involves interac-
tions between visual and auditory processes: An observer views a speaker
mouthing "ga" while presented with the sound "ba." Surprisingly, the observer is unaware of any intersensory interaction, perceiving only "da."
See neural evidence for this effect in Nath and Beauchamp [2012].) Affer-
ence binding can occur unconsciously. It should be noted that, though
the integrative process involved in afference binding can be mediated
unconsciously, consciousness is often coupled with the output of the
process, for example, the percept "da" in the McGurk effect.
Another form of binding, linking perceptual processing to action/motor
processing, is known as efference binding (Haggard et al. 2002). This kind
of stimulus-response (S-R) binding allows one to press a button when
presented with a cue. Research has shown that responding on the basis
of efference binding can occur unconsciously. For example, Taylor and
McCloskey (1990, 1996) demonstrated that, in a choice RT task, participants could select the correct motor response (one of two button presses)
when confronted with subliminal stimuli (Fehrer and Biederman 1962;
Fehrer and Raab 1962; Hallett 2007). More commonly, it can also be
mediated unconsciously in actions such as reflexive pain withdrawal or
reflexive inhalation. The third kind of binding, efference-efference binding,
occurs when two streams of efference binding are trying to influence
skeletomotor action simultaneously (Morsella and Bargh 2011). This
occurs when one holds one's breath, suppresses uttering something, sup-
presses a prepotent response in a response interference paradigm such as
the Stroop task, or voluntarily breathes faster for some reward. (The last
example illustrates that not all cases involve suppression.)
According to SIT, it is the instantiation of conflicting efference-
efference binding that requires consciousness. Consciousness can be
construed as the crosstalk medium that allows conflicting actional
processes to influence action collectively, leading to integrated actions
(Morsella and Bargh 2011) such as holding one's breath. Absent con-
sciousness, behavior can be influenced by only one of the efference
streams, leading to un-integrated actions (Morsella and Bargh 2011) such
as unconsciously inhaling while underwater, pressing a button when
confronted with a subliminal stimulus, or, more commonly, reflexively
removing one's hand from a hot object. The form of integration afforded
by consciousness involves high-level information that can be polysen-
sory and that occurs at a stage of processing beyond that of the tradi-
tional Fodorian module (Fodor 1983), hence the term supramodular
(Morsella 2005).
In summary, the difference between unconscious action and conscious
action is that the former is always a case of un-integrated action, and the
latter can be a case of integrated action. Integrated action occurs when two
(or more) action plans that could normally influence behavior on their own
(when existing at that level of activation) are simultaneously co-activated and
trying to influence the same skeletal muscle effector (Morsella and Bargh
Thus, integrated action occurs when one holds one's breath,
refrains from dropping a hot dish, suppresses the urge to scratch an itch,
suppresses a pre-potent response in a laboratory paradigm, or makes
oneself breathe faster.
Regarding the skeletal muscle effector system, one must consider that,
to a degree greater than that of any other effector system (e.g., smooth
muscle), distinct regions/systems of the brain are trying to control it in
different and often opposing ways. As mentioned earlier, the skeletal mus-
cle output system is akin to a single steering wheel that is controlled by
multiple agentic systems. Each system has its peculiar operating prin-
ciples and phylogenetic origins. Most effector systems do not suffer
from this particular kind of multi-determined guidance. Although sim-
ple motor acts suffer from the "degrees of freedom problem," because
there are countless ways to instantiate a motor act such as grasping a
handle (Rosenbaum 2002), action goal selection (e.g., what action goal to
implement next) suffers from this problem to a greater extent (Morsella
and Bargh 2010a). In action goal selection the challenge is met, not
by unconscious motor algorithms (as in the case of motor programming;
Rosenbaum 2002), but by the ability of conscious states to crosstalk infor-
mation and constrain what we do by having the inclinations of multiple
systems constrain and curb skeletomotor output: one system protests
one exploratory act (e.g., touching a flame), while another reinforces
another act (e.g., eating something sweet).
It has been known since at least the nineteenth century that, though
often functioning unconsciously (as in the frequent actions of breath-
ing, blinking, and postural shifting), skeletal muscle is the only bodily
effector that can be consciously controlled, but why this is so has never
been addressed theoretically. SIT introduces a systematic reinterpreta-
tion of this age-old fact: skeletomotor actions are at times consciously
mediated because they are directed by multiple, encapsulated systems
that, when in conflict, require conscious states to crosstalk and yield
adaptive action. Although identifying still higher-level systems is beyond
the purview of SIT, PRISM correctly predicts that certain aspects of the
expression (or suppression) of emotions (e.g., aggression, affection, dis-
gust, and so forth), reproductive behaviors, parental care, and addiction-
related behaviors should be coupled with conscious states, for the action
tendencies of such processes may compromise skeletal muscle plans (of
other systems).
In support of SIT, experiments have revealed that incompatible skele-
tomotor intentions (e.g., to point right and left, to inhale and not inhale)
do produce strong, systematic intrusions into consciousness, but no
such changes accompany smooth muscle conflicts or conflicts occur-
ring at perceptual stages of processing (e.g., intersensory processing; see
meta-analysis of evidence in Morsella et al. 2011). Accordingly, of the
many conditions in interference paradigms, the strongest perturbations
in consciousness (e.g., urges to err) are found in conditions involving
the activation of incompatible action plans (Morsella et al. 2009a,d).
Conversely, when distinct processes lead to harmonious action plans, as
when a congruent Stroop stimulus activates harmonious word-reading
and color-naming plans (e.g., RED is presented in red), there are lit-
tle such perturbations in consciousness, and participants may even be
unaware that more than one cognitive process led to a particular overt
action plan (e.g., uttering "red"). This phenomenon, called "synchrony blindness" (Molapour et al. 2011), is perhaps more striking in the con-
gruent (pro-saccade) condition of the anti-saccade task, in which dis-
tinct brain regions/processes indicate that the eyes should move in the
same direction (see Morsella et al. 2011). Regarding the Stroop data,
after carefully reviewing the behavioral and psychophysiological evidence,
MacLeod and MacDonald (2000) concluded that participants often do
read the word inadvertently in the congruent condition but that they
may be unaware of this process: "The experimenter (perhaps the participant as well) cannot discriminate which dimension gave rise to the response on a given congruent trial" (p. 386; see also Eidels et al. 2010;
Roelofs 2010). For additional evidence regarding synchrony blindness in
the Stroop task, see Molapour et al. (2011).
In summary, SIT is unique in its ability to explain the effects in con-
sciousness of (1) intersensory conflicts, (2) smooth muscle conflicts,
(3) synchrony blindness, and (4) conflicts from action plans (e.g., hold-
ing ones breath). In synthesis, the SIT framework has been success-
ful in homing in on the component processes of action production
that are associated with consciousness. Based on the crosstalk func-
tion of the phenomenal state, integrated action-goal selection can take into
account the "votes" of the often conflicting component response sys-
tems (Morsella 2005), as when one system wants to approach a stimulus
and another system wants to avoid the stimulus. It has been proposed
that the well-known lateness of consciousness in processing stems from
the fact that phenomenal states must integrate information (which is
necessary for one system to veto another) from neural sources hav-
ing different processing speeds (Libet 2004). These votes can be con-
strued as tendencies based on inborn or learned knowledge. This knowl-
edge has been proposed to reside in the neural circuits of the ventral
thalamocortical processing stream (Goodale and Milner 2004; Sher-
man and Guillery 2006), where information about the world is repre-
sented in a unique manner (e.g., representing the invariant aspects of the
world, involving allocentric coordinates) unlike that of the dorsal stream
(e.g., representing the variant aspects of the world, using egocentric
coordinates).
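The notion of response systems casting "votes" over a shared output can be made concrete with a minimal sketch. The Python fragment below is our illustration only, not a mechanism specified by SIT: the named systems, candidate action goals, and weights are invented, and a one-shot tally plainly simplifies what is presumably a graded, ongoing negotiation.

    from collections import defaultdict

    def select_action_goal(votes):
        """Each vote is (system, action_goal, weight); the goal with the
        largest summed weight wins control of skeletomotor output."""
        tally = defaultdict(float)
        for _system, goal, weight in votes:
            tally[goal] += weight
        return max(tally, key=tally.get)

    votes = [
        ("air-intake system",   "inhale",      0.9),  # bodily-need inclination
        ("instrumental system", "hold breath", 1.2),  # e.g., diving for an object
        ("instrumental system", "surface",     0.6),
    ]
    print(select_action_goal(votes))  # -> hold breath

The point of the tally is only that every inclination bears on one shared output; an un-integrated agent would instead let a single system act alone (e.g., reflexively inhaling while underwater).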

2.3.2 The monogenesis hypothesis


To further circumscribe the place of consciousness within the the-
ater of the nervous system, we entertain the monogenesis hypothesis that
consciousness is an atypical tool used primarily (though perhaps not
exclusively) by what has been construed as the instrumental response sys-
tem (Bindra 1974, 1978; Morsella 2005), one of many specialized systems
in the brain that constrains skeletomotor output. This is the system that
allows one to move one's fingers, arms, or legs at will, regardless of reward (Tolman 1948). It is a "cool" (versus "hot"; Metcalfe and Mis-
chel 1999) system that is concerned with physically negotiating with the
environment. Like Tolman (1948), Bindra (1974, 1978) proposed that
there is a multi-modal, high-level system devoted to physically negotiat-
ing instrumental actions with the environment. This "cool" (Metcalfe and Mischel 1999) system is responsible for the instrumental aspects of a behavioral response: navigating through a field, approaching the location
of objects, grabbing objects, pressing levers, and other kinds of instru-
mental acts. The system represents and treats all objects in the same
manner regardless of the organisms motivational state (Bindra 1974).
Regarding consciousness, the system represents, for example, what it is
like when an event occurs on the left or on the right, an object should
be held with a light precision grip or a strong power grip, something is
above or below something else, or something is drawn with curved or
rigid lines. It allows one to handle food, move it around, or even throw it
should it be used as a projectile. All of these actions would be performed
in roughly the same manner regardless of whether one is starved, sated,
thirsty, or angry, for how the instrumental response system modulates
phenomenal experience is not modulated by needs or drives (Rolls et al.
1977; Morsella 2005); instead, the system is concerned with how (and
whether) a given instrumental action should be carried out in the event
that it is prompted.
With this in mind, it is important to note that the actual motor pro-
grams used to interact with the object are unconscious. Substantial evi-
dence reveals that one is unconscious of the motor programs guiding
action (Rosenbaum 2002). (See Grossberg 1999 for an account of why
motor programs must be unconscious and no memory of them should
be made.) In addition to action slips and spoonerisms, highly flexible
and online adjustments are made unconsciously during an act such as
grasping a fruit. One is unconscious of these complicated programs (see
compelling evidence in Johnson and Haggard 2005) that calculate which
muscles should be activated at a given time but is often aware of their
proprioceptive and perceptual consequences (e.g., perceiving the hand
grasping; Gottlieb and Mazzoni 2004; Gray 2004). (See Berti and Pia
2006 for a review of motor awareness and its disorders.) Accordingly,
though the planning of action (e.g., identifying the object that one must
act towards) shares resources with conscious perceptual processing, the
online, visually guided control of ongoing action does not (Liu et al.
2008). In short, there is a plethora of findings showing that one is
unconscious of the adjustments that are made online as one reaches
for an object (Fecteau et al. 2001; Rossetti 2001; Rosenbaum 2002;
Goodale and Milner 2004; Heath et al. 2008).
The instrumental system enacts action goals (e.g., pressing a button,
closing a door), many of which are acquired from a learning history
(Bindra 1974, 1978). (An action goal is similar to Skinner's notion of an "operant" [Skinner 1953], an instrumental goal that could be realized
in multiple ways, as in the case of motor equivalence [Lashley 1942].) In
addition to operant forms of instrumental learning (Thorndike 1911;
Skinner 1953), the system is capable of vicarious and latent learning
(i.e., learning without reward or punishment; Tolman 1948). In a "cool"
manner and without invoking valence or affect, the instrumental system
can predict and mentally represent the instrumental consequences of its
action (e.g., what the world looks like when an object is dropped or a
box is opened). Thus, the system is highly predictive in nature (Frith
et al. 2000; Berthoz 2002; Llinas 2002). The system can have access to
information about both the outcomes of skeletomotor acts (e.g., know-
ing that a finger was flexed) and, through reafference, how the skeletal
muscle system is about to be maneuvered (e.g., knowing that there is a
strong tendency to move a finger). Figuratively speaking, it has its eye
on how the skeletal muscle steering wheel is being moved or is about to
be moved.
As an anticipatory system, the operating principles of the directed
actions of this system are perhaps best understood in terms of the histor-
ical notion of ideomotor processing (Greenwald 1970; Hommel et al. 2001;
Hommel 2009; Hommel and Elsner 2009). Ideomotor theory states that
the mental image of an instrumental action tends to lead to the execution
of that action (Lotze 1852; Harleß 1861; James 1890), with the motor
programming involved being unconscious (James 1890). These images
tend to be perceptual-like images of action outcomes (Hommel 2009).
Once an action outcome (e.g., flexing the finger) is selected, unconscious
motor efference enacts the action by activating the right muscles at the
right time. Phenomenologically, the goals of this system are subjectively
experienced as instrumental wants, as in what it is like to intend to press
a button, rearrange the placement of objects, move a limb in a circular
motion, or remain motionless. The inability to materialize instrumental
wants could reflect the limitations of the body/physical action or con-
flict between response systems (Morsella et al. 2009b), such as when
the incentive systems that are concerned with bodily needs curb one
against, say, inflicting tissue damage through one's instrumental actions
(Morsella 2005).
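A rough computational caricature may help fix ideas; it is our illustration only, and the outcome images and muscle labels below are invented. In an ideomotor scheme, an action is addressed by the perceptual image of its outcome, while the muscle-level detail remains hidden from the process doing the selecting.

    # Hidden motor layer: outcome image -> muscle-level program (invented labels).
    MOTOR_PROGRAMS = {
        "finger flexed":  ("flexor digitorum profundus",),
        "button pressed": ("deltoid", "triceps", "index extensor"),
    }

    def enact(outcome_image: str) -> str:
        # "Unconscious" stage: resolve the image to motor efference; the
        # muscle-level detail never surfaces to the caller.
        _efference = MOTOR_PROGRAMS[outcome_image]
        # "Conscious" stage: only the anticipated perceptual consequence returns.
        return f"experienced: {outcome_image}"

    print(enact("button pressed"))  # the caller never references muscles or timing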
As explained by ideomotor approaches (Hommel et al. 2001),


this system is unique in that it has privileged access to the skeletal
muscle system and is thus the dominant system with respect to immedi-
ate skeletomotor action. Unlike affective states, which one cannot modu-
late directly (Öhman and Mineka 2001), instrumental goals can be imple-
mented instantaneously, in the form of direct cognitive control (Morsella
et al. 2009c). This is the system that is largely responsible for what has
often been regarded as controlled processes (Chaiken and Trope 1999;
Strack and Deutsch 2004; cf. Lieberman 2007). For example, one can
snap one's finger at will, but one cannot make oneself feel scared or
joyous with the same immediacy (discussed later).
Because of its atypical relationship to skeletomotor control, and based
on the parsimonious assumption that, in (at least) mammalian evolu-
tion, something as atypical as consciousness was more likely to have
evolved only once than multiple independent times (for the logic of
this approach, see Gazzaniga et al. 2009, Chapter 15), the monogen-
esis hypothesis is that consciousness is a process primarily (though not
necessarily exclusively) associated with the instrumental response sys-
tem. This proposal is more conservative than traditional approaches that
have either conflated controlled and conscious processing or set forth that
they are aspects of the same system. Previously, controlled and conscious
processes have been conflated as one and the same a priori. Conflating
both kinds of events is unjustified (see argument in Koch and Tsuchiya
2007; Suhler and Churchland 2009). Instead, and in the spirit of this
forward-looking but conservative review, we propose that conscious and
controlled events are not one and the same, but that they are intimately
related, in a way that is best understood by examining, in particular, the
instrumental system, because of its unique relationship with conscious
processing. Put another way, we believe that the time has come to assume that the system that, to some extent, is capable of controlling the contents of consciousness (e.g., by deciding to close one's eyes or move one's arm) is the system that is most intimately associated with consciousness.
This assumption leads to the following question. How is one conscious
of things like the urge to eat, to drink water, or to affiliate, that is, to things
that reflect the inclinations of the "hot" (Metcalfe and Mischel 1999)
affective/incentive systems? From a monogenesis standpoint, one is con-
scious of these inclinations of affective/incentive systems only indirectly,
because these inclinations happen to influence the skeletal muscle steer-
ing wheel that the instrumental system is incessantly monitoring already
(cf. chronic engagement; Morsella 2005). With knowledge of instrumen-
tal operations and wants, the instrumental system can keep track of the
inclinations of affective/incentive systems, but only indirectly. Perhaps it
is best to convey this idea with the following hypothetical scenario, illus-
trating how an organism, by having consciousness associated with only
one system, can still use consciousness to monitor the inclinations of the
other, unprivileged systems.
Imagine that a giant cruise liner is steered by one conscious person
and three unconscious zombies that are incapable of communicating.
The nature of such unconscious zombies is described by Moody (1994):
"They engage in complex behaviors very similar to ours . . . but these behaviors are not accompanied by conscious experience of any sort. I shall refer to these beings as zombies" (p. 196). Let us further imagine
that, in this scenario, each driver has one hand on the helm and that the
zombies may conflict regarding the trajectory of the ship: Some would
like the ship to turn left, and others would like for it to turn right. If
the conscious driver could look at only one place on the ship to infer
where the zombies desire to go, where should the conscious driver look?
Certainly one would not find it useful to inspect the engine room or
where the hull hits the waves. It turns out that the most informative place
to look to learn about the inclinations of the zombies is the very helm
that is held by all drivers, because its movements happen to reflect the
inclinations of everyone.
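The logic of the scenario can be captured in a few lines of simulation. The sketch below is our illustration only, with an arbitrary number of drivers and arbitrary pushes: every system bears on one shared helm, so reading that single output channel, and subtracting one's own contribution, recovers the net inclination of the drivers one cannot inspect directly.

    import random

    def helm_angle(pushes):
        """The shared output is the sum of every driver's push
        (negative = port, positive = starboard)."""
        return sum(pushes)

    random.seed(1)
    for step in range(3):
        zombie_pushes = [random.uniform(-1.0, 1.0) for _ in range(3)]
        conscious_push = 0.5
        angle = helm_angle(zombie_pushes + [conscious_push])
        # Watching the helm (not the engine room) and subtracting one's own
        # push reveals the zombies' net inclination.
        inferred = angle - conscious_push
        print(f"step {step}: helm {angle:+.2f}, inferred zombie net {inferred:+.2f}")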
Analogously, because consciousness is concerned with instrumen-
tal happenings and wants, it happens to have access to the skeleto-
motor inclinations of affective/incentive systems that would otherwise
be encapsulated (LeDoux 1996; Öhman and Mineka 2001). Thus,
an organism that has its little bit of consciousness placed else-
where would be at a disadvantage with respect to constraining skele-
tal muscle output on the basis of the agendas of encapsulated actional
systems.
It is important to note one limitation of this hypothesis: Although it
illuminates how the inclinations (i.e., direction) of affective/incentive sys-
tems may influence consciousness when consciousness is associated with
only one system (the instrumental response system), it does not explain
how positive or negative subjectively experienced outcomes arise from
indirectly observing the unconscious systems. In short, a monogenesis
framework can explain how the inclinations of otherwise encapsulated
systems can be represented consciously, but it cannot explain how the
positive or negative evaluations of such systems are thus represented. (It
may be that deactivation of the drive of these systems, or approaching
their goals, is inherently positive; cf. Hull 1943.) This is one of many
gaps of knowledge in the current theoretical account, a gap that requires
further investigation.

2.3.3 Consciousness is associated with limited direct cognitive control


Last, regarding the mental events associated with consciousness, we
examine how consciousness is further circumscribed in that it is
associated with extremely limited direct cognitive control and is asso-
ciated with processes that must influence many nervous activities only
through indirect cognitive control (Morsella et al. 2009c). As mentioned,
direct cognitive control is perhaps best exemplified by ones ability to
immediately control thinking (or imagining) and the movements of a
finger, arm, or other skeletal muscle effectors. With respect to action,
the instrumental system is the only system that features direct cogni-
tive control. Interestingly, each of these kinds of processes requires the
activation of perceptual-like representations, one for constituting mental
imagery (Farah 1989) and the other for instantiating ideomotor mecha-
nisms (Hommel et al. 2001). When direct control is unavailable, indirect
forms of control can be implemented. For example, it is clear that one
may not be able to directly influence one's affective/incentive states at will. In other words, one cannot make oneself frightened, happy, angry, or sad, nor excite a desired appetitive state (e.g., being hungry), if the adequate conditions are absent. It is for this reason that people seek and even pay
for certain experiences (e.g., going to movies or comedy clubs) to put
themselves in a desired state that cannot be instantiated through an act
of will.
Although direct control cannot activate incentive or affective states
(Öhman and Mineka 2001), it is possible to indirectly stimulate these states by activating the kinds of perceptuo-semantic representations that, as "releasers" (to use an ethological term; Tinbergen 1952), can trigger
the networks responsible for these states (Morsella et al. 2009c). In this
way, method actors spend a great deal of time and effort imagining
certain events in order to put themselves into a certain state (e.g., to
make themselves sad in order to portray a sad persona). This is done to
render the acting performance more natural and convincing. To make
oneself hungry, one can imagine a tasty dish; to make oneself angry, one
can recall an event that was frustrating or unjust. This illustrates how a
system with limited cognitive control one that can directly activate only,
say, perceptual-like representations can still influence the functioning
of otherwise encapsulated processes.
In indirect cognitive control, top-down processing activates, not the
circuits responsible for hunger, but perceptual symbols (Farah 1989;
Barsalou 1999; Kosslyn et al. 2006) that then stimulate incentive/affective
systems in a manner similar to the way that the corresponding
external stimuli would. Because of indirect cognitive control, conscious
control seems more far-reaching than it actually is at any one moment in time. Goodale and Milner (2004) go further, proposing that it is through the
top-down activation of low-level retinotopic perceptual representations,
representations that are common to both the ventral and dorsal process-
ing streams (e.g., retinotopic representations in the visual system), that
the ventral system interacts with the dorsal system.

2.4 Homing in on the mental representations associated with conscious states
According to the integration consensus and SIT, consciousness integrates
(or crosstalks) information and behavioral inclinations (e.g., urges)
that were already generated and analyzed unconsciously (Shepard 1984;
Jackendoff 1990; Morsella 2005; Baumeister and Masicampo 2010).
From this point of view, consciousness is primarily not a "doer" but a "talker,"
and it only crosstalks relatively few kinds of information, as specified
earlier (Morsella and Bargh 2010a). If the primary function of conscious
states is to achieve integration amongst skeletomotor response systems
by broadcasting information, then one would expect that the nature of
representations involved in conscious processing has a high "broadcastability," that is, that representations can be received and understood
by multiple action systems in the brain. Are conscious representations
broadcastable? This appears to be the case. It has been proposed a priori
(on the basis of the requirements of isotropic information processing)
that it is the perceptual-like (object) representation that is the kind of
representation that would have the best broadcast ability in the brain
(Fodor 1983; Morsella et al. 2009c). Thus, perhaps it is no accident
that it is the perceptual-like kind of representation (e.g., visual objects
or linguistic objects such as phonemes; Fodor 1983) that happens to be
consciously available (Gray 1995). An additional benefit of having the
perceptual-like representation be the representation that is broadcasted to
multiple systems in the brain may be that phylogenetically old response
systems in the brain (e.g., allowing for a spider stimulus to trigger a
startle response; Rakison and Derringer 2008) were already evolved to
deal with this kind of representation (i.e., one reflecting external objects;
Bargh and Morsella 2008). Convergent evidence for this stems from
research elucidating why motor programs must be unconscious (Gray
1995; Grossberg 1999; Prinz 2003).
As stated in Morsella and Bargh (2010a), when examining the liaison
between action and consciousness, one notices that there is an unmistak-
able isomorphism regarding that which one is conscious of when one is
(1) observing the behaviors of others, (2) dreaming, and (3) observing
one's own actions. In every case, it is the same, perceptual-like represen-
tation that constitutes that which is consciously available (Rizzolatti et al.
2008). Speech processing provides a compelling example. Consider the
argument by Levelt (1989) that, of all the processes involved in language
production, one is conscious only of a subset of the processes, whether
when speaking aloud or only in one's head (i.e., subvocalizing). It is
the phonological representation, and not, say, the motor-related, artic-
ulatory code (Ford et al. 2005) that one is conscious of during spoken
or subvocalized speech, or even when perceiving the speech of others
(Fodor 1983; Buchsbaum and DEsposito 2008; Rizzolatti et al. 2008).
This theorizing is also consistent with models of conscious action con-
trol, in which conscious contents regarding ongoing action are primarily
of the perceptual consequences of action (Jeannerod 2006): "In perfectly simple voluntary acts there is nothing else in the mind but the kinesthetic idea . . . of what the act is to be" (James 1890, p. 771). James (1890)
proposed that, after performing an action, the conscious mind stores
the perceptual consequences of the action and uses them to voluntarily
guide the generation of motor efference, which itself is an unconscious
process, as discussed above. According to a minority (see the list of four dissenters in James 1890, p. 772), one is conscious of the efference to the muscles (what Wundt called the "feeling of innervation"; see James 1890, p. 771). This efference was believed to be responsible for action outcomes (see the review in Sheerer 1984). (Wundt later abandoned the feeling-of-innervation hypothesis; Klein 1970.) In contrast, James (1890) staunchly proclaimed, "There is no introspective evidence of the feeling of innervation" (p. 775). To examine this basic notion empirically, in one
experiment (Berger et al. 2011), participants performed simple actions
(e.g., sniffing) while introspecting the degree to which they perceived
certain body regions to be responsible for the actions. Consistent with
ideomotor theory, participants perceived regions (e.g., the nose) asso-
ciated with the perceptual consequences of actions (e.g., sniffing) to be
more responsible for the actions than regions (e.g., chest/torso) actually
generating the action.
Unlike traditional approaches to perception and action, which divorce input from output processes, contemporary ideomotor approaches propose that perceptual and action codes activate each other by sharing the same representational format (Hommel et al. 2001). In this way, these "single-code" (or "common-code") models explain how
perception leads to action and findings such as stimulus-response compatibility effects, as in the Simon task (Simon et al. 1970). In this task, subjects are faster at pressing a button on the left (versus the right) when an incidental, task-irrelevant stimulus happens to appear on the left. Contemporary ideomotor models also explain response-effect (R-E)
compatibility (Kunde 2001; Koch and Kunde 2002; Kunde 2003; Hub-
bard et al. 2011). In this case, interference stems from the automatic
activation of representations associated with the anticipated effects of an
action (e.g., the presence of an arrow pointing left after one has pressed
a button on the right will increase response interference in future trials).
Ideomotor theories have explained these findings as resulting from the
fact that perceptual and action-related representations share the same
representational format, by which perception can influence action and
action can influence perception. It is important to note that, unlike SIT
or the synthesis presented later, contemporary ideomotor approaches
remain agnostic regarding which representations in ideomotor process-
ing are associated with consciousness. It was James (1890) who proposed
that what is most intimately related to consciousness is the perceptual-like
aspect of the potential common-code linking perception and action.
This was proposed in part because motor control is unconscious and one
tends to be conscious of the perceptual consequences of action produc-
tion and not of the efferences to the muscles.
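To make the common-code idea concrete, consider the following toy sketch in Python (ours alone, not a model drawn from the ideomotor literature; the feature codes, action names, and timing parameters are all invented). A perceived event pre-activates any action that shares its codes, yielding Simon-like compatibility effects when a task-irrelevant stimulus feature overlaps with a response code:

```python
# Toy "common code" (our own illustration, not a published model):
# perceptual events and actions share one set of feature codes, so a
# perceived event pre-activates any action that shares its codes.

ACTIONS = {"press_left": {"LEFT"}, "press_right": {"RIGHT"}}

def response_time(required_action: str, stimulus_codes: set[str],
                  base_rt: float = 400.0, priming: float = 50.0) -> float:
    """Schematic RT: code overlap with the required action speeds the
    response; overlap with a competing action slows it (Simon effect)."""
    rt = base_rt
    for action, codes in ACTIONS.items():
        overlap = len(codes & stimulus_codes)
        rt += -priming * overlap if action == required_action else priming * overlap
    return rt

# Respond "press_left" to a high tone; the tone's task-irrelevant
# location is nevertheless coded and primes the spatially matching act.
print(response_time("press_left", {"HIGH_TONE", "LEFT"}))   # 350.0 (compatible)
print(response_time("press_left", {"HIGH_TONE", "RIGHT"}))  # 450.0 (incompatible)
```

Because the same codes serve both perception and action in this sketch, nothing extra is needed for perception to influence action or vice versa, which is the essence of the single-code proposal.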
With ideomotor theory in mind, we will now take a closer look at what occurs during a Stroop incongruent trial (e.g., when the word RED is presented in blue ink). Here, when the word and color are incongruous, response conflict leads to interference (Cohen et al. 1990), including systematic changes in consciousness (Morsella et al. 2009a,d). It has been proposed that, in this condition, set-related top-down activation from prefrontal cortex increases the activation of posterior brain areas (e.g., visual association cortex) that are associated with task-relevant dimensions (e.g., color; Egner and Hirsch 2005; Gazzaley et al. 2005). To
influence behavior, action sets from information in working memory or
long-term memory increase or decrease the strength of perceptuoseman-
tic information, along with, most likely, other kinds of information (e.g.,
motor priming). Consistent with ideomotor theory, during conflict it is
perceptual-like representations that are activated to guide action (Egner and Hirsch 2005).
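A minimal sketch of this amplification idea, loosely in the spirit of the parallel distributed processing account (Cohen et al. 1990) but much simplified, with invented pathway strengths and gain, is given below; the task set multiplies the strength of the task-relevant pathway so that the weaker color pathway can outvote the stronger, automatic word pathway on an incongruent trial:

```python
# Toy sketch (ours, not Cohen et al.'s actual network): each pathway
# contributes a fixed automatic strength toward its response; the task
# set multiplies the task-relevant pathway (top-down amplification).

def respond(word: str, ink: str, task: str, gain: float = 3.0) -> dict[str, float]:
    """Response activations for a Stroop trial; word reading is the
    stronger, more automatic pathway."""
    pathways = {"word": (word, 2.0), "color": (ink, 1.0)}
    activations: dict[str, float] = {}
    for pathway, (response, strength) in pathways.items():
        if pathway == task:
            strength *= gain  # set-related amplification of the relevant dimension
        activations[response] = activations.get(response, 0.0) + strength
    return activations

# Incongruent trial: the word RED in blue ink, task = name the color.
print(respond("red", "blue", task="color"))
# {'red': 2.0, 'blue': 3.0}: "blue" wins, but the residual activation of
# "red" is the response conflict experienced as interference.
```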
In conclusion, perceptual-like representations seem to be (1) the kinds
of representations one tends to be conscious of during ideomotor control,
(2) the most broadcastable kinds of representations, and (3) the kinds
of representations in dreams, episodic memory, and the observations of
the actions of others and of oneself, including internal actions such as
subvocalization. Although there has been substantial debate regarding
the nature of conscious representations (e.g., whether they are analog-
ical or propositional; Markman 1999), few would dispute the
isomorphism among the conscious representations experienced while
acting (e.g., saying "hello"), dreaming (e.g., saying "hello" in the dream world), or observing the action of another (e.g., hearing "hello"). This is
consistent with the Sensorium Hypothesis (Müller 1843; James 1890; Gray
2004; Morsella and Bargh 2010a) that action/motor processes are largely
unconscious (Grossberg 1999; Goodale and Milner 2004; Gray 2004),
and the contents of consciousness are influenced primarily by perceptual-
based (and not action-based) events and processes (e.g., priming by per-
ceptual representations). (See brain stimulation evidence in Desmurget
et al. 2009.) Accordingly, it has been proposed that, in terms of stages
of processing, that which characterizes conscious content is the notion
of perceptual afference (information arising from the world that affects
sensory-perceptual systems) or perceptual re-afference (information aris-
ing from corollary discharges or efference copies of our own actions;
cf. Christensen et al. 2007; Obhi et al. 2009), both of which are cases of
afferent processing. Sherrington (1906) aptly referred to these two similar kinds of information as exafference (when the source of information
stems from the external world) and reafference (when the source is from
our own actions). As mentioned earlier, it seems that we do not have
direct, conscious access to motor programs or other kinds of efference
generators (Grossberg 1999; Rosenbaum 2002; Morsella and Bargh
2010a), including those for language (Levelt 1989), emotional systems
(e.g., the amygdala; Anderson and Phelps 2002; Öhman et al. 2007), or
executive control (Crick 1995; Suhler and Churchland 2009). It is for
this reason that, when speaking, one often does not know exactly which
words one will utter next until the words are uttered or subvocalized
following word retrieval (Levelt 1989; Slevc and Ferreira 2006).
Importantly, these conscious contents (e.g., urges and perceptual rep-
resentations) are similar to (or perhaps one and the same with) the
contents that occupy the buffers in working memory, a large-scale
mechanism that is intimately related to both consciousness and action
production (Fuster 2003; Baddeley 2007). Working memory is one of
the main topics of our next section.

2.5 A new synthesis: The buffer of the perception-and-action interface (BPAI)
To synthesize all the aforementioned conclusions, we present a simplified model that underscores the otherwise implicit links among areas of study that have yet to be integrated: consciousness, ideomotor processing, and working memory. (As mentioned earlier, contemporary ideomotor
accounts are agnostic regarding which aspects of perception-and-action
processing are conscious and unconscious.) To make progress in this way,
and because so little about consciousness is certain at this stage of under-
standing, we believe that one should focus on the actions and states of a
hypothetical, simplified human, which performs basic actions such as
eating, locomoting, and sleeping, and is devoid of higher-level phenom-
ena such as music appreciation, nostalgia, humor, and existential crises.
Second, an overarching goal for this synthesis is to begin to describe
consciousness in terms of its component functions/attributes and, ulti-
mately, as something other than just being conscious. In this way, the
functional role of this atypical tool within the nervous system can begin
to be unraveled.
Our first conclusion is that consciousness appears to be a highly cir-
cumscribed physical state. Through mechanisms that remain mysterious,
it is capable of instantiating a form of internal communication in the
brain, a form of crosstalk that allows for multiple systems to influence
skeletomotor action (but not smooth muscle action or cardiac muscle
action) simultaneously. To date, the only conjectured properties of con-
scious representations are that they tend to be perceptual-like (Müller
1843; James 1890; Gray 1995; Grossberg 1999; Goodale and Milner
2004; Morsella and Bargh 2010a) and broadcastable, that is, dissemi-
nated to and understood by many systems (Fodor 1983). As widespread
as the influence of conscious processing may be (it integrates various brain regions and influences processing through both direct and indirect cognitive control), not all processes in the brain are associated
with consciousness, and many brain regions/circuits are capable of car-
rying out their functions without it (Morsella and Bargh 2011). Rather,
it appears that consciousness is a part of a larger system, the Skeletal
Muscle System (belonging to the Somatic Nervous System but not to the
Autonomic Nervous System), which is concerned with the adaptive use of
skeletal muscle.
Within the skeletal muscle output system, consciousness is unneces-
sary for various kinds of integrations (e.g., intersensory conflicts) and
functions within a perception-to-action buffer that is most intimately
related to (but perhaps not exclusively related to) the instrumental response
system, which indirectly guides unconscious motor programming through
the activation of perceptual-like representations of action effects. Accord-
ing to contemporary ideomotor accounts, codes for perception and action
may share the same representational format (or be one and the same
token); yet, consciousness seems to be more intimately related to the
perceptual end of processing (Morsella and Bargh 2010a), as discussed
earlier. The token conscious representations used in the guidance of
action, in working memory, and when perceiving the world (and ones
inclinations toward it) are isomorphic to each other. Perhaps they are
even one and the same. When delaying uttering the word "L'Chayim" (the action goal) until the appropriate cue is experienced (a toast is made), the knowledge guiding the realization of the action goal (i.e., to utter "L'Chayim") could have stemmed from (1) hearing another person
say the word, (2) imagining saying the word, or (3) having a memory
of what the word sounds like. In this way, the action-goal representa-
tions that influence action are provided either by the external world, as
in the case of interference during the Simon task, or by memory sys-
tems that historically have been part of the ventral processing stream
(Milner and Goodale 1995), a system concerned with adaptive action
goal selection (Morsella and Bargh 2010a) rather than motor control,
which has been associated with the dorsal pathway (Goodale and Milner
2004).
Our review leads one to conclude that voluntary action production (for an explanation regarding why skeletal muscles have been regarded as "voluntary" muscles, see earlier) is usually guided by the organism's ability to foreground one action goal representation over
another (Curtis and D'Esposito 2009; Johnson and Johnson 2009). Such "refreshing" of a representation (Johnson and Johnson 2009) keeps a representation in the foreground of, say, working memory, and is intimately related to consciousness: the representation that is intentionally refreshed occupies the conscious field. Critical to this foregrounding pro-
cess is attention, which is a limited resource (Cowan 1988). Interestingly,
it was James (1890) who concluded that, to guide action (which is medi-
ated in large part unconsciously), all the conscious will can do is attend
to one representation over another. Thus, the will usually resides in
a buffer that is concerned with skeletal muscle action and is limited to
selecting (the modal process), vetoing (Libet 2004), or manipulating
(e.g., in the case of mental rotation; Shepard and Metzler 1971) these
perceptual-like representations.
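As a toy rendering of this foregrounding process (ours; the decay, boost, and threshold parameters are arbitrary), one can let all representations decay on every step while limited attention re-boosts exactly one of them, which consequently stays above the threshold for occupying the conscious field while its competitors fade:

```python
# Toy decay-and-refresh dynamics (ours; parameters arbitrary): limited
# attention re-boosts one representation per step, so only that item
# stays above the threshold for occupying the conscious field.

DECAY, BOOST, THRESHOLD = 0.7, 0.5, 0.6

def step(activations: dict[str, float], attended: str) -> dict[str, float]:
    """Decay every representation, then refresh only the attended one."""
    return {rep: act * DECAY + (BOOST if rep == attended else 0.0)
            for rep, act in activations.items()}

acts = {"utter L'Chayim": 1.0, "scratch nose": 1.0}
for _ in range(5):
    acts = step(acts, attended="utter L'Chayim")

foreground = [rep for rep, act in acts.items() if act > THRESHOLD]
print(foreground)  # ["utter L'Chayim"]: the refreshed goal alone remains,
                   # hovering near its fixed point BOOST / (1 - DECAY) = 1.67
```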
Figure 2.1 illustrates the basic components of the BPAI, in its modal
form of processing. In the figure, the phonological representation of the
word "cheers" is held in mind consciously, activated above a conscious
threshold (i.e., supraliminally) by some external stimulus or sustained
through refreshing and attentional processing in working memory. In this
case, the conscious representation can be construed as a memory of the
perceptual consequences of the action. The representation is flanked
by unconscious representations that, because they are unconscious, are
incapable of being broadcast to the same extent as the conscious repre-
sentation for "cheers". Below the conscious representation is a schematic
of the conscious field through which the representation is broadcasted.
Fig. 2.1 Buffer of the perception-and-action interface (BPAI). The phonological representation of the word "cheers" is held in mind consciously, activated above a conscious threshold (i.e., supraliminally) by some external stimulus or sustained through refreshing and attentional processing in working memory. In this case, the conscious representation can be construed as a memory of the perceptual consequences of the action. The representation is flanked by unconscious representations (in the schematic, "house", "toast", "salud", and "wind"), which, because they are unconscious, cannot be broadcasted to the same extent as the conscious representation for "cheers". Below the conscious representation is a schematic of the conscious field through which the representation is broadcast. (Conscious integration of this kind is unnecessary for intersensory binding or integrational processes involving smooth muscle.) The detectors of response systems receive and process the broadcasted information. In a voting-like process, these systems influence whether the action should be performed. In a dynamic and ever-evolving manner, the output of these systems can in turn influence the contents of the conscious field. If the representation is selected for production, the motor programs for executing the action are implemented unconsciously.

The detectors of response systems receive and process the broadcasted information. In a voting-like process, these systems influence whether the action should be performed. In a dynamic and ever-evolving man-
ner, the output of these systems can in turn influence the contents of the
conscious field (Baumeister and Masicampo 2010; Morsella and Bargh
2010a). If the representation is selected for production, the motor pro-
grams for executing the action are implemented unconsciously.
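The cycle just described can also be summarized in schematic code. The sketch below is our own rendering of Figure 2.1, with invented response systems, vote values, and selection threshold; it captures only the modal flow of broadcast, voting, and unconscious motor programming:

```python
# Schematic rendering (ours) of the modal BPAI cycle in Fig. 2.1; the
# response systems, vote values, and threshold are all invented.
from typing import Callable

def instrumental_system(rep: str) -> float:
    return 1.0 if rep == "cheers" else -0.5   # the toast cue is present

def social_norm_system(rep: str) -> float:
    return 0.8                                # toasting is appropriate here

def avoidance_system(rep: str) -> float:
    return -0.2                               # a mild urge to stay quiet

RESPONSE_SYSTEMS: list[Callable[[str], float]] = [
    instrumental_system, social_norm_system, avoidance_system]

def execute_motor_program(rep: str) -> None:
    """Stand-in for motor programming, which runs unconsciously."""
    print(f"(unconscious motor programming) uttering {rep!r}")

def bpai_cycle(conscious_rep: str, threshold: float = 0.5) -> None:
    """Broadcast the conscious representation, collect votes, and hand a
    selected representation over to unconscious motor programming."""
    votes = [system(conscious_rep) for system in RESPONSE_SYSTEMS]  # broadcast
    if sum(votes) > threshold:                                      # selection
        execute_motor_program(conscious_rep)
    # otherwise the representation merely remains in the conscious field

bpai_cycle("cheers")  # votes sum to 1.6 > 0.5, so the word is produced
```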
It is important to point out that our simplified model is based on the
processing dynamics that usually occur (i.e., the modal form of pro-
cessing) when someone performs a simple action; we do not propose
that processing must always occur in this way or that other important
cognitive processes cannot influence this form of processing. The BPAI
schematically captures much of what we reviewed about consciousness,
ideomotor processing, working memory, and action production. The
buffer includes the classic working memory systems of the phonologi-
cal loop (including conscious echoic representations), the visuospatial
sketchpad (including conscious iconic representations), and the episodic
buffer (Baddeley 2007).

2.6 Conclusion
With a "brutally reductionistic" approach (Morsella et al. 2010), in our
literature review we attempted to home in on both the unique func-
tions and nervous events/organizations (e.g., circuits) associated with
conscious states (e.g., Morsella et al. 2010). Specifically, we sought to
(1) home in on the neuroanatomical loci constituting conscious states
(Section 2.2), (2) home in on the basic component mental processes
associated with conscious states (Section 2.3), and (3) home in on the
mental representations associated with conscious states (Section 2.4).
Each section attempted to home in on the correlates of consciousness at
a more micro level than the last section. The literature reviewed in the
three sections reveals that conscious states are restricted to only a subset
of nervous and mental processes. Our BPAI model illustrates the modal
findings and conclusions in schematic form.

REFERENCES
Anderson A. K. and Phelps E. A. (2002). Is the human amygdala critical for the subjective experience of emotion? Evidence of intact dispositional affect in patients with amygdala lesions. J Cognitive Neurosci 14:709–720.
Arendes L. (1994). Superior colliculus activity related to attention and to connotative stimulus meaning. Cognitive Brain Res 2:65–69.
Baars B. J. (1998). The function of consciousness: Reply. Trends Neurosci 21:201.
Baars B. J. (2002). The conscious access hypothesis: Origins and recent evidence. Trends Cogn Sci 6:47–52.
Baars B. J. (2005). Global workspace theory of consciousness: Toward a cognitive neuroscience of human experience. Prog Brain Res 150:45–53.
Baddeley A. D. (2007). Working Memory, Thought and Action. Oxford University Press.
Bargh J. A. and Morsella E. (2008). The unconscious mind. Perspect Psychol Sci 3:73–79.
Barr M. L. and Kiernan J. A. (1993). The Human Nervous System: An Anatomical Viewpoint, 6th Edn. Philadelphia, PA: Lippincott.
Barsalou L. W. (1999). Perceptual symbol systems. Behav Brain Sci 22:577–609.
Baumeister R. F. and Masicampo E. J. (2010). Conscious thought is for facilitating social and cultural interactions: How simulations serve the animal-culture interface. Psychol Rev 117:945–971.
Bellebaum C., Koch B., Schwarz M., and Daum I. (2008). Focal basal ganglia lesions are associated with impairments in reward-based reversal learning. Brain 131:829–841.
Berger C. C., Bargh J. A., and Morsella E. (2011). The what of doing: Introspection-based evidence for James's ideomotor principle. In Durante A. and Mammoliti C. (eds.) The Psychology of Self-Control. New York: Nova Publishers, pp. 145–149.
Berthoz A. (2002). The Brain's Sense of Movement. Cambridge, MA: Harvard University Press.
Berti A. and Pia L. (2006). Understanding motor awareness through normal and pathological behavior. Curr Dir Psychol Sci 15:245–250.
Bindra D. (1974). A motivational view of learning, performance, and behavior modification. Psychol Rev 81:199–213.
Bindra D. (1978). How adaptive behavior is produced: A perceptual-motivational alternative to response-reinforcement. Behav Brain Sci 1:41–91.
Block N. (1995). On a confusion about a function of consciousness. Behav Brain Sci 18:227–287.
Boly M., Garrido M. I., Gosseries O., Bruno M. A., Boveroux P., Schnakers C., et al. (2011). Preserved feedforward but impaired top-down processes in the vegetative state. Science 332:858–862.
Buchsbaum B. R. and D'Esposito M. (2008). The search for the phonological store: From loop to convolution. J Cogn Neurosci 20:762–778.
Buck L. B. (2000). Smell and taste: The chemical senses. In Kandel E. R., Schwartz J. H., and Jessell T. M. (eds.) Principles of Neural Science, 4th Edn. New York: McGraw-Hill, pp. 625–647.
Buzsáki G. (2006). Rhythms of the Brain. New York: Oxford University Press.
Chaiken S. and Trope Y. (1999). Dual-Process Models in Social Psychology. New York: Guilford.
Chalmers D. (1996). The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press.
Christensen M. S., Lundbye-Jensen J., Geertsen S. S., Petersen T. H., Paulson O. B., and Nielsen J. B. (2007). Premotor cortex modulates somatosensory cortex during voluntary movements without proprioceptive feedback. Nature Neurosci 10:417–419.
Cicerone K. D. and Tanenbaum L. N. (1997). Disturbance of social cognition after traumatic orbitofrontal brain injury. Arch Clin Neuropsych 12:173–188.
Clark A. (2002). Is seeing all it seems? Action, reason and the grand illusion. J Consciousness Stud 9:181–202.
Coenen A. M. L. (1998). Neuronal phenomena associated with vigilance and consciousness: From cellular mechanisms to electroencephalographic patterns. Conscious Cogn 7:42–53.
Cohen J. D., Dunbar K., and McClelland J. L. (1990). On the control of automatic processes: A parallel distributed processing account of the Stroop effect. Psychol Rev 97:332–361.
Cooney J. W. and Gazzaniga M. S. (2003). Neurological disorders and the structure of human consciousness. Trends Cogn Sci 7:161–166.
Cowan N. (1988). Evolving conceptions of memory storage, selective attention, and their mutual constraints within the human information-processing system. Psychol Bull 104:163–191.
Crick F. (1995). The Astonishing Hypothesis: The Scientific Search for the Soul. New York: Touchstone.
Crick F. and Koch C. (2003). A framework for consciousness. Nat Neurosci 6:119–126.
Curtis C. E. and D'Esposito M. (2009). The inhibition of unwanted actions. In Morsella E., Bargh J. A., and Gollwitzer P. M. (eds.) Oxford Handbook of Human Action. New York: Oxford University Press, pp. 72–97.
Damasio A. R. (1989). Time-locked multiregional retroactivation: A systems-level proposal for the neural substrates of recall and recognition. Cognition 33:25–62.
Damasio A. R. (2010). Self Comes to Mind: Constructing the Conscious Brain. New York: Pantheon.
Dehaene S. and Naccache L. (2001). Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition 79:1–37.
Del Cul A., Baillet S., and Dehaene S. (2007). Brain dynamics underlying the nonlinear threshold for access to consciousness. PLoS Biol 5:e260.
Desmurget M., Reilly K. T., Richard N., Szathmari A., Mottolese C., and Sirigu A. (2009). Movement intention after parietal cortex stimulation in humans. Science 324(5928):811–813.
Desmurget M. and Sirigu A. (2010). A parietal-premotor network for movement intention and motor awareness. Trends Cogn Sci 13:411–419.
Di Lollo V., Enns J. T., and Rensink R. A. (2000). Competition for consciousness among visual events: The psychophysics of reentrant visual pathways. J Exp Psychol Gen 129:481–507.
Doesburg S. M., Green J. L., McDonald J. J., and Ward L. M. (2009). Rhythms of consciousness: Binocular rivalry reveals large-scale oscillatory network dynamics mediating visual perception. PLoS ONE 4:1–14.
Duprez T. P., Serieh B. A., and Raftopoulos C. (2005). Absence of memory dysfunction after bilateral mammillary body and mammillothalamic tract electrode implantation: Preliminary experience in three patients. Am J Neuroradiol 26:195–198.
Edelman G. M. and Tononi G. (2000). A Universe of Consciousness: How Matter Becomes Imagination. New York: Basic Books.
Eichenbaum H., Shedlack K. J., and Eckmann K. W. (1980). Thalamocortical mechanisms in odor-guided behavior. Brain Behav Evolut 17:255–275.
Eidels A., Townsend J. T., and Algom D. (2010). Comparing perception of Stroop stimuli in focused versus divided attention paradigms: Evidence for dramatic processing differences. Cognition 114:129–150.
Egner T. and Hirsch J. (2005). Cognitive control mechanisms resolve conflict through cortical amplification of task-relevant information. Nat Neurosci 8:1784–1790.
Eriksen B. A. and Eriksen C. W. (1974). Effects of noise letters upon the identification of a target letter in a nonsearch task. Percept Psychophys 16:143–149.
Farah M. J. (1989). The neural basis of mental imagery. Trends Neurosci 12(10):395–399.
Fecteau J. H., Chua R., Franks I., and Enns J. T. (2001). Visual awareness and the online modification of action. Can J Exp Psychol 55:104–110.
Fehrer E. and Biederman I. (1962). A comparison of reaction time and verbal report in the detection of masked stimuli. J Exp Psychol 64:126–130.
Fehrer E. and Raab D. (1962). Reaction time to stimuli masked by metacontrast. J Exp Psychol 63:143–147.
Fodor J. A. (1983). Modularity of Mind: An Essay on Faculty Psychology. Cambridge, MA: MIT Press.
Ford J. M., Gray M., Faustman W. O., Heinks T. H., and Mathalon D. H. (2005). Reduced gamma-band coherence to distorted feedback during speech when what you say is not what you hear. Int J Psychophysiol 57:143–150.
Freeman W. J. (1991). The physiology of perception. Sci Am 264:78–85.
Fried I., Katz A., McCarthy G., Sass K. J., Williamson P., et al. (1991). Functional organization of human supplementary motor cortex studied by electrical stimulation. J Neurosci 11:3656–3666.
Fries P. (2005). A mechanism for cognitive dynamics: Neuronal communication through neuronal coherence. Trends Cogn Sci 9:474–480.
Frith C. D., Blakemore S. J., and Wolpert D. M. (2000). Abnormalities in the awareness and control of action. Philos T R Soc B 355(1401):1771–1788.
Fuster J. M. (2003). Cortex and Mind: Unifying Cognition. New York: Oxford University Press.
Gazzaley A., Cooney J. W., Rissman J., and D'Esposito M. (2005). Top-down suppression deficit underlies working memory impairment in normal aging. Nat Neurosci 8:1298–1300.
Gazzaniga M. S., Ivry R. B., and Mangun G. R. (2009). Cognitive Neuroscience: The Biology of the Mind, 3rd Edn. New York: Norton.
Goodale M. and Milner D. (2004). Sight Unseen: An Exploration of Conscious and Unconscious Vision. New York: Oxford University Press.
Gottlieb J. and Mazzoni P. (2004). Neuroscience: Action, illusion, and perception. Science 303:317–318.
Gray J. A. (1995). The contents of consciousness: A neuropsychological conjecture. Behav Brain Sci 18:659–676.
Gray J. A. (2004). Consciousness: Creeping up on the Hard Problem. New York: Oxford University Press.
Greenwald A. G. (1970). Sensory feedback mechanisms in performance control: With special reference to the ideomotor mechanism. Psychol Rev 77:73–99.
Grossberg S. (1999). The link between brain learning, attention, and consciousness. Conscious Cogn 8:1–44.
Haberly L. B. (1998). Olfactory cortex. In Shepherd G. M. (ed.) The Synaptic Organization of the Brain, 4th Edn. New York: Oxford University Press, pp. 377–416.
Haggard P., Aschersleben G., Gehrke J., and Prinz W. (2002). Action, binding and awareness. In Prinz W. and Hommel B. (eds.) Common Mechanisms in Perception and Action: Attention and Performance, Vol. 19. Oxford University Press, pp. 266–285.
Hallett M. (2007). Volitional control of movement: The physiology of free will. Clin Neurophysiol 117:1179–1192.
Harleß E. (1861). Der Apparat des Willens [The apparatus of the will]. Zeitschrift für Philosophie und philosophische Kritik 38:499–507.
Heath M., Neely K. A., Yakimishyn J., and Binsted G. (2008). Visuomotor memory is independent of conscious awareness of target features. Exp Brain Res 188:517–527.
Heilman K. M., Watson R. T., and Valenstein E. (2003). Neglect: Clinical and anatomic issues. In Feinberg T. E. and Farah M. J. (eds.) Behavioral Neurology and Neuropsychology, 2nd Edn. New York: McGraw-Hill, pp. 303–311.
Henkin R. I., Levy L. M., and Lin C. S. (2000). Taste and smell phantoms revealed by brain functional MRI (fMRI). Neuroradiology 24:106–123.
Herz R. S. (2003). The effect of verbal context on olfactory perception. J Exp Psychol: Gen 132:595–606.
Ho V. B., Fitz C. R., Chuang S. H., and Geyer C. A. (1993). Bilateral basal ganglia lesions: Pediatric differential considerations. RadioGraphics 13:269–292.
Hommel B. (2009). Action control according to TEC (theory of event coding). Psychol Res 73:512–526.
Hommel B. and Elsner B. (2009). Acquisition, representation, and control of action. In Morsella E., Bargh J. A., and Gollwitzer P. M. (eds.) Oxford Handbook of Human Action. New York: Oxford University Press, pp. 371–398.
Hommel B., Müsseler J., Aschersleben G., and Prinz W. (2001). The theory of event coding: A framework for perception and action planning. Behav Brain Sci 24:849–937.
Hubbard J., Gazzaley A., and Morsella E. (2011). Traditional response interference effects from anticipated action outcomes: A response-effect compatibility paradigm. Acta Psychol 138:106–110.
Hull C. L. (1943). Principles of Behavior. New York: Appleton-Century.
Hummel F. and Gerloff C. (2005). Larger interregional synchrony is associated with greater behavioral success in a complex sensory integration task in humans. Cereb Cortex 15:670–678.
Jackendoff R. S. (1990). Consciousness and the Computational Mind. Cambridge, MA: MIT Press.
James W. (1890). Principles of Psychology. New York: Holt.
Jeannerod M. (2006). Motor Cognition: What Action Tells the Self. New York: Oxford University Press.
Johnson H. and Haggard P. (2005). Motor awareness without perceptual awareness. Neuropsychologia 43:227–237.
Johnson M. R. and Johnson M. K. (2009). Toward characterizing the neural correlates of component processes of cognition. In Rösler F., Ranganath C., Röder B., and Kluwe R. H. (eds.) Neuroimaging of Human Memory: Linking Cognitive Processes to Neural Systems. New York: Oxford University Press, pp. 169–194.
Kay L. M. and Sherman S. M. (2007). An argument for an olfactory thalamus. Trends Neurosci 30:47–53.
Keller A. (2011). Attention and olfactory consciousness. Front Psychology 2:380:1–11.
Klein D. B. (1970). A History of Scientific Psychology: Its Origins and Philosophical Backgrounds. New York: Basic Books.
Koch C. (2004). The Quest for Consciousness: A Neurobiological Approach. Englewood, CO: Roberts.
Koch C. and Greenfield S. A. (2007). How does consciousness happen? Sci Am 297:76–83.
Koch C. and Tsuchiya N. (2007). Attention and consciousness: Two distinct brain processes. Trends Cogn Sci 11:16–22.
Koch I. and Kunde W. (2002). Verbal response-effect compatibility. Mem Cognition 30:1297–1303.
Kosslyn S. M., Thompson W. L., and Ganis G. (2006). The Case for Mental Imagery. New York: Oxford University Press.
Kunde W. (2001). Response-effect compatibility in manual choice reaction tasks. J Exp Psychol Human 27:387–394.
Kunde W. (2003). Temporal response-effect compatibility. Psychol Res 67:153–159.
Lashley K. S. (1942). The problem of cerebral organization in vision. In Klüver H. (ed.) Visual Mechanisms. Lancaster, PA: Cattell, pp. 301–322.
LeDoux J. E. (1996). The Emotional Brain: The Mysterious Underpinnings of Emotional Life. New York: Simon & Schuster.
Leopold D. (2002). Distortion of olfactory perception: Diagnosis and treatment. Chem Senses 27:611–615.
Levelt W. J. M. (1989). Speaking: From Intention to Articulation. Cambridge, MA: MIT Press.
Lewin K. (1935). A Dynamic Theory of Personality. New York: McGraw-Hill.
Li W., Lopez L., Osher J., Howard J. D., Parrish T. B., and Gottfried J. A. (2010). Right orbitofrontal cortex mediates conscious olfactory perception. Psychol Sci 21:1454–1463.
Libet B. (2004). Mind Time: The Temporal Factor in Consciousness. Cambridge, MA: Harvard University Press.
Lieberman M. D. (2007). The X- and C-systems: The neural basis of automatic and controlled social cognition. In Harmon-Jones E. and Winkielman P. (eds.) Fundamentals of Social Neuroscience. New York: Guilford, pp. 290–315.
Liu G., Chua R., and Enns J. T. (2008). Attention for perception and action: Task interference for action planning, but not for online control. Exp Brain Res 185:709–717.
Llinás R. R. (2002). I of the Vortex: From Neurons to Self. Cambridge, MA: MIT Press.
Llinás R. R. and Ribary U. (2001). Consciousness and the brain: The thalamocortical dialogue in health and disease. Ann NY Acad Sci 929:166–175.
Llinás R., Ribary U., Contreras D., and Pedroarena C. (1998). The neuronal basis for consciousness. Philos T Roy Soc B 353:1841–1849.
Logothetis N. K. and Schall J. D. (1989). Neuronal correlates of subjective visual perception. Science 245:761–762.
Lotze R. H. (1852). Medizinische Psychologie oder Physiologie der Seele. Leipzig: Weidmannsche Buchhandlung.
MacLeod C. M. and MacDonald P. A. (2000). Interdimensional interference in the Stroop effect: Uncovering the cognitive and neural anatomy of attention. Trends Cogn Sci 4:383–391.
Mainland J. D. and Sobel N. (2006). The sniff is part of the olfactory percept. Chem Senses 31:181–196.
Markert J. M., Hartshorn D. O., and Farhat S. M. (1993). Paroxysmal bilateral dysosmia treated by resection of the olfactory bulbs. Surg Neurol 40:160–163.
Markman A. B. (1999). Knowledge Representation. Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.
Markowitsch H. J. (1982). Thalamic mediodorsal nucleus and memory: A critical evaluation of studies in animals and man. Neurosci Biobehav R 6:351–380.
McGurk H. and MacDonald J. (1976). Hearing lips and seeing voices. Nature 264:746–748.
Merker B. (2007). Consciousness without a cerebral cortex: A challenge for neuroscience and medicine. Behav Brain Sci 30:63–134.
Metcalfe J. and Mischel W. (1999). A hot/cool-system analysis of delay of gratification: Dynamics of willpower. Psychol Rev 106:3–19.
Miller B. L. (2007). The human frontal lobes: An introduction. In Miller B. L. and Cummings J. L. (eds.) The Human Frontal Lobes: Functions and Disorders, 2nd Edn. New York: Guilford, pp. 3–11.
Miller N. E. (1959). Liberalization of basic S-R concepts: Extensions to conflict behavior, motivation, and social learning. In Koch S. (ed.) Psychology: A Study of Science, Vol. 2. New York: McGraw-Hill, pp. 196–292.
Milner B. (1966). Amnesia following operation on the temporal lobes. In Whitty C. W. M. and Zangwill O. L. (eds.) Amnesia. London: Butterworths, pp. 109–133.
Milner A. D. and Goodale M. (1995). The Visual Brain in Action. New York: Oxford University Press.
Mitchell A. S., Baxter M. G., and Gaffan D. (2007). Dissociable performance on scene learning and strategy implementation after lesions to magnocellular mediodorsal thalamic nucleus. J Neurosci 27:11888–11895.
Mizobuchi M., Ito N., Tanaka C., Sako K., Sumi Y., and Sasaki T. (1999). Unidirectional olfactory hallucination associated with ipsilateral unruptured intracranial aneurysm. Epilepsia 40:516–519.
Molapour T., Berger C. C., and Morsella E. (2011). Did I read or did I name? Process blindness from congruent processing outputs. Conscious Cogn 20:1776–1780.
Moody T. C. (1994). Conversations with zombies. J Consciousness Stud 1:196–200.
Morsella E. (2005). The function of phenomenal states: Supramodular interaction theory. Psychol Rev 112:1000–1021.
Morsella E. and Bargh J. A. (2010a). What is an output? Psychol Inq 21:354–370.
Morsella E. and Bargh J. A. (2010b). Unconscious mind. In Weiner I. B. and Craighead W. E. (eds.) The Corsini Encyclopedia of Psychology and Behavioral Science, 4th Edn, Vol. 4. Hoboken: John Wiley & Sons, Inc., pp. 1817–1819.
Morsella E. and Bargh J. A. (2011). Unconscious action tendencies: Sources of un-integrated action. In Cacioppo J. T. and Decety J. (eds.) The Handbook of Social Neuroscience. New York: Oxford University Press, pp. 335–347.
Morsella E., Berger C. C., and Krieger S. C. (2011). Cognitive and neural components of the phenomenology of agency. Neurocase 17:209–230.
Morsella E., Gray J. R., Krieger S. C., and Bargh J. A. (2009a). The essence of conscious conflict: Subjective effects of sustaining incompatible intentions. Emotion 9:717–728.
Morsella E., Krieger S. C., and Bargh J. A. (2009b). The function of consciousness: Why skeletal muscles are voluntary muscles. In Morsella E., Bargh J. A., and Gollwitzer P. M. (eds.) Oxford Handbook of Human Action. Oxford University Press, pp. 625–634.
Morsella E., Krieger S. C., and Bargh J. A. (2010). Minimal neuroanatomy for a conscious brain: Homing in on the networks constituting consciousness. Neural Networks 23:14–15.
Morsella E., Lanska M., Berger C. C., and Gazzaley A. (2009c). Indirect cognitive control through top-down activation of perceptual symbols. Eur J Soc Psychol 39:1173–1177.
Morsella E., Wilson L. E., Berger C. C., Honhongva M., Gazzaley A., and Bargh J. A. (2009d). Subjective aspects of cognitive control at different stages of processing. Atten Percept Psychophys 71:1807–1824.
Müller J. (1843). Elements of Physiology. Philadelphia, PA: Lea and Blanchard.
Muzur A., Pace-Schott E. F., and Hobson J. A. (2002). The prefrontal cortex in sleep. Trends Cogn Sci 6:475–481.
Nagel T. (1974). What is it like to be a bat? Philos Rev 83:435–450.
Nath A. R. and Beauchamp M. S. (2012). A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion. NeuroImage 59:781–787.
Obhi S., Planetta P., and Scantlebury J. (2009). On the signals underlying conscious awareness of action. Cognition 110:65–73.
Öhman A., Carlsson K., Lundqvist D., and Ingvar M. (2007). On the unconscious subcortical origin of human fear. Physiol Behav 92:180–185.
Öhman A. and Mineka S. (2001). Fears, phobias, and preparedness: Toward an evolved module of fear and fear learning. Psychol Rev 108:483–522.
Ojemann G. (1986). Brain mechanisms for consciousness and conscious experience. Can Psychol 27:158–168.
Ortinski P. and Meador K. J. (2004). Neuronal mechanisms of conscious awareness. Arch Neurol-Chicago 61:1017–1020.
O'Shea R. P. and Corballis P. M. (2005). Visual grouping on binocular rivalry in a split-brain observer. Vision Res 45:247–261.
Penfield W. and Jasper H. H. (1954). Epilepsy and the Functional Anatomy of the Human Brain. New York: Little, Brown.
Plailly J., Howard J. D., Gitelman D. R., and Gottfried J. A. (2008). Attention to odor modulates thalamocortical connectivity in the human brain. J Neurosci 28:5257–5267.
Prinz W. (2003). How do we know about our own actions? In Maasen S., Prinz W., and Roth G. (eds.) Voluntary Action: Brains, Minds, and Sociality. London: Oxford University Press, pp. 21–33.
Rakison D. H. and Derringer J. L. (2008). Do infants possess an evolved spider-detection mechanism? Cognition 107:381–393.
Rizzolatti G., Sinigaglia C., and Anderson F. (2008). Mirrors in the Brain: How Our Minds Share Actions, Emotions, and Experience. New York: Oxford University Press.
Roach J. (2005, June 30). Journal Ranks Top 25 Unanswered Science Questions. National Geographic News. URL: news.nationalgeographic.com (accessed March 6, 2013).
Roelofs A. (2010). Attention and facilitation: Converging information versus inadvertent reading in Stroop task performance. J Exp Psychol Learn 36:411–422.
Rolls E. T., Judge S. J., and Sanghera M. (1977). Activity of neurons in the inferotemporal cortex of the alert monkey. Brain Res 130:229–238.
Rosenbaum D. A. (2002). Motor control. In Pashler H. (series ed.) and Yantis S. (vol. ed.) Stevens' Handbook of Experimental Psychology: Vol. 1. Sensation and Perception, 3rd Edn. New York: John Wiley & Sons, Inc., pp. 315–339.
Rossetti Y. (2001). Implicit perception in action: Short-lived motor representation of space. In Grossenbacher P. G. (ed.) Finding Consciousness in the Brain: A Neurocognitive Approach. Amsterdam: John Benjamins Publishing, pp. 133–181.
Schmahmann J. D. (1998). Dysmetria of thought: Clinical consequences of cerebellar dysfunction on cognition and affect. Trends Cogn Sci 2:362–371.
Sela L., Sacher Y., Serfaty C., Yeshurun Y., Soroker N., and Sobel N. (2009). Spared and impaired olfactory abilities after thalamic lesions. J Neurosci 29(39):12059–12069.
Sergent C. and Dehaene S. (2004). Is consciousness a gradual phenomenon? Evidence for an all-or-none bifurcation during the attentional blink. Psychol Sci 15:720–728.
Sheerer E. (1984). Motor theories of cognitive structure: A historical review. In Prinz W. and Sanders A. F. (eds.) Cognition and Motor Processes. Berlin: Springer-Verlag.
Shepard R. N. (1984). Ecological constraints on internal representation: Resonant kinematics of perceiving, imagining, thinking and dreaming. Psychol Rev 91:417–447.
Shepard R. N. and Metzler J. (1971). Mental rotation of three-dimensional objects. Science 171:701–703.
Shepherd G. M. and Greer C. A. (1998). Olfactory bulb. In Shepherd G. M. (ed.) The Synaptic Organization of the Brain, 4th Edn. New York: Oxford University Press, pp. 159–204.
Sherman S. M. and Guillery R. W. (2006). Exploring the Thalamus and Its Role in Cortical Function. Cambridge, MA: MIT Press.
Sherrington C. S. (1906). The Integrative Action of the Nervous System. New Haven, CT: Yale University Press.
Simon J. R., Hinrichs J. V., and Craft J. L. (1970). Auditory S-R compatibility: Reaction time as a function of ear-hand correspondence and ear-response-location correspondence. J Exp Psychol 86:97–102.
Skinner B. F. (1953). Science and Human Behavior. New York: Macmillan.
Slevc L. R. and Ferreira V. S. (2006). Halting in single word production: A test of the perceptual loop theory of speech monitoring. J Mem Lang 54:515–540.
Slotnick B. M. and Risser J. M. (1990). Odor memory and odor learning in rats with lesions of the lateral olfactory tract and mediodorsal thalamic nucleus. Brain Res 529(1–2):23–29.
Sobel N., Prabhakaran V., Hartley C. A., Desmond J. E., Glover G. H., et al. (1999). Blind smell: Brain activation induced by an undetected air-borne chemical. Brain 122:209–217.
Strack F. and Deutsch R. (2004). Reflective and impulsive determinants of social behavior. Pers Soc Psychol Rev 8:220–247.
Stroop J. R. (1935). Studies of interference in serial verbal reactions. J Exp Psychol 18:643–662.
Suhler C. L. and Churchland P. S. (2009). Control: Conscious and otherwise. Trends Cogn Sci 13:341–347.
Tanaka Y., Miyazawa Y., Akaoka F., and Yamada T. (1997). Amnesia following damage to the mammillary bodies. Neurology 48(1):160–165.
Taylor J. L. and McCloskey D. I. (1990). Triggering of preprogrammed movements as reactions to masked stimuli. J Neurophysiol 63:439–446.
Taylor J. L. and McCloskey D. I. (1996). Selection of motor responses on the basis of unperceived stimuli. Exp Brain Res 110:62–66.
Tham W. W. P., Stevenson R. J., and Miller L. A. (2009). The functional role of the mediodorsal thalamic nucleus in olfaction. Brain Res Rev 62:109–126.
Tham W. W. P., Stevenson R. J., and Miller L. A. (2011). The role of the mediodorsal thalamic nucleus in human olfaction. Neurocase 17(2):148–159.
Thorndike E. L. (1911). Animal Intelligence. New York: Macmillan.
Tinbergen N. (1952). "Derived" activities: Their causation, biological significance, origin and emancipation during evolution. Q Rev Biol 27:1–32.
Tolman E. C. (1948). Cognitive maps in rats and men. Psychol Rev 55:189–208.
Tong F. (2003). Primary visual cortex and visual awareness. Nat Rev Neurosci 4:219–229.
Tononi G. and Edelman G. M. (1998). Consciousness and complexity. Science 282:1846–1851.
Uhlhaas P. J., Pipa G., Lima B., Melloni L., Neuenschwander S., et al. (2009). Neural synchrony in cortical networks: History, concept and current status. Front Integr Neurosci 3:17.
Varela F., Lachaux J. P., Rodriguez E., and Martinerie J. (2001). The brainweb: Phase synchronization and large-scale integration. Nat Rev Neurosci 2:229–239.
Voss M. (2011). Not the mystery it used to be: Theme program: Consciousness. APS Observer 24(6). URL: www.psychologicalscience.org/index.php/publications/observer/2011/july-august-11/not-the-mystery-it-used-to-be.html (accessed February 27, 2013).
Weiskrantz L. (1992). Unconscious vision: The strange phenomenon of blindsight. The Sciences 35:23–28.
Wolford G., Miller M. B., and Gazzaniga M. S. (2004). Split decisions. In Gazzaniga M. S. (ed.) The Cognitive Neurosciences III. Cambridge, MA: MIT Press, pp. 1189–1199.
Zatorre R. J. and Jones-Gotman M. (1991). Human olfactory discrimination after unilateral frontal or temporal lobectomy. Brain 114:71–84.
Zeki S. and Bartels A. (1999). Toward a theory of visual consciousness. Conscious Cogn 8:225–259.
3 A biosemiotic view on consciousness derived
from system hierarchy

Ron Cottam and Willy Ranson

3.1 Biosemiotics
3.2 Consciousness
3.3 Awareness versus consciousness
3.4 From awareness to consciousness
3.5 Scale and its implications
3.6 Hierarchy and its properties
3.7 Ecosystemic inclusion through birationality
3.8 The implications of birationality
3.9 Hyperscale: Embodiment and abstraction
3.10 A hierarchical biosemiosis
3.11 Energy and awareness
3.12 Stasis neglect and habituation
3.13 A birational derivation of consciousness
3.14 Coda

What is biosemiotics? And how is it related to consciousness?

3.1 Biosemiotics
Biosemiotics (from the Greek words bios, meaning "life", and semeion, meaning "sign") is the interpretation of scientific biology through semiotics: the representation of natural entities and phenomena as signs and sign processes. A sign is taken to indicate any entity or characteristic to an interpretant, which may itself be interpretation by an intelligent being or another sign. De Saussure (2002) maintained that a sign is dyadic, consisting of "signifier" and "signified", whose elements must be combined in the brain to have meaning, whereas Peirce (1931–1958) held more generally that any sign process (semiosis) is irreducibly triadic, in terms of representative sign, represented entity and interpretant. Peirce enumerated three categories of experiencing signs:
• Firstness, as a quality of feeling (referenced to an abstraction);
• Secondness, as an actuality (referenced to a correlate);
• Thirdness, as a forming of habits (referenced to an interpretant).

Everything is a sign. Nothing exists except as a sign. "We learn from semiotics that we live in a world of signs and we have no way of understanding anything except through signs and the codes into which they are organized" (Chandler 2012). The term sign does not mean "sign of"; that is, it is not a reference for something else. Redness is a sign:
• The quality or feeling of redness . . . where you are aware of it as something existent but not conscious of it as distinct and red . . . is Firstness;
• When you are aware that there's a distinct and unique experience of redness in my vision . . . that's Secondness. It's distinct; you can define it as unique in itself in that time and space;
• When you are talking about that experience in the abstract or generality, for example, when it's going to be a nice day tomorrow, there is always a redness in the sky at sunset . . . that's Thirdness.1

1 Our thanks to Edwina Taborsky for this example of firstness, secondness, and thirdness.
Peirce (1931–1958) presents yet another classification of signs: as sym-
bol, index, or icon, depending on the relationship between the sign and its
object. Symbols have a conventionally defined relationship (e.g., alphanu-
meric symbols). Indices are directly influenced by their objects (e.g., a
thermometer). Icons have properties in common with their objects (e.g.,
portraits).
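Purely as an illustration (the encoding below is ours, not anything found in Peirce or the semiotic literature), the triadic sign and the symbol/index/icon classification can be written down as a small data structure, making the sign-object-interpretant bookkeeping explicit:

```python
# Purely illustrative encoding (ours, not Peirce's own notation) of the
# triadic sign and the symbol/index/icon classification.
from dataclasses import dataclass
from enum import Enum

class Relation(Enum):
    SYMBOL = "conventionally defined"        # e.g., alphanumeric symbols
    INDEX = "directly influenced by object"  # e.g., a thermometer
    ICON = "shares properties with object"   # e.g., a portrait

@dataclass
class Sign:
    representamen: str  # the representative sign itself
    obj: str            # the represented entity
    interpretant: str   # the interpreting agent, or a further sign
    relation: Relation

examples = [
    Sign("the word 'red'", "redness", "a reader", Relation.SYMBOL),
    Sign("mercury column height", "temperature", "an observer", Relation.INDEX),
    Sign("a portrait", "the sitter", "a viewer", Relation.ICON),
]
for sign in examples:
    print(f"{sign.representamen} -> {sign.obj} ({sign.relation.value})")
```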
A major part of any semiotic process, or semiosis, consists of abductive
reasoning. Abduction (described by Peirce [1931–1958] as "guessing")
is a weak form of inference which is aimed at finding a workable but
not necessarily exclusive maximally plausible hypothesis to explain an
observed phenomenon. It is related to deduction, but where deduction
derives a true causal conclusion from a true precondition, abduction
does not presume a causal link from precondition to conclusion. It is
closely associated with the scientific method, which is always ready to
reinterpret a derived relationship between presumed cause and effect. Its
purview is consequently much wider than that of deduction, as it never
presupposes or requires completeness in an interpreted precondition,
and its concluded hypotheses are always subject to reevaluation in the
light of further experience.
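The contrast with deduction can be caricatured in a few lines of code (ours; the hypotheses and plausibility scores are invented): deduction returns a conclusion guaranteed by its precondition, whereas abduction merely ranks candidate explanations of an observation and adopts the best one provisionally:

```python
# Crude contrast (ours) between deduction and abduction.

def deduce(rule_holds: bool, it_rained: bool) -> bool:
    """Deduction: if 'rain -> wet lawn' holds and it rained, the lawn is
    wet -- the conclusion is guaranteed by the precondition."""
    return rule_holds and it_rained

# Abduction: given an observation, rank candidate explanations by
# plausibility and adopt the best one provisionally (it may be revised).
HYPOTHESES = {"it rained": 0.6, "the sprinkler ran": 0.3, "a flood": 0.01}

def abduce(observation: str) -> str:
    best = max(HYPOTHESES, key=HYPOTHESES.get)
    return f"{observation}: provisionally explained by '{best}'"

print(deduce(True, True))          # True -- certain, given the premises
print(abduce("the lawn is wet"))   # a workable, revisable best guess
```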
Semiotics is related to linguistics, but it extends the meaning of sign to any sensory or existential modality. Morris (1971) added pragmatics to linguistic syntax and semantics, and Pearson (1999) has established the ordering of these three, when applied as operators, to be first syntax, then pragmatics (or context), and only last semantics, mirroring Jakob von Uexküll's (Kull 2001) early biosemiotic focus on an individual's
subjective world of signs (Umwelt) within a universal environment. This focus on biological context makes (bio)semiotics cut across established scientific disciplines with its emphasis on how living beings subjectively perceive their environment and how this perception determines their behavior (von Uexküll 1992). A sign carries meaning. To characterize the relationship between meaning-perception and meaning-utilization, or operation, von Uexküll developed the formulation of a "functional circle", anticipating the system feedback principle of Wiener (1954) by twenty years (von Uexküll 1992). Biosemiotics functions as a trans-disciplinary description of all of Nature, and "transcend(s) the conceptual foundation of the other natural sciences" (Brier 2006), suggesting that it is by far the best medium within which to assess consciousness.

3.2 Consciousness
Consciousness is notoriously difficult to define, especially as it must be
attempted from within consciousness itself. Vimal (2009) has presented
an overview of numerous meanings attributed in the literature to the term
consciousness, grouping them according to the two criteria of function ("easy problems", such as detection, discrimination, recognition, cognition, etc.) and experience (i.e., aspects of the "hard problem" [Chalmers 1995]). In general, the easy problems could be carried out by a suitably
programmed digital computer. However, all of the lowest-level processing
elements of a digital computer (the gates) are prevented from inter-
acting with each other by the computers clock signal, which is intended
to synchronize their (local) operations. This means that any gate's unde-
sirable physical characteristics are completely isolated from those of all
the others, and individual gates are only capable of reacting according to
the large-scale (global) design imposed by the computers manufacturer
and programmer. Consequently, in its instantiation as an information
processor, a digital computer itself has no unified global character at all.2
We question, therefore, whether function is a suitable criterion for the
unified nature of consciousness, while noting that function is, in many
cases, driven by intention, itself a current content of the evolved state
of experience.
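The point about clocking can be made concrete with a highly idealized toy simulation (ours): in synchronous logic, every gate computes its next output solely from the latched outputs of the previous tick, so no gate's momentary physical behavior enters another gate's computation within a tick:

```python
# Idealized synchronous logic (ours): a ring of three NOT gates. On each
# tick every gate computes its next output solely from the *latched*
# previous state, so gates never interact within a tick.

def tick(latched: list[int]) -> list[int]:
    """Next state: each gate outputs NOT of its predecessor's latched value."""
    n = len(latched)
    return [1 - latched[(i - 1) % n] for i in range(n)]

state = [0, 0, 0]
for t in range(4):
    print(t, state)       # 0 [0,0,0] -> 1 [1,1,1] -> 2 [0,0,0] -> ...
    state = tick(state)
# The clock imposes a global update order; any within-tick (analog)
# behavior of an individual gate is invisible to the others.
```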

2 Local and global are words which may have very different meanings in different situations.
By local we mean: not broad or general; confined to a particular location or site; spatially
isolated. In its extreme form, local reduces to a dimensionless spatiotemporal point, or a
closed impenetrable logic system. By global we mean: all-inclusive; relating to an entire
system; spatially unrestricted. In its extreme form, global expands to simultaneously
encompass the entire Universe as nonlocality.
Vimal (2008, this volume, Chapter 5) has developed a model of con-
sciousness in terms of proto-experience and subjective experience. If we relate
these terms to Peircian semiotics, proto-experience appears to be associ-
ated with firstness, and subjective experience appears to conflate secondness
and thirdness (Merrell [2003] has presented an extensive overview of the
relationships between Peircean semiotics and first-person experience).
Vimal has proposed three different hypotheses for a relationship between
objective aspects of matter, proto-experience (PE), and subjective expe-
rience (SE), which he sums up (Vimal 2009) as:
1. Matter may be the carrier of both PEs and SEs;
2. (Matter) may carry PEs only, with the emergence of SEs in the course
of neural evolution;
3. The three may be ontologically inseparable, though possessing differ-
ent epistemic aspects.
We would tend to a position somewhere between hypotheses 1 and 2. To
quote from an interview with David Bohm (Weber 1987):

Bohm: I would say that the degree of consciousness in the atomic world is very low, at least of self-consciousness.
Weber: But it's not dead or inert. That is what you are saying.
Bohm: It has some degree of consciousness in that it responds in some way, but it has almost no self-consciousness (the italics are Weber's).
...
Weber: you are saying: This is a universe that is alive (in its appropriate way) and somehow consciousness at all the levels (the italics are Weber's).
Bohm: Yes, in a way.

This implies that a low level of consciousness is all-pervasive (Cottam et al. 1998a). We suggest that Newton's laws are our human formalization of a low-level entity's consequent aware pursuit of the maintenance of
its identity. However, we would prefer to denote this low-level character
by the word awareness, rather than by consciousness, for three reasons.
Firstly, and most importantly, we associate consciousness with the dense
networked information processing of the brain. This aspect is absent from
the elementary constituents of Nature. Secondly, human consciousness
can be turned on and off, during sleep or anesthesia, which does not
appear to apply to Newton's laws. Thirdly, we feel that the attribution of
consciousness to elementary particles would be uncomfortable to most
people, while that of a primitive awareness would be less so.

3.3 Awareness versus consciousness


We believe that this low level of awareness is a part of the essential mat-
ter of the universe itself, as an intimate constituent of the primitive
character recognized by Peirce (1931–1958) as firstness. This identifica-
tion can be associated with one or other interpretation (e.g., Schopen-
hauer 1818; Pauli and Jung 1955; Nagel 1986; Polkinghorne 2005; Vimal
2008) of the philosophical position of dual-aspect monism, following on
from Spinozas (1677) metaphysics, where a single reality3 can express
itself in one or other of two different forms, for example, as mind or
matter, which are irreducible to each other or to any other entity. The
association of awareness with the essential matter of the universe cor-
responds to Vimal's (2009) first hypothesis above: "matter may be the carrier of both PEs and SEs". However, it is insufficient on its own to
explain the high-level conceptual consciousness that humans experience.
The capability to switch off consciousness through the chemical activity
of anesthetics on the brain makes it difficult to avoid the conclusion that
this high level of consciousness is generated through the brain's activity. This, then, corresponds to Vimal's (2009) second hypothesis: "(matter) may carry PEs only, with the emergence of SEs in the course of neural evolution".
The position we will adopt throughout this chapter is of a universe
which is characterized by a basic all-pervasive low level of awareness that
can be expanded through information-processing network complexity4
to multiple higher degrees of consciousness (Cottam et al. 1999). The
attribution of awareness to the primary nature of the universe is in no way an "easy option" which removes the "hard problem" of conscious experience; it merely moves the problem elsewhere: how can we explain
the development of high levels of conceptual consciousness from their
low-level precursor of awareness? This, then, is our task. But first we
must provide a definition of consciousness as we envisage it. We hesitate
to include the word subjective in such a definition, as subjectivity itself
requires consciousness. Our working definition is related to Metzinger's (2004) statement that:
What we have in the past simply called "self" is not a non-physical individual, but only the content of an ongoing, dynamical process, the process of transparent self-modeling:
Consciousness is the active mental processing of the existential cohesive
correlation between a subjects transparent self-image and its genetically-
sourced and historically-experienced proxy representation of the environ-
ment.

3 We view reality as a supposedly concrete mentally stabilizing construction which sup-
ports both individual and society: "reality is known through a symbolically mediated
coordination of operative interactions, both within and between persons" (Chapman
1999).
4 We use the word "complex" in its Rosennean sense (Rosen 1991), to imply an inability
to capture all of a system's properties with other than an infinite number of formalisms.
This definition is closely related to Damasio's (1999) hypothesis of core
consciousness, which is generated from the relationship between the self
and environmental stimuli. Merker (2007) has argued that primary con-
sciousness is associated with the brain stem; a view which is supported
by the finding of Långsjö et al. (2012) that the emergence (Cottam
et al. 1998b; Cottam et al. 1999; Van Gulick 2001) of consciousness on
returning from general anesthesia is associated with the activation of a
core network, including subcortical and limbic regions of the brain. We
concur with Damasio (1999) who has argued that consciousness exhibits
a hierarchy of degrees (Wilby 1994), each depending on its predeces-
sor, from core to extended consciousness. In common with Bohm (Weber
1987: see previously) we view primitive awareness as a non-recursive
phenomenon, whereas higher-level consciousness is recursive, displaying
self-consciousness (Laing 1960; Morin 2006) as "the consciousness of
consciousness itself" (Edelman 2003): this corresponds to the argument
we will present in this chapter.

3.4 From awareness to consciousness


Consciousness, then, is in our view a result of embodiment, expanded
from a primeval universal level of awareness through the evolution of liv-
ing organisms to the neural networks of the brain. In our interpretation
life5 adopts the role of a tool used by awareness to further its propagation
to higher levels of consciousness (Cottam et al. 1998a, 1999). Peirce's
firstness presumes a primordial awareness,6 and we could suggest that
the progression through his secondness to thirdness would correspond to
the hierarchical development of consciousness from awareness. But this
raises a difficult question, maybe the difficult question, of where such a
development takes place. Biological evolution neatly explains how prim-
itive organisms have developed into complex mammals, but leaves this
question aside; the presumption is that there is a neural substrate which
corresponds in a one-to-one manner to the presumably abstract nature
of consciousness.

5 Definitions of life are many and varied, from the philosophical to the purely biological.
Most of these are to some degree self-referential, being based on criteria which delineate
the recognizably living from the recognizably non-living (e.g., metabolism; reproduction).
The definition we will adopt here is an extension of the biosemiotic one (Sebeok 1977)
that life and semiosis are equivalent: that living entities are capable of interpreting signs,
of signaling, and of self-sustenance.
6 "The immediate present, could we seize it, would have no character but its Firstness.
Not that I mean to say that immediate consciousness (a pure fiction, by the way) would
be Firstness, but that the quality of what we are immediately conscious of, which is no
fiction, is Firstness" (Peirce 1931–1958).
So, can we consider that consciousness exists, or should we only refer to
its presumed substrate? This is the realm within which we must navigate,
and decide whether to adopt the materialist position of traditional science
or to strike out into the unknown of new considerations of existence
and what they entail. We are left struggling on the horns of a dilemma.
Or are we? Pirsig (1974) has described a similar situation, but suggested
yet another solution: to go directly between the horns themselves. In
this chapter we will adopt Pirsig's route. We will indeed suggest that
consciousness is related to a physical substrate, but we will submit
that, in common with the unification of any differentiable entity, this
physical substrate is not uniquely material. This reality of unification
bridges the conventionally accepted philosophical gap between material
and abstract in a manner which mirrors the included (as opposed to excluded) middle
of Lupasco (Brenner 2010), Brenner (2008), and Cottam et al. (2013).
If we are to clarify the development of a hierarchical consciousness,
from either an evolutionary or a sensory point of view, we need to be clear
about the properties of biological hierarchy itself (Salthe 1985), and con-
sequently about the occurrence of scale in biological systems (e.g., Ravasz
et al. 2002).7 Conventional science accepts readily that natural phenom-
ena are different in domains which are characterized by different sizes or
degrees of complexity (that is, different scales), but until recently no gen-
eral theory of multiple scales in a unified environment has been available.
This lack has now been rectified, with the publication of work on autocre-
ative hierarchy (Cottam et al. 2003, 2004a,b). Scale and its associated
hierarchy take up prime position in the development of organisms, where
informational and phenomenological differences across scales can be
enormous. Conversely, although inanimate entities, for example, crystals
(Cottam and Saunders 1973), do show differences between their micro
and macro properties, these are relatively insignificant. The predictability
of an entity's behavior appears to be qualitatively inversely proportional
to these cross-scalar8 informational differences (e.g., Vespignani 2009).
The sustainability of a hierarchical architecture depends on the degree
to which correlation between the different scales is complete (Cottam
et al. 2004a), that is on the degree of information integration across the
entire assembly of scales. In the light of the conjecture that there is a

7 By scale, here, we refer to size-related levels of organization in organisms: for example, at
the biochemical level, at the cellular level, at the tissue level, at the level of the organism
as a unified entity. This is not the same as scaling, which refers to "empirical scaling laws
that dictate how biological features change with size . . . such fundamental quantities as
metabolic rate . . . and size and shape of component parts" (West et al. 2000).
8 It should be noted that the word scalar in this work is the adjectival form of scale; it does
not refer to the mathematical concept in vector spaces of a scalar.
strong connection between information integration and consciousness
(e.g., Tononi 2004; Schroeder 2012), this, then, provides us with a fea-
sible starting point for our navigation: that of the unifying integration of
information across scales in a biological hierarchy.
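
To make this starting point concrete, the following Python sketch (a toy
construction of ours, not the measure proposed by Tononi (2004) or by
Cottam et al. (2004a)) treats each scale of a hierarchy as a discrete
state signal and takes the mean pairwise mutual information between
scales as a crude index of cross-scalar integration:

import itertools
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    # Plug-in estimate (in bits) of the mutual information between two
    # equal-length symbol sequences.
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def integration(scales):
    # Mean pairwise mutual information over all pairs of scale signals.
    pairs = list(itertools.combinations(scales, 2))
    return sum(mutual_information(a, b) for a, b in pairs) / len(pairs)

random.seed(1)
base = [random.randint(0, 1) for _ in range(5000)]
# A well-correlated hierarchy: each scale is a noisy copy of its neighbor.
correlated = [base]
for _ in range(2):
    correlated.append([b if random.random() < 0.9 else 1 - b
                       for b in correlated[-1]])
# An uncorrelated assembly: the "scales" are generated independently.
independent = [[random.randint(0, 1) for _ in range(5000)] for _ in range(3)]

print(f"integration, correlated scales:  {integration(correlated):.3f} bits")
print(f"integration, independent scales: {integration(independent):.3f} bits")

On this toy measure the correlated assembly scores far above the
independent one, mirroring the suggestion that the sustainability of a
hierarchical architecture tracks the completeness of correlation across
its scales.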
Our eventual hypothesis follows closely the definition of consciousness
which we have presented previously: that consciousness emerges from the
interaction in the brain between two unified models, one of the subject's
embodied self-image, the other of the subject's evolving external envi-
ronment. Matsuno (2000) has described the observation of one entity by
another as a "mutual measurement", and we believe that mutual mea-
surement or observation between the two unified models condenses to
the quasi-duality of self-consciousness and environmental consciousness
which characterizes dense information-processing neural networks.
The plan we will follow in establishing this hypothesis is as follows. We
will first establish the basic properties of scale, particularly with respect
to living organisms (Section 3.5: Scale and its implications). This sets up
the basic characteristics of a generalized hierarchical model (Section 3.6:
Hierarchy and its properties). We will extend this hierarchical model to
match the ecosystemic view of organisms in nature by unpacking the
logic upon which it is based to an entity/ecosystem complementary pair
of logics (Section 3.7: Ecosystemic inclusion through birationality), followed
by a discussion of the resultant architecture (Section 3.8: The implications
of birationality). The unifying correlation of all the scales of an organisms
hierarchy results in a previously unnoticed scale-free self-representation
we refer to as hyperscale an embodied abstraction which corresponds to
the organisms inclusive character (Section 3.9: Hyperscale: embodiment
and abstraction). This leads us to the relationship between hierarchy and
biosemiotics (Section 3.10: A hierarchical biosemiosis), and to the relation-
ship between energy and awareness in the construction of a hierarchical
model (Section 3.11: Energy and awareness). Next we address the problem
of information overload in the brain and its solution (Section 3.12: Stasis
neglect and habituation) before presenting the detail of our main hypoth-
esis (Section 3.13: A birational derivation of consciousness). We finish with
some prima facie evidence (Section 3.14: the Coda).
It is important to note that the hypothesis we are developing through
this chapter is one which artificially presupposes a capability to construct
any morphological detail at any moment; it does not take account per se
of evolution, which must always build upon what has been selected previ-
ously. Evolution is adept at scavenging established functional character-
istics from previous generations and re-using them for quite a different
purpose: "What serves for thermoregulation is re-adapted for gliding;
what was part of the jaw becomes a sound receiver; guts are used as
lungs and fins turn into shovels. Whatever happens to be at hand is made
use of" (Sigmund 1993). Consequently, although we believe that
nature tends towards the hierarchical relationships we describe, this may
not always be apparent in the results of evolution. Gilaie-Dotan et al.
(2009), for example, have demonstrated non-hierarchical functioning in
the human cortex.

3.5 Scale and its implications


How do we resolve the dichotomy of accepting that different sizes may
imply different phenomena, but that different sizes in an organism are
intimately connected together?
Matsuno's (2000) description of observation as a "mutual measure-
ment" extends the now conventional view of observer-interference in
quantum mechanics to a general principle that it is impossible to act
on a system without being acted on oneself. To effectively implement
this idea we must invoke not only a conventionally scientific externalist
third-person perspective but also the internalist first-person perspec-
tive of every entity involved in such a mutual measurement.9 While
this kind of collaboration may be approximately irrelevant to macro
inanimate interactions, it is certainly not so for organisms. Specifically in
relation to our current quest, we must look at obviously differently sized
living structures as measuring instruments with limited perceptional
bandwidths.
Figure 3.1 indicates how differing sensitivities within limited band-
widths may partially isolate different-sized structures from each other
in a common environment. This partial nature is the key to resolving
our dichotomy: different scales (which these are) are partially isolated
from each other and partially inter-communicating: there is finally no
dichotomy! Additionally, partial scalar isolation permits local structures
to develop individual identities and properties, while remaining part of
the entire organism system. This makes it possible for survival-necessary
functions to be distributed advantageously between the different scales
of an organism, providing an exchange of autonomies (Ruiz-Mirazo and
Moreno 2004). It should be noticed that this kind of partial isolation can
also operate between elements at the same scale, for example, between

9 We use the words externalist and internalist here in a general philosophical sense, indi-
cating a view from outside or a view from inside a system, respectively, and not in
relation to the positions in theory of mind of externalism or internalism, which refer to the
neural-plus-external or purely neural origin of the mind.

Fig. 3.1 Limitation in the perceptional bandwidth of differently sized
perceptional structures within the same environment causes them to be
partially isolated from each other. (Figure: sensitivity plotted against
size, with regions of partial inter-communication where the sensitivity
windows of differently sized perceptional structures overlap.)

adjacent cells in an organism, providing a locally rich structure of auton-
omy exchange as well as a globally rich one: at the macro scale, for
example, Collier (1999) has proposed that the brain has ceded metabolic
autonomy to the body in exchange for extended information-processing
autonomy.
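
Figure 3.1 also invites a simple quantitative caricature. In the
following Python sketch (the Gaussian sensitivity windows and the
overlap index are purely our illustrative assumptions, not part of the
cited works) each differently sized structure is given a sensitivity
window over size, and the normalized overlap of two windows serves as
an index of their partial inter-communication:

import math

def window(x, center, width=1.0):
    # Unit-height Gaussian "sensitivity window" over size.
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

def inter_communication(center_a, center_b, width=1.0,
                        lo=-10.0, hi=10.0, steps=2000):
    # Normalized overlap of two windows: 1.0 = fully shared bandwidth,
    # 0.0 = complete mutual isolation.
    dx = (hi - lo) / steps
    shared = sum(min(window(lo + i * dx, center_a, width),
                     window(lo + i * dx, center_b, width)) * dx
                 for i in range(steps))
    single = sum(window(lo + i * dx, 0.0, width) * dx for i in range(steps))
    return shared / single

for separation in (0.0, 1.0, 2.0, 4.0, 8.0):
    print(f"size separation {separation:3.1f}: "
          f"inter-communication {inter_communication(0.0, separation):.3f}")

Neighboring sizes communicate strongly, widely separated ones hardly
at all; partial isolation is simply the middle ground between these
extremes.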
An important feature we must take account of is complementarity: the
interconnection and interdependence of polar opposites or apparently
conflicting forces in the natural world. The concept of unification of
opposites was first suggested by Heraclitus (see Kahn 1979), follow-
ing Anaximander's (see Kahn 1960) proposal that every element was
defined by or connected to its opposite. Taking account of the Aris-
totelian (2012) imprecision of our knowledge of nature, we can establish
a one-dimensional scale for any acceptably descriptive parametric10 char-
acteristic, within which it only exists between its two perfectly defined
Platonic opposite precursors (see Kahn 1996). An excellent example of
this can be found in the description of light, which according to current
scientific belief exists as photons of measurable microscopic size between
the two unattainable dimensional extremes of perfectly localized point-
entities and perfectly nonlocal single frequency optical waves. This cor-
responds to the description of dual-aspect monism we presented earlier,
where a single reality can express itself in one or other of two differ-
ent forms which are irreducible to each other or to any other entity, to
dual-aspect monism's high-level analog of mind and brain, and to the

10 Parameter: "an arbitrary constant whose value characterizes a member of a system"
(Merriam-Webster).
complementarity of self-image and proxy environment which appeared
in our working definition of consciousness.
As soon as we evoke a multi-parametric description of nature we
are locked into a scheme of multiple coupled complementarities whose
paired dimensional extremes are always outside our reality. A corollary
is that any apparent complementarity is only real if there are observ-
able intermediate conditions. Also, any region between complementary
extremes exhibits complicated fractal11 structure which invokes com-
plexity as a diffuse coupling medium between less diffuse semiotic scales
(Cottam et al. 2004b). This makes semiotic signs appear like reliable
islands in a sea of vagueness. We are not permitted to insist that the
different scales of a single organism operate following one and the same
logic: the rules for relationships between biochemicals are very different
from those for relationships between biological cells, for example. Nor
can we assert that the partial communication between adjacently sized
scales must take place following a reversible logic.
The simplest example of this difficulty is found in the basic arithmetic
procedure 1 + 1 = 2. Moving from left to right, degrees of freedom
are lost to the participants (if any digit has n degrees of freedom, then
the left-hand side of the equation is exemplified by 2n degrees, and the
right-hand side by only n). Consequently, although arithmetic rules nor-
mally provide for operational symmetry, in reality we cannot return easily
from right to left. This same difficulty appears between adjacent scales
of an organism, where higher scales (for example, that of an organism
itself) require less information to describe them than do lower ones (for
example, the organism exemplified by a collection of cells). However, this
informational reduction at higher biological scales results in a fundamen-
tal advantage when compared, for example, to a digital computer. The
more complication we add to a digital computers circuitry, the slower it
will operate. The opposite is the case, however, for an organism, where
higher scales operate faster than lower ones, providing a survival advan-
tage in complex environments (Cottam et al. 1998b).
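
The degrees-of-freedom argument can be checked mechanically. The
following sketch (ours; the state ranges are arbitrary) shows that
aggregation, here summation, is a many-to-one mapping, so the move
from left to right cannot be reversed without additional information
about the lower scale:

# Two lower-scale participants, each with four possible states.
low_level_states = [(a, b) for a in range(4) for b in range(4)]
# The higher-scale description retains only their sum.
high_level_states = {a + b for a, b in low_level_states}

print(f"lower-scale states:  {len(low_level_states)}")   # 16
print(f"higher-scale states: {len(high_level_states)}")  # 7

# Returning from right to left fails: a higher-scale state has several
# lower-scale preimages, so the detail cannot be uniquely recovered.
print([pair for pair in low_level_states if sum(pair) == 2])
# -> [(0, 2), (1, 1), (2, 0)]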
So, irrespective of locally scaled advantages, there is an overall gain
in accumulating a number of different scales, at least up to some
optimal maximum beyond which increasing complexity would hamper
cross-scalar coordination: it is notable that the number of different scales
associated with animals is independent of size, whether this is for a mouse
or one of the largest dinosaurs. A question which now arises is whether

11 The property of a fractal we call upon here is that of detailed recursive self-similarity
across endless magnifications of the internal structure of the complex inter-scalar layer.
this gain can be simply associated with scale itself, or whether there are
further properties associated with a scalar assembly, or hierarchy?12

3.6 Hierarchy and its properties


The concept of hierarchy appears in many and varied contexts. It is most
widely encountered in the context of social (Macionis 2011) or business
(Marlow and OConnor Wilson 1997) organizations. Salthe (1985) has
specified that there are only two kinds of hierarchy: the scale hierarchy
(e.g., atom, molecule, biomolecule, biological cell, organism, . . . ) and
the specification13 hierarchy (e.g., physics, chemistry, biochemistry, biol-
ogy, . . . ). We would disagree, in that we believe that a model hierarchy
best describes the generality of entities which compose the universe. A
second point of difference is that Salthe maintains that hierarchy is merely
the construction of human minds, whereas we believe that it is a primary
feature of the evolution of natural systems.
But what is the nature of a model hierarchy? A simplistic example is
that of a tree, which may be specified at a number of levels14 as {a tree
consisting of atoms}, {a tree consisting of molecules}, {a tree consisting
of cells}, . . . , up to {a tree as itself}. Each of these scalar levels of descrip-
tion constitutes a model of the same entire entity of a tree, from its
representation in terms of its most primitive constituents up to its most
complete form. Each level can be defined in terms of two kinds of
information: containing information (that which is required to describe
the level itself) and contained information (that which is subsumed at the
level in terms of more primitive constituents). This starts to look a little
like Salthe's specification hierarchy, and as Salthe himself has noted,15 it
resembles in some ways a specification hierarchy constructed in terms of
scale.
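
The tree example can be rendered as a small data structure. In the
sketch below (the class and all the numbers are illustrative
assumptions of ours, not measured quantities) each level of the model
hierarchy carries its containing information, with the balance of the
total subsumed as contained information:

from dataclasses import dataclass

@dataclass
class Level:
    name: str
    containing_bits: float  # information describing this model level
    contained_bits: float   # information subsumed in more primitive parts

TOTAL = 1e9  # arbitrary figure for the full atomic-scale description
tree_hierarchy = [
    Level("{a tree consisting of atoms}",     TOTAL, 0.0),
    Level("{a tree consisting of molecules}", 1e7, TOTAL - 1e7),
    Level("{a tree consisting of cells}",     1e4, TOTAL - 1e4),
    Level("{a tree as itself}",               1e2, TOTAL - 1e2),
]

for level in tree_hierarchy:
    print(f"{level.name:36} containing={level.containing_bits:>13,.0f} "
          f"contained={level.contained_bits:>13,.0f}")

Moving towards {a tree as itself} the containing information shrinks
while the contained information grows; this informational discrepancy
is taken up again in Section 3.7.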
The reader's attention is drawn to the similarity of this description to
the depiction of a multi-scaled system in Fig. 3.1. Indeed, it is more
than a similarity. Figure 3.1 can be interpreted as an illustration of the
relationships between different levels of a multi-scaled biological system.

12 The reader should note that, given the partial nature of inter-scalar and intra-scalar
communications, heterarchy is subsumed into hierarchy as we will describe it.
13 Salthe has now renamed the scale hierarchy a composition hierarchy, and the specification
hierarchy a subsumption hierarchy (Salthe 2012).
14 In the literature there is often a distinction made between the scales in terms of physical
size and the levels in terms of function of a hierarchical description. Although this
distinction has value for either a scale or specification hierarchy, it disappears for a model
hierarchy, which can be successfully used to represent either structure or function.
15 Private communication.

Fig. 3.2 The representation of a multi-scalar hierarchy. A conventional
top-down corresponds here to right-to-left, and vice versa. (Figure:
vertical lines of decreasing length, from more complicated models at
the left to simpler models at the right, separated by complex
inter-model regions; containing information increases to the left,
contained information to the right.)

Each model-level is both partially isolated from and partially communi-
cating with its neighbors, as we described earlier. Figure 3.2 shows our
chosen representation for this kind of hierarchy. Each level is indicated by
a vertical line, whose length denotes the containing information required
to describe the model at that level. Conventionally, a hierarchy would
be illustrated in a top-down manner on the page, but here the usual
"top-down" corresponds to our "right-to-left", to avoid any automatic
presumption that a specific level is superior to all the others. In relation
to our description of a tree, the left-hand side corresponds to {a tree
consisting of atoms}16 and the right-hand side to {a tree as itself}.
So, how does this specification of a model hierarchy help us? In com-
mon with conventional descriptions of the birth of the Universe through
a Big Bang, and in comparison with a scale hierarchy, we would expect
the evolution of higher scalar levels to be from lower levels, rather than
the inverse. Given the pre-existence of atoms, and suitable conditions,
molecules can have formed through atom combinations, enabled by pre-
existing atomic propensities (Popper 1990). These propensities them-
selves will have been pre-defined by yet lower organizational levels. In
general, therefore, a new higher level must be to some extent coordinated
with its pre-existing lower one, that is, partially in communication with
it, both bottom-up from it and top-down to it. Often the bottom-up
creation of a new level is referred to as emergence (Goldstein 1999), and
the consequent top-down influence as slaving (Haken 1984), or down-
ward causation. We can apply a similar argument to relations between any
pair of adjacent levels, and consequently, in a unified hierarchy, every

16 N.B. For simplicity we have not explicitly taken account of subatomic levels in this
sequence.
scalar level communicates directly or indirectly with every other level.
By "unified" here we imply that our initial condition holds, namely that
every level is a representation of the same entity. This means that there
is a hierarchy-wide coordination coupling together not only all the scalar
levels but also, directly or indirectly, all the constituents of those levels.
The hierarchy-wide coordination of a unified hierarchy is initially
apparently (scientifically) abstract, a figment of our imagination, but it
is real . . . it is the real nature of the entity which is described. We cannot
emphasize this sufficiently: the unification of a multi-scalar system is its
real nature!
We must remember that individual scales of a model hierarchy are par-
tially isolated from each other and bidirectionally communicating, which
places their description in second-order cybernetics.17 This means that
there is no way that a conventional scientific third-person perspective can
be accurately formed. Depending on where we decide to stand and look
at a given scale, we must either accept a vague, incomplete view (from
completely outside the hierarchy, or from another scale) or accept that
our view is a first-person one (from the same scale)! This automatically
makes nonsense of the usual all-encompassing third-person prescriptions
of scientific modeling. Neither of these two points of view is exclusive,
and this points to a primary characteristic of unified hierarchy: that
there is no single "fits all" description of this kind of system,18 except in
its quasi-external unification, which we will refer to as hyperscale (Cottam
et al. 2006). This quasi-external quasi-third-person viewpoint consists of
approximate models of all the internal scales in a self-consistent single
unified super-model which is the real nature of the system when viewed
from outside.
The super-model should more correctly be referred to as a glob-
ally related sign, constituting a vaguely defined hierarchical collection of
locally referred signs. Aristotle (2012) proposed that there are four ways,
or causes, in which change can be provoked: material, formal, efficient,
and final cause. The most important of these, the final cause, relates to
the overall purpose or goal of change. In Aristotles terminology, hyper-
scale can be associated with an imprecisely defined, globally related final
cause, which is assembled from the final causes of individual scalar lev-
els, and of their constituent elements. The partial nature of enclosure

17 Cybernetics is the investigation of the structures of regulatory systems. In second-order
cybernetics the investigator must take account of his or her involvement in the system
being investigated itself, bringing to the fore questions of self-referentiality (von Foerster
2003).
18 Which, of course, raises the question of whether there could ever be any other kind of
system . . .
and process closure for an individual scale resolves the "open or closed
system?" dichotomy; vagueness of the scalar representations in hyperscale
resolves the "this scale or that scale?" multiple operational partitioning
of a multi-scalar system.
We should now begin to distinguish between inanimate and animate
entities, which until now we have lumped together. The distinction from a
structural assessment is one of degree, of the extent to which inter-scalar
communications are more or less complete. In a clearly inanimate entity,
for example, a crystal, the inter-scalar informational differences are so
limited that microscopic and macroscopic properties approximately coin-
cide. We say "approximately", because there are no real physical systems
where microscopic and macroscopic properties precisely coincide. Even
for crystals, the archetypical inanimate entities, minute differences in
cross-scalar measurements appear (Cottam and Saunders 1973). Organ-
isms, however, exhibit enormous inter-scalar informational divergences,
making first-person viewpoints dominate third-person ones and making
the definition of properties elusive. The reader should note that we are
here in no way belittling the importance of other observable character-
istics of living systems, merely relating their implications to inter-scalar
differences.
The most important aspect of hyperscale to an organism is the way in
which it provides a means for the organism to exhibit itself to the outside
world; it may set up a facade which corresponds more or less to different
high- or low-scalar properties. A cell, for example, tries to close itself
off from the outside world by a lipid barrier, while permitting specific
survival-related in- and out-ward communications.

3.7 Ecosystemic inclusion through birationality


Since the quantum mechanical establishment of observer-dependence at
the beginning of the twentieth century, biology has led the way towards
raising the importance of environmental effects on system properties, that
is, of ecosystemics. This has reinforced the position taken earlier by Bohr (1998),
among others, but it should not be assumed that ecosystemic ideas should
solely apply to biological ecosystems. The major problem, however, is
to see how to apply ecosystemic ideas to other domains.
Scientific endeavor takes place in and around formal models. These
models are dependent on a higher-level paradigm, within which they
exist, for example, the Newtonian paradigm, or quantum mechanics.
So far, so good. Going one step further, the ecosystemic paradigm
resorts to a complementary pair of sub-paradigms; one for an organ-
ism, another for its environment. But there is another, yet higher level of
definition required: that of the logic within which a paradigm is con-
structed. Cottam et al. (2008a) have proposed that a universal ecosys-
temic description can be created by shifting the binary complementary
nature of the ecosystemic paradigm up to the level of logic itself, pro-
ducing a complementary pair of logics and rationalities:19 a birational
system. But how would this relate to the multi-scalar hierarchy we have
described?
Figure 3.2 portrays the containing information content of the different
scales of a unified hierarchy. We must remember that each scalar level rep-
resents a different model of one and the same entity, and that therefore as
we move towards the right-hand side and through models of less and less
containing information there is a progressively greater and greater infor-
mational discrepancy, corresponding to the concealed contained infor-
mation. This discrepancy has the character of hidden variables,20 and
it constitutes the (internal) environmental information through which
the scalar models are derived (Cottam et al. 2008b). In Fig. 3.2 this
contained information would appear between the different model levels,
providing in each case a hidden or implicate ecosystem for the following
explicit or explicate model.21 Each of these differently scaled ecosystems
is related to its equivalently scaled model in a manner which is reminis-
cent of Jakob von Uexküll's (Kull 2001) multiple biosemiotically differ-
ent Umwelten (individually experienced surrounding worlds) for multiple
biological species within one and the same physical environment.
Figure 3.3a illustrates the hierarchy, including these intermediate
ecosystemic layers. If the model collection is successfully unified through
correlation, then the resulting set of ecosystemic layers forms a second
hierarchy, as shown in Fig. 3.3b, which has very different properties
from the first one (Cottam et al. 2000). Whereas progression through
the first hierarchy (Fig. 3.3a) is reductive (Oppenheim and Putnam
1958; Barendregt and van Rappart 2004) towards localization (towards
the right-hand side), progression through this new one is expansive,
or reductive in its own way, towards non-localization (towards the
left-hand side). Figure 3.3c indicates the ecosystem-model pairings. Each
ecosystem-model pair completely describes the entity at that scale, and
the less the containing information of a scalar model, the greater the
contained, or hidden, associated ecosystemic information.

19 In this work we refer to logic as a set of rules which may/must be followed, and rationality
as the path of signs towards a desired end which is followed using logic.
20 The De Broglie-Bohm interpretation of quantum theory (Bohm and Hiley 1993) has
been described in terms of hidden variables which must be added in to standard quantum
theory to make it nonlocally self-consistent and complete.
21 Following David Bohm's nomenclature for hidden order and for explicit order.

Fig. 3.3 Hierarchical complex layers and scaled-model/ecosystem pair-
ings. (a) The hierarchy, shown including the intermediate complex lay-
ers. (b) The second hierarchy consisting of the complex layers. (c) The
individual scalar model/ecosystem pairings. (Panel labels: individual
model levels; entity model hierarchy; model/ecosystem pairs;
intermediate complex regions; model ecosystem hierarchy.)

We now have two quasi-independent hierarchical subsystems, whose
individual properties correspond to the dual logics and rationalities of
the birational system we targeted earlier. All natural unified hierarchical
systems have this form (Cottam et al. 2000). In essence, a natural ecosys-
temic birational hierarchy is very different from an artificially established
one, such as that which has traditionally been used to describe many large
business enterprises, in which communication between the different hier-
archical levels is possibly totally asymmetrical (e.g., only top-down) or
uncaring of the short-term states of the levels. Most particularly, the
scales of a natural ecosystemic birational hierarchy are internally gen-
erated and adjusted through the overall correlation of a multiplicity of
first-person perspectives, instead of being externally imposed as in an
artificially established hierarchy.

3.8 The implications of birationality


The establishment of this kind of natural ecosystemic birational hierarchy
drags us into an acceptance that "everything depends on everything
else". An immediate objection would be that this makes it impossible to
ever be sure of anything which is true in an absolute sense, even if relatively
moderated by context.22 We pointed out earlier that even crystals show
differences between their micro and macro properties, although these are

22 This assertion is related to Heisenberg's uncertainty principle in quantum mechanics
(Busch et al. 2007), which sets out pairs of an entity's properties where the precision of
definition of one is inversely dependent on the precision of definition of the other.
relatively insignificant: it is the degree to which knowledge can be precisely
ascribed to a specific context that must be questioned. This context-
dependent view of knowledge corresponds in a birational hierarchy to
the context-dependent stability of its constituent scales, which depends
on the relationship between two aspects of the inter-scalar information
transfer. Information that changes rapidly in time will inject novelty or
instability from a scalar level to its higher or lower neighbors; informa-
tion that only changes very slowly will contribute to stabilization of the
entire hierarchy. The long-term stability of crystals indicates that their
information transfer across scales must essentially consist of structural
order and not novelty; the dynamic quasi-stability of organisms suggests
that for them novelty is of great importance.
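
The split between stabilizing and novel inter-scalar information can
be caricatured numerically. In this sketch (our construction; the
signal and window length are arbitrary) inter-scalar traffic is
treated as a time series whose slowly varying component stands for
structural order and whose residual fluctuation stands for injected
novelty:

import math

# A synthetic inter-scalar signal: a slow structural drift plus a
# faster, novelty-carrying fluctuation.
signal = [math.sin(0.05 * t) + 0.4 * math.sin(1.7 * t) for t in range(200)]

def moving_average(xs, window=25):
    # Slowly varying component, extracted with a centered moving average.
    half = window // 2
    return [sum(xs[max(0, i - half):i + half + 1])
            / len(xs[max(0, i - half):i + half + 1])
            for i in range(len(xs))]

slow = moving_average(signal)                 # stabilizing, structural order
fast = [x - s for x, s in zip(signal, slow)]  # injected novelty
print(f"slow component range:    {min(slow):+.2f} .. {max(slow):+.2f}")
print(f"novelty component range: {min(fast):+.2f} .. {max(fast):+.2f}")

On this picture a crystal corresponds to traffic that is nearly all
slow component, while an organism retains a substantial fast residual.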
The temporality of inter-scalar information transit is of a controlling
nature. Communication is never instantaneous, and limitations on infor-
mation transfer speed are many and varied. It is convenient to describe
these in terms of a generalized relativity,23 in which inter-scalar conse-
quences depend on transit speed. In thixotropic materials, for example,
the macro appearance of liquid or solid nature depends on the speed of
inter-scalar transport of movement; in the brain, slow inter-glial calcium
waves influence local neuro-modulating processes. We maintain that the
phenomenon of scale only has meaning in a natural ecosystemic bira-
tional hierarchy (Cottam et al. 1998b, 2000), where what happens at one
scale ultimately affects all other scales to a greater or lesser extent, and
where the way in which this takes place depends on information transfer
through the layers of the second inter-scalar hierarchy.
The levels of a model hierarchy themselves have the character of
Newtonian potential wells, where well-correlating arriving information
is locally integrated rather than being propagated onward to other levels.
The implication of this is that the inter-scalar layers become reposito-
ries of complexity, which is effectively squeezed out of the scalar levels
themselves. The constitution of the inter-scalar layers then corresponds
to the generally complex nature of ecosystemic influences, as opposed
to the nominally simple nature of model structures. We must not, how-
ever, forget that the two sub-hierarchies are ecosystemically and logically
complementary. If we take our habitually biased first-person point of view
that the model levels are simple, then the intermediate layers will indeed
appear complex. However, if we change our first-person point of view to
that of the intermediate layers, then they themselves will appear simple
and the Newtonian wells will appear to be complex (Cottam et al. 2003;

23 Note that we are here only indirectly referring to Einstein's General Relativity, as one
of a large family of different relativistic phenomena.
Cottam et al. 2004a). "Simple" and "complex" have a wide range of related
implications, and any attempt to describe them as being only disjoint
categories is doomed to failure. In this context a specific semiotic sign
cannot be in any way independent of others: its implications are subject
to the same birational local/global control as are the scalar levels.
Hoffmeyer and Emmeche (1991) have maintained that life necessitates
two types of coding: digital, related to the genetic code of DNA, and
analogue, related to semiotic structures of metabolism.

(These semiotic structures) are based on the biochemical topology of molecular
shapes, not on the Boolean algebra of genetic switches. Since, in contrast to
digital codes, analogue ones are considered by Hoffmeyer as computationally
too demanding to be adequately modeled by mathematics, code-duality is an
important case of ontological complementarity (Artmann 2007, p. 230).

This is again reminiscent of dual-aspect monism. Natural ecosystemic
birational hierarchy takes biosemiotic dual-coding to a new stage. The
parametric Newtonian scalar models are by their nature close to equilib-
rium, and their features may be reasonably accurately digitized, whereas
those of the far-from-equilibrium fractal complex inter-scalar layers may
not. Digital-analog dual-coding permeates the entire birational frame-
work and operates at all scales and in all contexts. We should again
make clear that the birationality of natural hierarchy we have devel-
oped here is not restricted to the biological domain; it is generic to
all domains. We should also express our view that it is a natural first
step in a journey away from man's typical monorational thought towards
a future multi-rationality, expressing the multi-complementarity of
nature.

3.9 Hyperscale: Embodiment and abstraction


It is inconceivable that a brain could operate independently of its asso-
ciated body. Not only are the brain's primary inputs and outputs related
to the body, but they are coupled to each other through the sensory-
cognitive-motor-effect loop of neural operation. Even the function of
some regions of the cortex is dependent on the presence or absence of
specific body-parts: if a limb is amputated then that region of the brain
may be taken over to perform some other function (Melzack and Wall
2008). Natural unified hierarchy provides a basis for describing the brain
in an embodied manner, in that lower levels of such a hierarchy can
represent atomic, molecular, or cellular levels of an organism. This is
supported by Jagers op Akkerhuis's (2010) development of the operator
hierarchy, which in an analogous manner establishes a universal hierar-
chy from plasma, through hadrons, atoms, and cells to "memons", or
culturally replicating neural networks.
For the moment we will leave aside the birational nature of natural hier-
archy to avoid generating excessive complication, and return to it later.
The partial nature of inter-scalar correspondences apparently makes it
impossible to find a platform from which we can accurately view all the
scales of a hierarchy from a single point of view. But we appear to some
extent to do just this all the time with respect to everything we meet! The
very nature of the descriptions we have already presented in this chapter
depends on being able to do exactly this: Fig. 3.2, for example, makes use
of this capability in its portrayal of a multi-scalar system. So how is this
possible? We should remember that a large part of the inter-scalar trans-
fer of information is chiefly time-independent in a quasi-stable structure.
This means that we could formulate a fairly accurate outside view of the
internal scales of a crystal without too much difficulty. But what about
forming an external view of the internal scales of an organism? Well, to
some extent this must be possible because, as we pointed out, we do this
all the time. Or what about generating a view of the internal scales of an
organism from within the brain of the organism itself? Again, this must
be to some extent possible, but certainly not to any extreme degree of
accuracy. But what about cutting open an organism to examine its scalar
parts? Now we arrive at the problem. We could indeed examine the phys-
ical parts in this way, but we would have removed their inter-relationships
by cutting the organism open! As Rosen (1991) has commented: "Biol-
ogy is the study of the dead."
The principal reality of any entity or system is its unification, which,
unsurprisingly, is always to some greater or lesser extent incomplete or
vague. But it does exist. We have noted how all of the internal scales
of a natural hierarchy correlate as well as possible, and this creates the
hierarchys unity. Here again, if we cut open an organism we destroy its
unification. But where does its unification exist? This is more difficult,
and it requires us to relax our materialist view of existence to take account
of the clear existence of unification of entities. An organism exists, but
it only does so as its complex construction of scalar levels. Or should
we say that an organism does not exist, that it is only an abstraction?
To do so in the face of the generality of natural hierarchy would indeed
be strange! So must we accept that all abstractions are real? Well it is
certainly the case that an idea which is part of a psychologically defined
personal reality will seem real to its possessor, but at what point can
we say it is more globally real? Fortunately, partial character comes to
our rescue by removing the absoluteness of existence. Some abstractions
more or less exist; some do not. But which are which? Simply, we can
take partial abstractions which are the result of embodiment as being
more real than those which are not. Existence is not absolute: existence
depends on the grounds from which it is derived, and the domain in
which it exists!
The preceding discussion is not just for its own sake: we must decide
on the reality of a system's unification, and on its character. We are
consequently led to accept the reality of unification and its character. It
is, however, impossible to formulate an accurate representation of the
internal scales of an organism from any one of its scalar levels or from
outside. But we can create from the outside an approximate model of the
organisms various scales which will be sufficient for most purposes. We
should remember that any internal details which are invisible from our
outside platform will only cause a problem if they contradict the overall
picture we create. We will refer to this external approximate model of the
internal scales of an organism as a hyperscalar representation. There does
not appear to be any reason why such an external model cannot be
created inside the brain of the organism.
If we ourselves form a representation of any entity, then it is by its very
nature a more- or less-complete third-person hyperscalar image. If we do
so for an organism, we risk great inaccuracy, as the cross-scalar informa-
tion transfers are restrictedly structural and more novel in character. If
an organism forms such an external viewpoint of itself and its environ-
ment in its own brain, then this corresponds to the first-person perspec-
tive of its mind. We form a hyperscalar perspective of ourselves and the
world about us, which we construct from the entire history of our individ-
ual and social existences, including the "facts" of our believed reality,
numerous apparently consistent but insufficiently investigated logical
suppositions, and as-yet untested or normally abandoned hypothetical
models which serve to fill in otherwise inconvenient or glaringly obvious
omissions in its landscape: for example, the convenient supposition of
a flat Earth.

3.10 A hierarchical biosemiosis


Biosemiotics makes use of the syntactic, pragmatic, and semantic aspects
of semiotics to implement meaning as a characteristic of biological sys-
tems and relate biological processes to the semiotic process of semiosis,
which describes the emergence of signs. This process of emergence princi-
pally, necessarily and always provisionally takes place through abduction.
Nature is complex as a consequence of relativity, and it is at its most
complex at biological inter-scalar membranic interfaces, where novelty
is of pronounced importance and abduction operates strongly. Taborsky
(1999) has described the evolution of referentiality in the abductive semi-
otic membrane of a bi-level information architecture, and has pointed out
that the membrane zone is not necessarily defined within spatiotemporal
perimeters.
This lends support to our earlier comments on the nature of existence,
and the reality of unification. It is clear, however, that a large part of
the interfacing between biological tissues and their constituent cells, for
example, will be associated with the enclosing cell membranes, with
cellular process closures, and with the exposure of these to extraneous
and more globally tissue-hosted influences. We believe that it is the evo-
lution of cross-scalar correlative development which has resulted in the
functional appearance of common semiotic resources at the different organi-
zational levels of organisms. A prime example of this functional diversity
is the appearance of abduction itself in many forms: iconic-to-symbolic
abduction is the foundation of a Peircean architecture of information;
conceptual abduction is the precursor to conceptual consciousness; ener-
getic abduction provides the grounding for quantum mechanical state
changes.
The biosemiotic code-duality described by Hoffmeyer and Emmeche
(1991) and Hoffmeyer (2008) is yet another example of the diversity of
abduction. In the natural hierarchy of Fig. 3.3 the alternation of New-
tonian wells and complex regions can be related to digital and analog
characters, respectively. Or, at least, it can from a point of view corre-
sponding to a Newtonian world: examination from the viewpoint of the
complex regions would indicate precisely the opposite! The entire hierar-
chy is a balance between digital and analog; between local and nonlocal;
between reduction and expansion. Biology evolves through a temporal
alternation of digital (DNA) and analog (the organism) characters, and
a hierarchys stabilization mirrors this.
The birational multi-layered structure we have described is the embod-
iment of a self-consistent hierarchical biosemiosis, which operates in the
manner of Matsuno's (1998, 2000) "living memory". Local stabilization
of the Newtonian potential wells which we have described is by ref-
erence to their intimately associated complex ecosystemic complements
through abduction. Communicative transfer between any pair of adjacent
wells always takes place through one of their associated complex regions
involving abduction, and universal semiotic stabilization progresses from
the (multiple) local signs to the global and back again in an endless
cycle. This makes it impossible to attribute a single sign to either a local
internal state(ment) or a global external one, as each is influenced by
the other in a continuous manner. It is even dubious whether we can
successfully invoke a semiotic approach out of the confines of the hierar-
chy's hyperscalar representation.
The establishment of a framework which integrates concrete aspects
of structure and process along with less-than-concrete aspects of non-
Newtonian complexity raises the question of the nature of the material
from which such a framework could be constructed. We will address
this in the following section of the chapter.

3.11 Energy and awareness


Current scientific descriptions of our universe rest on the constitutional
concept of energy, and its constituent entropy.24 The extensive associa-
tion of energy with formal modeling means that we must treat it with
some skepticism if we are trying to deal with systems which cannot
be comfortably described in a formal manner. The concept of energy
itself fragments if we try to formally relate thermodynamic or statis-
tical entropy25 to information entropy,26 in much the same way that
attempts to linguistically describe complex phenomena fragment if we
try to define them too closely. What happens to entropy in a birational
hierarchy; how does the formal concept of energy react to the presence
of low-level awareness at the root of the Universes evolution? And how
does entropy relate to organisms and to natural ecosystemic birational
hierarchy? Organisms are (partially) open systems, and they subsist by
ingesting low-entropy food and excreting higher-entropy residuum. We
are usually told that the Universe is heading towards a high entropy state
described as "heat death" (e.g., Adams and Laughlin 1997), but living
systems appear to survive by colonizing entropy. Particularly relevant
to our present purpose: is a single entropy sufficient in the context of a
birational description?
Entropy is described as the inverse of order, but how should we define
order? Conventionally, given a mixture of black and white balls, then the
most ordered state is taken to be that with all the black balls together
and all the white balls together. But what about the state where we have
the balls arranged as . . . black, white, black, white, black . . . ? Surely this
is another kind of order? Yet it is the state which is nominally associated
with the highest entropy. While this conventional viewpoint makes sense

24 Entropy is a measure of disorder or randomness in a system.


25 In general non-living systems evolve in such a direction that the thermodynamic entropy
(Clausius 1965) increases; statistical entropy (Boltzmann 1896) measures the number
of ways in which a systems elements can be arranged.
26 Information entropy is usually taken in communication theory (Shannon 1948) to be a
measure of the uncertainty associated with a random variable.
within a monorational description, it is far from satisfactory when we
move to birationality. Here, the latter alternating arrangement of balls
is indeed another kind of order; it is the complement of the conven-
tional one, in the sense that our primary (model) hierarchy is reduc-
tive towards localization, and our second (complex) one is expansive
towards non-localization. Clearly, this means that we have to invoke a
second kind of entropy which is complementary to the conventional
one.
If we adopt the simplest representation of a complement, then this sec-
ond entropy is the opposite of the first one: not its negative, its opposite in
the sense that positrons are the opposite of electrons. Consequently, if
a system moves away from a conventionally ordered state towards (con-
ventional) disorder, then the first (conventional) kind of entropy will
increase but the second (complementary) kind of entropy will simul-
taneously decrease, and vice versa. Here, again we meet a parametric
description which is characterized by a one-dimensional scale existing
between two opposite precursors, mirroring the nature of dual-aspect
monism (Cottam et al. 2013). So where do we now stand with respect
to life in a birational description? Life attempts to reduce both entropies
by establishing a position between the two kinds of order. This is one
reason why it is difficult to understand life from a standard scientific
(monorational) viewpoint.
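
A toy calculation (entirely our construction) makes the two
complementary orders tangible: score a row of black (B) and white (W)
balls by segregation, the conventional order, and by alternation, its
complement; disorder with respect to one is order with respect to the
other:

def segregation(row):
    # Conventional order: fraction of adjacent pairs that are alike.
    pairs = list(zip(row, row[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

def alternation(row):
    # Complementary order: fraction of adjacent pairs that differ.
    return 1.0 - segregation(row)

rows = {
    "fully segregated":  "BBBBBWWWWW",
    "fully alternating": "BWBWBWBWBW",
    "intermediate":      "BWWBBWBWWB",
}
for name, row in rows.items():
    print(f"{name:17} segregation={segregation(row):.2f} "
          f"alternation={alternation(row):.2f}")

The alternating row, maximally disordered on the conventional score,
is perfectly ordered on the complementary one; only the intermediate
row keeps both kinds of entropy away from their extremes, which is the
position we have just ascribed to life.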
Given our belief that awareness is a part of the foundation of the
universe, how do we square science's presumption that energy is the
foundation of all of nature? Energy, as it is used, is something of a catch-
all term. Whenever something is found in nature which is difficult to
deal with, it is reduced to an energetic term to try and solve the problem.
Thus, energy is not only related to order, it is convertible to and from
matter, for example, according to Einstein.27 Following on from the dis-
covery of deterministic chaos in the 1960s (e.g., Schuster and Just 2005),
it is evident that only a very small part of our immediate environment can
be easily described by the kinds of logical formality which are currently
available to science and mathematics, and that the vast majority of our
interesting surroundings are subject to the large-scale unpredictability of
small causes within extreme nonlinearity (Ivancevic and Ivancevic 2008).
We consider that the properties of energy should be included in this cat-
egory. The logical incompleteness of large quantum mechanical systems
(Antoniou 1995) is abductive evidence that, although small-scale quan-
tum electrodynamics may be very accurate, there is a slight discrepancy
between real energy and our abstract model of it: awareness, whose

27 The well-known E = mc².


character ultimately accounts for the appearance in large information
processing networks of the phenomenon we call consciousness. At the
lowest levels of description (i.e., elementary particles, quarks, super-
strings, . . . ) this difference would not be noticeable; we presume, as did
Bohm (Weber 1987), that it is of negligible measurable magnitude at
these levels.
We conclude that energy constitutes the inanimate description of dual-
aspect monism corresponding to its attempted measurement in formally
described simple systems. This removes the apparent dichotomy we faced
in evaluating the material (energy or dual-aspect monism) from which
a natural ecosystemic birational hierarchy could be constructed.
Let us look at our natural hierarchy as a computational structure.
Moving from left to right in Fig. 3.2 we move from more complicated
models and towards simpler ones. These simpler representations are
computationally preferable, as they can be dealt with very rapidly. This
is why the partially autonomous higher levels of a multi-scalar organism
are survivally advantageous in the face of unforeseen threats. We propose
that at higher levels there is an increase in awareness associated with the
greater applicability of simpler models. This magnification could well
explain how more complex organisms can have a heightened sense of
awareness, but it does not yet explain consciousness.

3.12 Stasis neglect and habituation


So, we are nearing the target of our considerations. But first we must
sidestep a little to prepare the last elements we require: those of stasis
neglect and habituation. Our brains have evolved to react to things that
move, things that change. This is most obvious in the neural/retinal pro-
cessing of vision, where foveal attention preferentially targets movement
in a surrounding scene (Nakayama 1985; Simeon et al. 2008), to the
detriment of static or uninteresting features (Henderson 2003). If you
are sitting in a stopped automobile, for example, looking straight ahead
while waiting for a traffic signal at your side to change from red to green,
the red light will slowly fade from your attention unless you concentrate
on it. We will refer to this phenomenon as stasis neglect.
Imagine looking out of the window of an electric train travelling at high
speed. At first you will notice the pylons which support the electric cables
passing again, and again, and again . . . But finally your brain doesn't
bother reacting to the repetitive appearance and you become unaware
of their passing. This is an example of habituation (Barry 2009), where
the orienting reflex (Sokolov 1963) of attention is reduced by temporally
multiple experiences of the same visual feature.
So it is not only that your brain doesn't react to things that do not
change, it also finally removes from consciousness temporally regular
features of a scene. This combination of neglect of stasis and habituation
is of vital importance in reducing the amount of incoming information
that the brain must do something with. It is not that static features
or regularities do not arrive at the retina or brain; it is simply that their
processing is curtailed early on. This is somewhat analogous to the tech-
nique in electronics of setting up a circuit by first applying to it a static
bias voltage, thus fixing the region within which the time-varying signal
voltage can be successfully processed.28
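
The joint effect of stasis neglect and habituation can be caricatured
as a novelty filter. In the following sketch (ours; the decay constant
is an arbitrary assumption) unchanged input produces no response at
all, and each recurrence of an already-seen change is passed with
exponentially shrinking weight:

from collections import defaultdict

def novelty_filter(stimuli, decay=0.5):
    responses = []
    previous = None
    recurrences = defaultdict(int)  # how often each change has recurred
    for s in stimuli:
        if s == previous:
            # Stasis neglect: no change, no response.
            responses.append(0.0)
        else:
            # Habituation: repeats of the same change fade away.
            responses.append(decay ** recurrences[s])
            recurrences[s] += 1
        previous = s
    return responses

# The pylons flashing past the train window: the same event recurs.
stimuli = ["sky", "pylon", "sky", "pylon", "sky", "pylon", "sky", "sky"]
for s, r in zip(stimuli, novelty_filter(stimuli)):
    print(f"{s:5} response {r:.3f}")

The first pylon commands full salience, later ones fade towards
invisibility, and unchanging input evokes nothing: this is the
reduction of incoming information described above.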
In noticing things that change we are reacting to their signs; initially in
our electric train example we react to the firstness of something which
flashes by the window. This sign may be equated to "?", a questioning.
After we notice that the same effect occurs again it seems that there is an
outside effect, but it is not immediately clear what. The sign throws us
into secondness; throws us back on our history and experience to attempt
to abductively correlate the effect with some prior knowledge or phe-
nomenon. With further repetitions of the effect we will (probably) focus
on the trains electric power, the necessity to get that power from some-
where, the need for cables to supply the power, and the need for pylons
to support the cables in a regular array alongside the track. We sink
into the thirdness of habit, of no longer being surprised by the pylons
appearance. But now the sign drives us towards acceptance, towards no
longer being startled by the regular effects because each appearance is
predictable on the basis of previous appearances: the sign slowly decays
and finally disappears through habituation.
It is not only our brains which carry out this selective process. As we described earlier, most of the information which crosses between the scales of a natural hierarchy is statically structural and not novel. It is information which supports the hierarchy's status quo. Particularly as a result of stasis neglect, information processing across the scales of any natural hierarchy is greatly reduced. But it is in the brain that we can find the most important consequence of habituation.

3.13 A birational derivation of consciousness


We must now return to the birationality of natural hierarchy. To recapitulate . . . the multiple scales of a natural hierarchy automatically correlate
and produce system unification. We have pointed out that this unification really exists, in the form of hyperscale (Fig. 3.4a). Complex regions develop between adjacent scalar levels, and these regions are also correlated, to produce a second hierarchy, whose properties are complementary to those of the first one. Consequently, the complex region hierarchy is also associated with a hyperscalar unification, which is complementary to the first one (see Fig. 3.4b). In common with the individual scales, these two sub-hierarchies are partially isolated, partially in communication, and are thus endowed with partial autonomy. This biosemiotic duality of character (an extreme generalization of Hoffmeyer and Emmeche's (1991) dual-coding) is a major departure from conventional monorational views of an organism, and it applies equally well both to individual scales and their ecosystems and to their hyperscalar unifications. A consequence is that some kind of evolution29 will take place at all scalar levels of an organism, from its macro relationships with the natural world down to interactions of the components of its cells.

29 We consider that the formalization of Evolution into the three Darwinian operators of mutation, reproduction, and selection is a late-stage crystallization of an earlier, more fluid process, closer to the evolution of chemical reactions.

[Fig. 3.4 Unifications of the first and second hyperscales. (a) The unification of hyperscale. (b) The second hyperscale derived from the ecosystemic complex regions.]
Quantum systems exhibit multiple context-dependent discrete levels of existence, from low to higher energies. It is more than coincidence that this structure, with its quantally defined discrete models separated by unformalizable intermediate regions, mirrors that of a natural ecosystemic birational hierarchy. The general hierarchy we have described appears to be generic for all other hierarchies, including quantum mechanical energy structures, which are its low-dimensional derivatives. In its general form, a natural ecosystemic birational hierarchy provides a framework within which life and consciousness can flourish (Cottam et al. 1998b). In a manner analogous to that of a quantum system, the lowest, or ground, state of a natural hierarchy corresponds to the description of an entity as being "lifeless", and higher states correspond to higher degrees of "livingness". In this framework there is no fundamental difference between localizations of all kinds, from quantum quasi-particles to perceptions to living organisms, and all of these constitute biosemiotic signs whose beginnings obey a consistent set of rules of emergence (Cottam et al. 1998b). The processes of emergence of new entities, species, or hierarchical levels all coincide with a single description which is related to the maintenance of stability of localized entities in a global context (Cottam et al. 1998b). The emergent part of this description corresponds conceptually to the second half of a quantum jump (Cottam et al. 1998b), and to Peircian processes of abduction (Taborsky 1999) in a biosemiotic scheme.
Predetermined inter-scalar transfers along the formalized lines of 1 + 1 = 2 are absent from the strongly temporally dependent novelty which may be imposed on one scalar level from others by emergence. This lends weight to a description of intelligence as "that which promotes inter-scalar correlation" (Cottam et al. 2008b). If we adopt the assumption that the natural hierarchy of the brain constitutes the substrate for consciousness, then intelligence plays a crucial role in the emergence of consciousness from lower-level awareness. Here we depart from any supposition that consciousness's substrate lies at a limited specific location in the brain: if the brain's hierarchical character is unified, then the substrate will be associated with unification. This is consistent with much opinion in the literature (e.g., Edelman and Tononi 2000; Crick and Koch 2003; Bob 2011), even though these assessments are from a monorational materialist position.
It will by now be becoming clearer which direction we are taking.
Consciousness is unified. Natural hierarchy is unified. Hyperscale is the
reality of that unification, and it makes up the substrate of high-level
consciousness, as the expansion of lower-scale-dependent intelligence-
coupled awarenesses. But we have two hyperscales. So which of these is
the substrate? Well, neither individually, but to see why we must look
more closely at the processes that take place in hierarchical systems.
If we again interpret natural hierarchy as a computational structure, we must take into account that the computational power available at any particular scalar level will be limited (Cottam et al. 1998b). But as we have clarified earlier, the majority of the information which is transferred between scales is static, structural in character. The static information acts as a carrier for more rapidly changing novelty (Cottam et al. 2005), in the same way that communication systems superimpose a time-varying signal on a temporally stable transport medium. Stasis neglect can then reduce the computational power required, by eliminating static, structural information or, at least, by effectively pushing its processing into the background at lower scalar levels. This is a major advantage for a natural hierarchy: higher scalar levels can run faster than lower ones.

[Fig. 3.5 Mutual observation between the model hyperscale and its ecosystem hyperscale.]
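The carrier-plus-signal economy described above can be sketched as a delta-encoding step. The frame dictionaries and the idea of comparing successive "frames" are our illustrative assumptions; the point is only that static structure is dropped while novelty alone is passed on.

```python
# A hedged sketch of stasis neglect as delta-encoding: only elements
# that changed since the previous "frame" are passed upward; static,
# structural information (the carrier) is dropped from processing.

def novelty_only(prev_frame, frame):
    return {k: v for k, v in frame.items() if prev_frame.get(k) != v}

frame1 = {"sky": "blue", "track": "straight", "pylon": "absent"}
frame2 = {"sky": "blue", "track": "straight", "pylon": "present"}

print(novelty_only(frame1, frame2))   # -> {'pylon': 'present'}
```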
The partial nature of scalar enclosure means that unification across the assembly of scalar levels can only ever be an approximate one (unless all of the levels' models coincide, in which case the hierarchy will collapse into a single level; Cottam et al. 1998b). Consequently, any hyperscalar description must be perpetually updated from the individual scales to establish a current best fit to the scalar assembly, and this in terms of both temporally stable and temporally unstable information. Here again, stasis neglect will prioritize the latter in the short term.
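One way to picture this perpetual re-fitting, as a sketch under our own simplifying assumptions (a scale's state as a single number, the "best fit" as a plain mean, and a fixed time budget), is to fold the temporally unstable scales into the hyperscalar description first:

```python
# Sketch of perpetual hyperscale updating with stasis-neglect priority:
# scales whose state changed since the last pass are folded in first,
# and a limited time budget means stable scales may have to wait.
# The numeric states and the mean as "best fit" are our simplifications.

def refresh_hyperscale(states, previous, budget=2):
    changed = [s for s, v in states.items() if previous.get(s) != v]
    stable = [s for s in states if s not in changed]
    visited = (changed + stable)[:budget]       # novelty first, limited time
    fit = sum(states[s] for s in visited) / len(visited)
    return fit, visited

states = {"cell": 0.2, "tissue": 0.5, "organ": 0.9}
previous = {"cell": 0.2, "tissue": 0.4, "organ": 0.9}
print(refresh_hyperscale(states, previous))     # tissue is re-fitted first
```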
At this level of unification we are still left with a duality of partially coupled, partially autonomous complementary hyperscalar descriptions, mirroring the individual facets of dual-aspect monism but not wholly integrating them. However, if we adopt Matsuno's (2000) proposition, then their interrelationship will be one of mutual measurement, of mutual observation, as indicated in Fig. 3.5. We pointed out at the beginning of this chapter that the interpretant of a semiotic sign can be another sign. Matsuno's mutual observation between the two hyperscales corresponds to the recursion of a biosemiotic interpretation of global signs, reflecting both the nature of dual-aspect monism and Metzinger's (2004) conclusion that the conscious self is "an illusion which is no one's illusion". While we concur with Metzinger that the self is objectively illusory, self-consistency would suggest that it is its own illusion.
We can now see the importance of habituation, because ultimately this inter-observation will retain only novelty, and not its carrier, in the way that communication systems eliminate the transport medium to be left with only the signal. First-person awareness depends critically on introspection, that is, on the capacity for the self to observe itself. We propose that this remainder of effectively novel recursive self-observation constitutes the singular unification of high-level conceptual consciousness, through the medium of embodied self-awareness. This, rather surprisingly, places self-awareness rather than environmental-awareness in primary position, which appears contrary to the comment from David Bohm which we quoted earlier. But it is maybe not so surprising, as the grounding for consciousness in this description is the multi-scalar nature of the organism's embodiment, as consciousness's ecosystem.

3.14 Coda
We strongly believe that high levels of conceptual consciousness are impossible without embodiment, and that therefore any idea that consciousness could transcend the physicality of life is mistaken. The derivation of consciousness from lower, more localized forms of awareness poses a more pragmatic question: why, from a high level of consciousness, are we apparently unaware of these lower awarenesses? The adoption of a carrier-plus-signal description for the mutual observation of the two hyperscales suggests that these lower-level awarenesses may well be present, but that they may not occupy the center of attention, and may only be recognizable as lower-level neural noise. Striking support for neural birationality comes from the degree to which the two hemispheres of the brain apparently concentrate on different styles of processing (Tommasi 2009; Glickstein and Berlucchi 2010): "linear, sequential, logical, symbolic" for the left hemisphere and "holistic, random, intuitive, concrete, nonverbal" for the right (Rock 2004, p. 124), corresponding to the dual rationalities we have described, and to the primitives of logic and emotion, respectively (Cottam et al. 2008b).
The two hemispheres of the brain are normally connected together by the largest nerve tract in the brain: the corpus callosum. Only bilateral and extensive damage of the cerebral hemispheres provokes stupor or coma; unilateral lesions cannot by themselves cause coma (Piets 1998). Studies carried out in the 1940s following sectioning of the corpus callosum in human patients (Akelaitis 1941) as a treatment for intractable epilepsy (Akelaitis et al. 1942) intriguingly indicated that this massive neural intervention resulted in no definite behavioral deficits. Later experiments carried out by Sperry et al. (1969) provided even more startling results: the split-brain subjects of neural bifurcation provided direct verbal confirmation that the left and right hemispheres afford separate domains of consciousness,30 apparently supporting the dual-hyperscalar argument we have presented.

30 In our terminology, "consciousness" here would correspond to a high level of awareness.



REFERENCES
Adams F. C. and Laughlin G. (1997). A dying universe: The long-term fate and evolution of astrophysical objects. Rev Mod Phys 69(2):337–372.
Akelaitis A. J. (1941). Psychobiological studies following section of the corpus callosum: A preliminary report. Am J Psychiat 97:1147–1157.
Akelaitis A. J., Risteen W. A., Herren R. Y., and van Wagenen W. P. (1942). Studies on the corpus callosum. III. A contribution to the study of dyspraxia in epileptics following partial and complete section of the corpus callosum. Arch Neuro Psychiatr 47:971–1008.
Antoniou I. (1995). Extension of the conventional quantum theory and logic for large systems. Presented at the International Conference "Einstein meets Magritte: An Interdisciplinary Reflection on Science, Nature, Human Action and Society", Brussels, Belgium, May 29–June 3, 1995.
Aristotle (2012). Physics. Trans. Hardie R. P. and Gaye R. K. http://etext.library.adelaide.edu.au/a/aristotle/physics/ (accessed February 27, 2013).
Artmann S. (2007). Computing codes versus interpreting life. In Barbieri M. (ed.) Introduction to Biosemiotics: The New Biological Synthesis. Dordrecht: Springer.
Barendregt M. and van Rappard J. F. H. (2004). Reductionism revisited. Theory and Psychology 14(4):453–474.
Barry R. J. (2009). Habituation of the orienting reflex and the development of preliminary process theory. Neurobiol Learn Mem 92:235–242.
Bob P. (2011). Brain, Mind and Consciousness: Advances in Neuroscience Research. New York: Springer.
Bohm D. and Hiley B. J. (1993). The Undivided Universe: An Ontological Interpretation of Quantum Theory. London: Routledge.
Bohr N. H. D. (1998). Causality and complementarity: Supplementary papers. In Faye J. and Folse H. J. (eds.) The Philosophical Writings of Niels Bohr, Vol. 4. Woodbridge, CT: Oxbow Press.
Boltzmann L. (1896). Lectures on Gas Theory. Trans. Brush S. G. Berkeley, CA: University of California Press.
Brenner J. E. (2008). Logic in Reality. Berlin: Springer.
Brenner J. E. (2010). The philosophical logic of Stéphane Lupasco. Logic and Logical Philosophy 19:243–284.
Brier S. (2006). The paradigm of Peircean biosemiotics. Signs 2:20–81.
Busch P., Heinonen T., and Lahti P. (2007). Heisenberg's uncertainty principle. Phys Rep 452:155–176.
Chalmers D. J. (1995). Facing up to the problem of consciousness. J Consciousness Stud 2(3):200–219.
Chandler D. (2012). Semiotics for Beginners. www.aber.ac.uk/media/Documents/S4B/semiotic.html (accessed February 27, 2013).
Chapman M. (1999). Constructivism and the problem of reality. J Appl Dev Psychol 20(1):31–43.
Clausius R. (1865). The Mechanical Theory of Heat. Trans. Browne W. R. Charleston, SC: BiblioBazaar.
Collier J. D. (1999). Autonomy in anticipatory systems: Significance for functionality, intentionality and meaning. In Dubois D. M. (ed.) Computing Anticipatory Systems: CASYS'98, 2nd International Conference, AIP Conference Proceedings 465. Woodbury, NY: American Institute of Physics, pp. 75–81.
Cottam R., Ranson W., and Vounckx R. (1998a). Consciousness: The precursor to life? In Wilke C., Altmeyer S., and Martinetz T. (eds.) Third German Workshop on Artificial Life. Frankfurt, Germany: Harri Deutsch, pp. 239–248.
Cottam R., Ranson W., and Vounckx R. (1998b). Emergence: Half a quantum jump? Acta Polytech Sc Ma 91:12–19.
Cottam R., Ranson W., and Vounckx R. (1999). Life as its own tool for survival. In Allen J. K., Hall M. L. W., and Wilby J. (eds.) Proceedings of the 43rd Annual Conference of the International Society for the Systems Sciences. Asilomar, CA: ISSS, paper #99268, pp. 1–12.
Cottam R., Ranson W., and Vounckx R. (2000). A diffuse biosemiotic model for cell-to-tissue computational closure. Biosystems 55:159–171.
Cottam R., Ranson W., and Vounckx R. (2003). Autocreative hierarchy. II. Dynamics: self-organization, emergence and level-changing. In Hexmoor H. (ed.) International Conference on Integration of Knowledge Intensive Multi-Agent Systems. Piscataway, NJ: IEEE, pp. 766–773.
Cottam R., Ranson W., and Vounckx R. (2004a). Autocreative hierarchy. I. Structure: ecosystemic dependence and autonomy. Semiotics, Evolution, Energy, and Development 4:24–41.
Cottam R., Ranson W., and Vounckx R. (2004b). Diffuse rationality in complex systems. In Bar-Yam Y. and Minai A. (eds.) Unifying Themes in Complex Systems, Vol. 2. Boulder, CO: Westview Press, pp. 355–362.
Cottam R., Ranson W., and Vounckx R. (2005). Life and simple systems. Sys Res Behav Sci 22:413–430.
Cottam R., Ranson W., and Vounckx R. (2006). Living in hyperscale: Internalization as a search for unification. In Wilby J., Allen J. K., and Loureiro-Koechlin C. (eds.) Proceedings of the 50th Annual Conference of the International Society for the Systems Sciences. Asilomar, CA: ISSS, paper #2006362, pp. 1–22.
Cottam R., Ranson W., and Vounckx R. (2008a). Sapient structures for intelligent control. In Mayorga R. V. and Perlovsky L. I. (eds.) Toward Artificial Sapience: Principles and Methods for Wise Systems. New York: Springer, pp. 175–200.
Cottam R., Ranson W., and Vounckx R. (2008b). The mind as an evolving anticipative capability. J Mind Theory 0(1):39–97.
Cottam R., Ranson W., and Vounckx R. (2013). A framework for computing like Nature. In Dodig-Crnkovic G. and Giovagnoli R. (eds.) Computing Nature. SAPERE Series. Heidelberg: Springer, pp. 23–60.
Cottam R. and Saunders G. A. (1973). The elastic constants of GaAs from 2K to 320K. J Phys C Solid State 6:2015–2118.
Crick F. and Koch C. (2003). A framework for consciousness. Nat Neurosci 6(2):119–126.
Damasio A. (1999). The Feeling of What Happens: Body, Emotion and the Making of Consciousness. London: Vintage.
de Saussure F. (2002). Écrits de Linguistique Générale. Paris: Gallimard.
Edelman G. M. (2003). Naturalizing consciousness: A theoretical framework. P Natl Acad Sci USA 100(9):5520–5524.
Edelman G. M. and Tononi G. (2000). A Universe of Consciousness. New York: Basic Books.
Gilaie-Dotan S., Perry A., Bonneh Y., Malach R., and Bentin S. (2009). Seeing with profoundly deactivated mid-level visual areas: Non-hierarchical functioning in the human visual cortex. Cereb Cortex 19:1687–1703.
Glickstein M. and Berlucchi G. (2010). Corpus callosum: Mike Gazzaniga, the Cal Tech lab, and subsequent research on the corpus callosum. In Reuter-Lorenz P. A., Baynes K., Mangun G. R., and Phelps E. A. (eds.) The Cognitive Neuroscience of Mind: A Tribute to Michael S. Gazzaniga. Cambridge, MA: MIT Press, pp. 3–24.
Goldstein J. (1999). Emergence as a construct: History and issues. Emergence 1(1):49–72.
Haken H. (1984). The Science of Structure: Synergetics. New York: Prentice Hall.
Henderson J. M. (2003). Human gaze control during real-world scene perception. Trends Cogn Sci 7(11):498–504.
Hoffmeyer J. (2008). Biosemiotics: An Examination into the Signs of Life and the Life of Signs. Scranton, PA: University of Scranton Press.
Hoffmeyer J. and Emmeche C. (1991). Code-duality and the semiotics of nature. In Anderson M. and Merrell F. (eds.) On Semiotic Modeling. New York, NY: Mouton de Gruyter, pp. 187–196.
Ivancevic V. G. and Ivancevic T. T. (2008). Complex Nonlinearity: Chaos, Phase Transitions, Topology Change, and Path Integrals. Berlin: Springer.
Jagers op Akkerhuis G. (2010). The Operator Hierarchy. Doctoral thesis, Radboud Universiteit, Nijmegen.
Kahn C. H. (1960). Anaximander and the Origins of Greek Cosmology. New York: Columbia University Press.
Kahn C. H. (1979). The Art and Thought of Heraclitus: Fragments with Translation and Commentary. London: Cambridge University Press.
Kahn C. H. (1996). Plato and the Socratic Dialogue: The Philosophical Use of Literary Form. London: Cambridge University Press.
Kull K. (2001). Jakob von Uexküll: An introduction. Semiotica 134:1–59.
Laing R. D. (1960). The Divided Self: An Existential Study in Sanity and Madness. Harmondsworth: Penguin.
Långsjö J. W., Alkire M. T., Kaskinoro K., Hayama H., Maksimow A., Kaisti K. K., et al. (2012). Returning from oblivion: Imaging the neural core of consciousness. J Neurosci 32(14):4935–4943.
Macionis J. J. (2011). Sociology, 14th edn. Upper Saddle River, NJ: Prentice Hall.
Marlow E. and O'Connor Wilson P. (1997). The Breakdown of Hierarchy: Communicating in the Evolving Workplace. Boston, MA: Butterworth-Heinemann.
Matsuno K. (1998). Dynamics of time and information in dynamic time. Biosystems 46:57–71.
Matsuno K. (2000). The internalist stance: A linguistic practice enclosing dynamics. Ann NY Acad Sci 901:322–349.
Melzack R. and Wall P. D. (2008). The Challenge of Pain. London: Penguin Books.
Merker B. (2007). Consciousness without a cerebral cortex: A challenge for neuroscience and medicine. Behav Brain Sci 30:63–81.
Merrell F. (2003). Sensing Corporeally: Toward a Posthuman Understanding. Toronto: University of Toronto Press.
Metzinger T. (2004). The subjectivity of subjective experience: A representationalist analysis of the first-person perspective. Networks 3–4:33–64.
Morin A. (2006). Levels of consciousness and self-awareness: A comparison and integration of various neurocognitive views. Conscious Cogn 15(2):358–371.
Morris C. W. (1971). Writings on the General Theory of Signs. Part One: Foundations of the Theory of Signs. The Hague: Mouton.
Nagel T. (1986). The View from Nowhere. New York: Oxford University Press.
Nakayama K. (1985). Biological image motion processing: A review. Vision Res 25(5):625–660.
Oppenheim P. and Putnam H. (1958). Unity of science as a working hypothesis. In Feigl H., Scriven M., and Maxwell G. (eds.) Concepts, Theories and the Mind-Body Problem. Minneapolis: University of Minnesota Press, pp. 3–36.
Pauli W. and Jung C. G. (1955). The Interpretation of Nature and the Psyche. New York: Pantheon Books.
Pearson C. (1999). The semiotic paradigm. Presented at the 7th International Congress of the International Association of Semiotic Studies, Dresden, Germany, October 6–11, 1999.
Peirce C. S. (1931–1958). Collected Papers of Charles Sanders Peirce. Hartshorne C. and Weiss P. (eds.) Vols. 1–6; Burks A. (ed.) Vols. 7–8. Cambridge, MA: Belknap Press.
Piets C. (1998). Anatomical substrates of consciousness. Eur J Anaesthesiol 15:4–5.
Pirsig R. (1974). Zen and the Art of Motorcycle Maintenance. New York: William Morrow.
Polkinghorne J. (2005). Exploring Reality. New Haven, CT: Yale University Press.
Popper K. R. (1990). A World of Propensities. Bristol: Thoemmes.
Ravasz E., Somera A. L., Mongru D. A., Oltvai Z. N., and Barabási A.-L. (2002). Hierarchical organization of modularity in metabolic networks. Science 297:1551–1555.
Rock A. (2004). The Mind at Night: The New Science of How and Why We Dream. New York: Basic Books.
Rosen R. (1991). Life Itself. New York: Columbia University Press.
Ruiz-Mirazo K. and Moreno A. (2004). Basic autonomy as a fundamental step in the synthesis of life. Artificial Life 10(3):235–259.
Salthe S. N. (1985). Evolving Hierarchical Systems. New York: Columbia University Press.
Salthe S. N. (2012). Hierarchical structures. Axiomathes 22. doi: 10.1007/s10516-012-9185-0.
Schopenhauer A. (1818). The World as Will and Representation. Trans. Payne E. F. J. Indian Hills, CO: The Falcon's Wing, 1958.
Schroeder M. J. (2012). The role of information integration in demystification of holistic methodology. In Simeonov P. L., Smith L. S., and Ehresmann A. C. (eds.) Integral Biomathics: Tracing the Road to Reality. Heidelberg: Springer.
Schuster H. G. and Just W. (2005). Deterministic Chaos: An Introduction. Weinheim: Wiley-VCH.
Sebeok T. A. (1977). Preface. In Sebeok T. A. (ed.) A Perfusion of Signs, Proceedings of the First North American Semiotics Colloquium. Bloomington: Indiana University Press, pp. ix–xi.
Shannon C. E. (1948). A mathematical theory of communication. AT&T Tech J 27:379–423 and 27:623–656.
Sigmund K. (1993). Games of Life: Explorations in Ecology, Evolution, and Behavior. New York: Oxford University Press.
Simion F., Regolin L., and Bulf H. (2008). A predisposition for biological motion in the newborn baby. Proc Natl Acad Sci USA 105(2):809–813.
Sokolov Y. N. (1963). Higher nervous functions: The orienting reflex. Annu Rev Physiol 25:545–580.
Sperry R. W., Gazzaniga M. S., and Bogen J. E. (1969). Interhemispheric relationships: The neocortical commissures; syndromes of hemisphere disconnection. In Vinken P. J. and Bruyn G. W. (eds.) Handbook of Clinical Neurology, Vol. 4. Amsterdam: North-Holland, pp. 273–290.
Spinoza B. (1677). Ethics. Trans. Elwes R. H. M. MTSU Philosophy WebWorks Hypertext Edition. http://frank.mtsu.edu/rbombard/RB/Spinoza/ethica-front.html (accessed February 27, 2013).
Taborsky E. (1999). Architectures of information. In Allen J. K., Hall M. L. W., and Wilby J. (eds.) Proceedings of the 43rd Annual Conference of the International Society for the Systems Sciences. Asilomar, CA: ISSS, paper #99111, pp. 1–15.
Tommasi L. (2009). Mechanisms and functions of brain and behavioural asymmetries. Phil Trans R Soc B 364:855–859.
Tononi G. (2004). An information integration theory of consciousness. BMC Neurosci 5:42.
Van Gulick R. (2001). Reduction, emergence and other recent options on the mind/body problem: A philosophic overview. J Consciousness Stud 8(9–10):1–34.
Vespignani A. (2009). Predicting the behavior of techno-social systems. Science 325(5939):425–428.
Vimal R. L. P. (2008). Proto-experiences and subjective experiences: Classical and quantum concepts. J Integr Neurosc 7(1):49–73.
Vimal R. L. P. (2009). Meanings attached to the word "consciousness": An overview. J Consciousness Stud 16(5):9–27.
von Foerster H. (2003). Understanding Understanding: Essays on Cybernetics and Cognition. New York, NY: Springer.
von Uexküll T. (1992). Introduction: The sign theory of Jakob von Uexküll. Semiotica 89(4):279–315.
Weber R. (1987). Meaning as being in the implicate order philosophy of David Bohm: A conversation. In Hiley B. J. and Peat F. D. (eds.) Quantum Implications: Essays in Honor of David Bohm. London: Routledge and Kegan Paul, pp. 440–441 and 445.
West G. B., Brown J. H., and Enquist B. J. (2000). The origin of universal scaling laws in biology. In Brown J. H. and West G. B. (eds.) Scaling in Biology. Oxford University Press, p. 87.
Wiener N. (1954). The Human Use of Human Beings: Cybernetics and Society. Boston, MA: Houghton Mifflin.
Wilby J. (1994). A critique of hierarchy theory. Syst Practice 7(6):653–670.
4 A conceptual framework embedding
conscious experience in physical processes

Wolfgang Baer

4.1 Introduction
  4.1.1 Quantum interpretations and the inadequacy of classic physics
4.2 The naïve reality model in the first-person cognitive cycle
4.3 The generalized reality model and its visualization
  4.3.1 Is there a Ding-an-Sich reality?
4.4 Thought processes that create the feeling of Entities-in-Themselves
  4.4.1 The dual role of symbols-of-reality
  4.4.2 Under what circumstance is there reality in the mathematical formalism?
  4.4.3 Is reality a set of interacting cognitive loops?
4.5 Quantum theory approximation of cognitive loops
  4.5.1 Interpretation of the wave function
4.6 The neurophysiologist's point of view
4.7 Conclusion

4.1 Introduction
Traditional neuroscience assumes that consciousness is a phenomenon that can be explained within the context of classical physics, chemistry, and biology. This approach relates conscious processes to the activity of a brain, composed of particles and fields evolving within a space-time continuum. However, logical inconsistencies emphasized by the "explanatory gap" (Levine 1983) and the "hard problem" of consciousness (Chalmers 1997) suggest that no system as currently defined by classic physics could explain how physical activities in the brain can produce conscious sensations located at large distances from that brain. Though physiological investigations highlight important data processing paths and provide useful medical advances, the hard problem of consciousness is not to be answered by detailed knowledge of the properties of biochemical structures. Rather, the conceptual foundations of our conscious experience need to be re-examined in order to determine the adequacy of our physical theories to incorporate consciousness and to suggest new ideas that might be required.

This chapter will provide a brief review of the difficulties encountered when attempting to explain consciousness phenomena in classic physical terms. It will then show how the inclusion of the observer provides an opening to integrate the science of consciousness into a broader physical framework. Some alternative interpretations of quantum theory, which include the Copenhagen, Everett's Multi-World, Bohm's Pilot Wave, and Landé's Quantization Rules, will be considered for their ability to include consciousness sensations, or qualia, in their formulations.
A review of these theories will lead to the identification of a common architecture within which all theories, classic and quantum, are embedded. This common architecture consists of a cyclic process that transforms observable sensations into unobservable causes of those sensations and then uses those causes to predict the appearance of sensations again. When this architecture is implemented as a physical entity, the cyclic process will be recognized as the ubiquitous activity that characterizes both human thinking and the calculations guiding the apparent motions of inanimate systems. The "Loop" in Hofstadter's I Am a Strange Loop (Hofstadter 2007) is thereby offered as both the incorporation of a conscious self and a basis for the explanation of the behavior of all physical systems.
To grasp an architecture in which the cause of observables is forever
hidden requires understanding the visualization mechanism used to por-
tray such causes in terms of sensations that can be observed. The classic
brain is fundamentally viewed as a gravito-electric object. We propose
a new force, holding charge and mass together as the accommodating
mechanism balancing gravito-electric influences from the rest of the Uni-
verse (Baer 2010b). A changing field of mass-charge-separations can be
visualized as a third-person view of the physical process that causes con-
scious experiences. The flow of energy in the mass-charge-separation
field that passes directly through the experiencing entity is conscious
experience itself. This analysis shows that an observable experience is
not caused by the visualization of its causes, but rather both (first-person
experiences and third-person views of their causes) are part of cognitive
cycles which accommodate the influences from the rest of the Universe
(Maturana 1970).
The last section of this chapter applies these ideas to the first-person perspective of a neurophysiologist investigating a second-person's brain. We will clearly identify what the neurophysiologist actually observes, in contrast to the beliefs he inadvertently projects upon the second-person's brain. Once these projections are recognized as the neurophysiologist's own theory of particles and fields, rather than what is actually in the second-person's brain, they can be replaced by a visualization of the model of a cyclic activity that incorporates conscious experiences.

4.1.1 Quantum interpretations and the inadequacy of classic physics


The conventional discussion of conscious phenomena is based upon a set of conceptual principles which assume that an objective brain is the physical seat of consciousness and that diligent analysis of this objective system will eventually lead to its full understanding. That the barrier to our understanding is due not to a lack of diligence, but rather to a lack of understanding of the fundamental reality in which we find ourselves, has been suggested by many authors. Henry Stapp (1993) pointed out that the entire subjective experience of man has been eliminated from objective physical theory, and hence no physical explanation for the brain's ability to generate conscious experiences is available within neuroscience as long as it is based on classic physical principles. Stapp, along with many others (Schwartz 2004; Rosal 2004), has gone on to suggest that conceptual principles underlying quantum theory should be adopted, because these principles would provide a new framework within which conscious phenomena could be integrated into physical theory.
The introduction of quantum theory has led to two directions of investigation. The first direction is based on the obvious fact that if biological components are built of atoms that act like quantum objects then neural systems must exhibit quantum effects. The second direction assumes that the brain operates like a quantum computer and seeks to identify how its components present problems, carry out quantum calculations, and extract answers from the underlying quantum process. In this chapter we will add a third direction by suggesting that quantum theory describes something we do as cognitive beings, not something we see inside our phenomenal bodies or their environment.
Objections to the possibility of biological quantum computers have been raised on the basis that the brain is too warm and soggy to maintain quantum coherence long enough to perform useful calculations (Tegmark 2000). However, these objections are largely based upon our understanding of quantum effects in low-temperature solid-state environments. Further investigation of biological systems has produced evidence that quantum interactions are involved in biological processes such as photosynthesis (Engel 2007) and that sufficient isolation can be achieved at the atomic level to avoid the coupling to the warm and soggy brain environment, thereby allowing quantum calculations. Analyses of quantum effects in ion channels (Summhammer 2007) or microtubules (Hameroff 1998; Bieberich 2000; Hagan 2002) are examples of this research direction.
The very likely possibility that the brain employs quantum effects, and operates as a quantum computer, does little to explain the hard problem of consciousness, because quantum theory is itself an ontological mystery that has defied our understanding since its inception. Though successful as a calculation tool, quantum theory itself does not provide a coherent interpretation of what its symbols, such as the wave function Ψ, actually mean, let alone how consciousness could arise from them. A comprehensive review of interpretations (Blood 2009) shows that no interpretation is fully adequate. The three most popular ones are:
The Copenhagen probability interpretation from Max Born (Faye 2008): The wave function completely describes the physical system. It interprets the wave function squared as a probability for getting a possible measurement result. The probability is spread out in space but collapses instantaneously at the point of measurement to the one result observed.

The Pilot Wave interpretation from David Bohm (Goldstein 2012): The wave function is like a message to the pilot of a ship, that is, of a particle; the message accompanies the actual particles and ferrets out the best path for the actual particle to take. Since real particles are only guided by the pilot waves, no collapse to the actual observed reality is necessary.

The Many Worlds interpretation from Hugh Everett (Everett 1983; Mensky 2006): The wave function squared is a probability, but rather than collapsing into a single observed result, all possibilities are real, since all realities exist in parallel worlds.
I would also like to mention one additional interpretation by Alfred Landé (1973), which proposes a tangible physical world of particles that can only change their momentum, angular momentum, and energy by discrete multiples of Planck's constant, and derives the probabilities of such changes from the shape of the gravito-electric object in question. This eliminates irreducible probabilities and maintains a belief in a tangible independent reality.
Despite interpretation difficulties, quantum theory does, however, provide a significant step toward an explanation of conscious phenomena by admitting a necessary role for the observer and his extensions through measuring instruments. The question "How does a classic object generate consciousness?" is unanswerable in classical physics, because classical physics is based upon a naïve realist philosophy that simply assumes entities are what they appear to be. Quantum theory, on the other hand, includes: (1) the existence of a physical reality described by the wave function; (2) the separate existence of measurement results, called observables; and (3) a measurement process connecting the two (Wigner 1983). Clearly, quantum theory is dual in the sense that there is an object-subject division between physical reality and observables; this is known as the von Neumann cut (von Neumann 1955).
Quantum theory requires a measuring instrument to define a Hilbert space within which quantum fluctuations occur. According to the Copenhagen interpretation, the measurement collapses the multiple possibilities present in the quantum fluctuations into the single output produced by the measuring instrument during the measurement process. If the measuring instrument is included in physical quantum reality, then its output is also described by a possibility wave and requires a second measurement to produce an observable output. If the second instrument is included in physical quantum reality, a third measuring instrument is required, and the chain of measurement instruments continues until your brain, acting as the final measuring instrument, collapses the wave function and produces your observables. Thus your, the reader's, brain has been given a role as a measuring instrument of last resort in quantum theory. The process of measurement by the brain assigns a role to conscious processes, and the resulting observables have been given the status of sensations, qualia, or mental images experienced by the owner of the brain. If we could understand exactly how and why an objective measuring instrument generates observables from a quantum reality, we could directly map this understanding to the hard problem of consciousness.
Unfortunately, quantum theory fails to provide the necessary details to explain the measurement process. The question "How does a quantum object generate consciousness?" is answered by postulating a classic brain that looks at the quantum object and produces an observable result for its owner (Pereira 2003). The measurement process has been given the name quantum "Process I" by von Neumann and is described mathematically by the well-known measurement equation (see Section 4.4.2), in which a symbol of a mathematical operator acts on a symbol of a quantum object to produce a symbol of the average measurement result that can be directly compared with what is observed. The measurement equation is bolted onto quantum theory to specify how symbols in a theory must be manipulated to produce symbols of mental sensations, and in that sense describes a miracle. There is no physical description of how this miracle of collapse actually happens. The predictive physics of quantum theory is packaged into the time evolution described by Schrödinger's equation and identified as "Process II" by von Neumann. It is not built into the measurement process. The measurement ultimately happens in a classic brain that has been attached as an ad hoc and external non-quantum element in order to make the theory useful. This is not to say that the analysis of the measurement process or the construction of measuring instruments does not involve physics and does not represent a valuable body of knowledge. Similarly, the analysis of the brain or the reverse engineering performed by neurophysiologists and psychologists also represents a valuable body of knowledge. However, these investigations address the "easy problem" of consciousness, because they look at the brain from an external third-person perspective, which does not answer the question of how biochemical activity becomes conscious experiences, or how possibility waves become observables in our externalized measuring instruments.
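For readers who want the measurement equation itself in concrete form, a minimal worked instance is sketched below. It computes the average measurement result Ψ†AΨ for a normalized two-state system; the particular state and operator are arbitrary toy choices of ours, not anything specific to this chapter's argument.

```python
# A worked instance of the von Neumann "Process I" average: for a
# normalized state psi and an observable A, the predicted average
# measurement result is psi† A psi. State and operator are toy choices.

import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition, normalized
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])               # observable with eigenvalues +1, -1

average = psi.conj() @ A @ psi            # the "symbol of the average result"
born_probabilities = np.abs(psi) ** 2     # outcome probabilities, |psi|^2

print(average, born_probabilities)        # -> 0.0 [0.5 0.5]
```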
Since neither classic nor quantum physics adequately addresses the consciousness phenomena, we will introduce the third approach mentioned above. This approach follows Bohm by assuming there is some ontological explanation underlying the Copenhagen probabilistic interpretation of quantum theory, and advances some of his major ideas. These include:
1. Thought is a cyclic process connecting memory with its environment (Bohm 1983, p. 59),
2. The perception of space is derived from a physical plenum that is the ground for the existence of everything (Bohm 1983, p. 192),
3. The plenum can be visualized as a field of space cells each modeled as a clock (Bohm 1983, p. 96),
4. An additional quantum force directs the motion of physical particles (Bohm 1983, p. 65).
It is generally accepted that Bohm's hidden-variables model underlying quantum theory was disproved by experiments testing Bell's Theorem (Aspect 1982). In our opinion these experiments are flawed. They may have shown that physical reality is not composed of independent objects, but they do not exclude independent cognitive processes as building blocks of a cognitive universe. Our goal is to provide a simple mathematical description of a conscious universe that can be graphically presented as a set of interacting processing loops of which we are one. This approach begins with a review of what we do when achieving conscious awareness of our bodies and their environment. The key concept is the recognition that our perceived environment, including space and its objective content, is the display node happening inside our own processing loop. Such a loop is conceived as a closed cycle in time, with time defined as the name of the state of a system. The concepts of processes and events will replace the concepts of fundamental particles as the building blocks of the Universe. Interacting processing loops will contain classic entities (mass, charge, and their fields in space-time) and provide a new basis of physics and, by extension, a new basis for the host of neuroscience-related disciplines.
The implication of this new direction is that quantum theory describes
a linear approximation to the activities performed by a cognitive being.
The solidity of space-time as the a priori background to quantum oper-
ations will be identified with the permanence of memory cells within
which processing activities occur. The possibility waves of quantum the-
ory will be identified with the data content of such permanent memory
cells, which are themselves stable repeating processing cycles. Newtonian
physics of the classical world is conceived as the physics of observables, while
quantum theory describes the physics of our knowledge processing capacity
within which observables appear.
The idea that the Universe is conscious and can be divided into individual cognitive parts has been proposed by many authors, representing a Panpsychist philosophical viewpoint (Nagel 1979, p. 181). The fact that it can be modeled as a processing machine has also been advanced as a logical extension of that idea (Fredkin 1982; Kafatos and Nadeau 1990; Schmidhuber 1990; Svozil 2003). Both religious and philosophical traditions have long identified the physical world as a manifestation of the consciousness of a god or gods. The idea was also adopted in Niels Bohr's contention that measurement creates electrons (McEvoy 2001) and in Archibald Wheeler's conclusion that the universe measures itself (Wheeler 1983). The final jump, connecting our consciousness to its manifestation as our physical world, is addressed in this chapter. A discovery of the mechanism by which the universe is conscious answers the question of how a piece of the Universe, that is, our brain, is conscious. Such an advance in our understanding of the physical world is therefore equivalent to an advance in our understanding of consciousness.

4.2 The naïve reality model in the first-person cognitive cycle


Our fundamental assumption is that the cognitive process can be modeled by transforming a description of observable sensations into a description of physical explanations of those sensations, and then measuring these physical descriptions to regenerate the description of observables (Atmanspacher 2006; Baer 2011). A stable cycle captures a single conscious experience. This models an endless measurement-explanation process, which in our view describes the reality of a conscious entity. By using graphics as our descriptive language, an example of such a model of the consciousness process is shown in Fig. 4.1. The cycle shown depicts the observable sensations of a first-person in the upper half of the figure
and a naïve physical reality model that explains those sensations in the lower section. A much more realistic version of the first-person visual view was originally drawn by Ernst Mach (1867), who promoted the idea that all theory of reality should be based upon sensory experience. For this reason we have called the man sitting in his armchair "Professor Mach".

[Fig. 4.1 A first-person cognitive cycle with a naïve model of physical reality.]
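The structure of Fig. 4.1 can be rendered in a few lines of code. The sketch below is ours, not Baer's formalism: it takes the naïve-reality case literally, so the internal eXplanation X and Measurement M are identity operations and the cycle reproduces the same sensation on every pass; a stable cycle of this kind stands for a single conscious experience.

```python
# Minimal sketch of the first-person cognitive cycle of Fig. 4.1 under
# the naive reality model: eXplanation (X) and Measurement (M) are
# identities, so the cycle is stable and regenerates the same sensation.

def X(sensation):       # explain: sensation -> physical-reality description
    return sensation    # naive realism: things are what they appear to be

def M(description):     # measure: description -> regenerated sensation
    return description

sensation = "apple at arm's length"
for _ in range(3):                    # a stable cycle = one experience
    sensation = M(X(sensation))
print(sensation)                      # unchanged: the cycle is self-consistent
```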
Mach's ideas greatly influenced the Vienna positivists (Carnap 2000), who believed there should be a clear distinction between an observational language, describing what can be seen, and a theoretical language, describing the causal reality generating those appearances. To the extent that pictures are a form of language, this distinction is reflected in the difference between the upper and lower portions of Fig. 4.1. In the upper portion of Fig. 4.1 the desert, sky, armchair, body, and apple describe optical sensations experienced by the first-person observer, with what would be called observational language by positivists. In the lower portion the desert, sky, armchair, body of Mach, and the apple he holds describe what are believed to be real physical objects, and would be drawn in theoretical language by positivists.

The designation of "theoretical" is appropriate for positivists because the cause of sensation is described by a theory. The lower section of Fig. 4.1 is a theoretical description of a first-person's belief in naïve reality, that is, that entities are where and what they appear to be. The internal eXplanatory and Measurement functions are identity operations that simply equate observational and theoretical descriptions. In mathematical language this naïve reality description is called the classic physics theory, or a classic model of physical reality. Classic physicists assume physical reality is composed of moving objects in a priori space-time, because that is the way it looks, and proceed to systematically describe the interactions between these objects from there. The fact that the configuration of some of those objects must be calculated from observables outside the classic physics description cannot be explained within it. Rather than acknowledge the existence of sensations and their interaction with the physical reality model, most physical scientists simply ignore the subjective. Neurophysiologists cannot be quite so dismissive. They traditionally assume that Neural Correlates of Consciousness (NCC) in the observed human brain are calculated from observables outside of physical reality (Metzinger 2000). This assumption is wrong because the NCC are not in the observed brain nor in its naïve reality model equivalent.
When the first-person, the reader, looks at the second-person's brain of Mach and imagines a connection between the NCC in Mach's brain and what he believes Mach actually sees, an explanatory gap arises (Levine 1983). The NCCs are located at a different position from the sensations they are thought to produce. A similar distance exists between the assumed NCCs in the reader's brain and the sensations of these letters. If we, first-persons, believe that Mach's NCCs are inside his perceived brain and are mapped to his experiences, then we must also believe that our experiences are mapped to our NCC in our perceived brain. But how can physical activity behind our perceived eyes produce sensations in front of our noses? Science, based either on classic or quantum physics, offers no mechanism for bridging this distance.
The issue is only resolved when we remember that Fig. 4.1 describes our visualization of a second-person's conscious process within the model of our own conscious process. We are not seeing Mach's real brain but have only projected a model onto his perceived image. That projection, Mach's perceived body, and his entire environment are generated for us by what is described by the lower part of Fig. 4.1. If we were naïve realists, then this entire lower part describes our first-person NCC. The neurologic brain structure inside this description is only a small part of what explains observable sensations. The whole lower section assumes the explanatory role in our cognitive process. Both Mach's NCC and our own first-person NCC incorporate our belief in an external world, and this incorporated belief generates our sensation of a whole world, including the small parts that represent our observation of both our brains.
Once we understand the architecture of a cognitive cycle and the role a reality model plays in it, we can turn our attention to the question of the accuracy and functionality of the model we have chosen to believe in. The question before us is whether our incorporation of a naïve reality belief is adequate to explain consciousness and, if not, what incorporation of a new belief might be proposed to replace it.
Though the naïve reality model is efficient and useful for individuals who are concerned with navigating their bodies, brains, and psyches through everyday challenges, it does little to help those who wish to understand how these entities actually work and in what context these everyday challenges are embedded. Mach's brain, seen by a first-person neurophysiologist, is only the observational display resulting from a measurement made by the neurophysiologist on his own underlying reality belief. The brain he can observe does not generate sensations any more than one image on a television screen generates its neighboring image. Both are the result of a generation process that occurs outside the observables displayed. It is the process and the mechanism behind the appearance of the observable brain that explains the appearance, not the appearance itself. Mach's brain, defined as a classic biological object by naïve realists and described in the lower portion of Fig. 4.1, is likewise only the first-person's internal mechanism that generates the observable brain we see, not the actual reality. This generation mechanism is more analogous to the refresh memory of a television system. The ultimate cause of displayed images lies outside the television system boundary, but what is seen is derived from the content of the refresh memory inside that system. This generation mechanism would therefore incorporate our idea of what Mach's brain is like, not be the actual mechanism that generates Mach's thoughts, which is outside the boundaries of our first-person loop.
To understand this cognitive generation process it will be necessary to replace the naïve-reality model of physical reality with a generalized symbolic version and separate the sensory modalities, normally fused in everyday operations, into distinct display channels. This will allow us to examine how observational display channels influence each other through our physical reality model, and how the visualization of such a model, rather than the model itself, is often falsely taken to be the explanation for sensations.

4.3 The generalized reality model and its visualization


When a first-person understands a language, he hears auditory stimulation in terms of sensations that convey the meaning those sounds have. A similar effect is applicable in the optic domain. The optically trained first-person reads optical stimulation in terms of the meaning those images have. The primate meaning of optical sensations is knowledge of expected tactile experiences: "As early as the 1940s, thinkers began to toy with the idea that perception works not by building up bits of captured data, but instead by matching expectations to incoming data" (Eagleman 2011, p. 48). Expected tactile knowledge is used both to identify solid objects of interest and to avoid potentially harmful collisions. A display of potential touch sensations can be experienced by closing one's eyes and noting the sensations associated with one's knowledge of one's environment. With eyes closed one can feel objects existing in the black phenomenal space surrounding the feeling of one's skull. The location of these objects appears in this space by what is sometimes described as light-white ghostly outlines. These outlines are the display of one's knowledge in one's private phenomenal space and represent surfaces that would produce touch sensations if one were to reach out with one's hand or some other portion of one's body. For this reason we will call this space the expected-touch-space and will use the icon of a thought bubble to represent it in our diagrams. The thought bubble is often used to signify the mind as a whole, but for purposes in this chapter it will be used to signify the display space of an expected-touch-sensation.
Although the accuracy of expected-touch-sensations may be verified by matching expectations with external tactile experiences, there are no external sensors associated with such expected experiences. Similarly, the recall of memories also produces sensations that are not associated with external sensors. Nevertheless, sensations, whether derived from external measurements or internal calculations, can be treated in a very similar manner. Signals produced by the retina from external stimulation are similar to signals produced by a simulated retina measuring internal memory structures. The difference is that the external side of the retina is connected to entities not under our control, while the simulated sensor's external side is connected to an entity that has been carefully selected to act in our interests. Both what is on the external side of the retina and the external side of a memory sensor must be treated as entities-unto-themselves and thus only seen in terms of measurement process reports displayed in the observable node of a cognitive cycle.
[Fig. 4.2 Cognitive loops with a reality belief.]

Figure 4.1 showed a first-person optic display embedded and registered with the expected-touch-sensation display generated by a measurement of the memories defining the first-person's model of the physical world. This superposition of sensory modalities is characteristic of our normal everyday experience. We co-locate outlines of optical blobs to indicate the boundaries of objects that inform us of where a touch sensation is expected. To emphasize the fact that different processing channels connect different sensors to their individual display modalities, we separate the optic field, the expected-touch field, and our belief display in the upper observable region of Fig. 4.2. The result shows two cycles connecting two sensation modalities and a third cycle containing the visualization of an explanatory belief. These explanations are incorporated in the form of a generalized physical reality model designated by the time transport expression Ψ′ = T(Ψ). Unlike the classic naïve reality model described previously, the details inside Ψ′ = T(Ψ) are purposely not specified, since it is the architectural relationships, not the culturally dependent physical model details, we wish to emphasize at this time.

The processing involving the two observable modalities and the generalized model can be described as follows. Starting with an optical sensation at the top, the first-person calculates the change in the space channel (δ3) of our simulated retina. The simulated retina is shown as a rectangle with one side in his mental processing mechanism and the other side in his model of physical reality. This information is combined with the content of the other sensor arrays to produce an updated array output in the reality model function. This is formally indicated by the equation

\( (\delta_1', \delta_2', \delta_3', \ldots) = T(\delta_1, \delta_2, \delta_3, \ldots, t, t') \)    (4.1)
Further details of this function will be presented as the discussion proceeds. For now we note that the expected touch array content δ2 has been updated to δ2′, which is measured to produce an updated expected touch display. This display serves to provide the first-person with the feeling of knowing where entities are. The expected-touch-sensations are then explained by the configuration of simulated touch sensors in the model. These confirm the solid outlines of model objects that could be measured by simulated optic sensors and processed back into the optic display with which we started. During normal operation the expected-touch-sensation is quickly superimposed with the optical sensation to produce a feeling of solidity and comfort. When they do not match, a feeling of dizziness is felt. This feeling of discomfort is evidence of the continued calculations carried out by the first-person loop in its attempt to establish stability and equilibrium.
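To make the flow just described concrete, here is a minimal runnable sketch of one pass around the cycle. It is an illustration only: the function names and numeric forms (explain, time_transport, measure, and the drift constant) are invented stand-ins for the X(), T(), and M() operations of Fig. 4.2, not an implementation from this chapter.

```python
import numpy as np

def explain(sensation):
    """X(): toy attribution of a sensation to a stimulation change."""
    return sensation - sensation.mean()

def time_transport(deltas, t, t_prime):
    """T(): toy update of every channel's stimulation from state t to t_prime."""
    drift = 0.1 * (t_prime - t)
    return {channel: d + drift for channel, d in deltas.items()}

def measure(delta):
    """M(): toy display of an updated stimulation as a sensation."""
    return delta / (np.abs(delta).max() + 1e-9)

optic_sensation = np.array([0.2, 0.9, 0.4])    # starting optical sensation
deltas = {"optic": explain(optic_sensation),   # delta-1: optic channel
          "touch": np.zeros(3),                # delta-2: expected-touch channel
          "belief": np.zeros(3)}               # delta-3: reality-belief channel
deltas = time_transport(deltas, t=0.0, t_prime=1.0)  # cf. Equation (4.1)
expected_touch_display = measure(deltas["touch"])    # updated touch display
print(expected_touch_display)
```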
Optical and expected-touch modalities were used in the last paragraph in order to familiarize the reader with examples of how sensations are processed around the cognitive cycle architecture. However, looking at an object and generating a co-located touch sensation does not define one's stake-your-life-on-it reality. We may see an apple and document our belief that it is real by generating an expected-touch-sensation, but, before finalizing this conclusion, incorporating additional information is appropriate. Was the optical sensation a reflection in a mirror? The sensation of the moon might require observation of its orbit. For a bird, the motion of passing wind or the depression of branches it lands on may be adequate to verify its objective actuality. The exact information is not important to our discussion. What is important is that one registers the result of reality testing by generating a sensation that visualizes one's true beliefs. An example of a widely held belief has been added as a third sensation category incorporated in the inner cognitive cycle shown in Fig. 4.2. The quad circle and square are designed to look like a center-of-mass/charge icon. It is intended to document the belief that
a pattern of gravito-electric objects exists in the first-person's model of physical reality. The reader may choose his or her own sensation icons.
We chose this example because most individuals would agree that phys-
ically real objects are characterized by some mass and/or charge, so this
selection should not be controversial.
The purpose of adding the mass-charge icon into the first-person cog-
nitive loops is to acknowledge some absolute stake-your-life-on-it reality
belief as part of our internal processing activity. The optic sensation can
disappear simply by closing one's eyes. The expected touch display and its subset of actual touch can disappear when we fall asleep. But the existence of the mass-charge distribution of our body and its surroundings is taken to be a fact whether we are asleep or awake, dead or alive. No matter how often we realize a sensation is only a sensation, we still believe that some really real entity behind the sensation is actually there, and we produce an explanatory sensation as evidence of the absolute reality on which we have settled. What we are emphasizing in Fig. 4.2 is that, no matter what truth we settle on, it will be incorporated into a cognitive process which contains the feeling of that truth on one node and some physical entity that generated that feeling on the other. It is the process that is fundamental, not the details it may contain. Once the fundamental nature of ourselves as a cognitive cycle is grasped, the detailed content can be added to flesh out the nature of who we are and how we can change.
This fleshing out has already begun in Fig. 4.2 because we have added
several components that are likely to be included if such a cognitive
cycle is to perform useful functions in the human thought process.
Clearly any detail, such as the assumption that reality is composed of
mass-charge patterns, will invite controversy and close doors to alterna-
tives. Such constrictions are necessary if the concept that we are interact-
ing cognitive loops is to support practical engineering advances. Besides
the addition of a fundamental reality belief loop discussed previously,
Fig. 4.2 also has two internal process boxes. The function of these boxes
is to convert observables, sensations, feelings, or qualia into physical
model entities and back again. On the right downward branch observ-
ables are eXplained by assuming they must have been caused by patterns
placed into the simulated detector arrays. These arrays are shown as
rectangles on the right and left edge of the physical reality model. The
detector arrays are split by dashed lines that indicate their interface role
between the inner and outer world. The stimulation content of these
arrays is labeled by deltas (δ1, δ2, and δ3) to indicate that a change in the internal and external sensor arrays actually represents the stimulation. Numerical values are used as space names. A change in a typical space volume named x is designated by δx. Each such name labels a bundle of parallel processing channels that are alternatively called opti-
cal, tactile, and so on, arrays. These arrays are shown from the side in
Fig. 4.2 and correspond to the NCC shown from the front in the lower
half of Fig. 4.1.
These NCC are processed by the time transport function T(δ1, δ2, δ3, . . . , t, t′). As written, the time propagator only shows the changes in three sensor modalities as explicit input variables. However, in general T() transforms all modality channels, which includes memories in the first-person doing the modeling. As shown, T() acts directly on the stimulation pattern on the right side of the reality model and produces an expected stimulation pattern in the left-side detector arrays. The detector arrays are generally not in the same state on the left and right side. Their difference is parameterized by the state interval t′ − t, where the time parameter names the state of the physical array. In Fig. 4.2 the model of the rest of the Universe is not defined by the detector arrays but is buried inside T( . . . , t, t′). The t′ − t variable specifies how this model must be transformed to produce the change in the detector arrays. When t′ = t it is assumed the rest of the Universe has not changed during the execution of a cycle and therefore its model state need not be incremented. In this case T() only handles the interaction between the internally simulated detector array cell regions, and no external change is identified as the cause of the sensations.
For an isolated cycle, time is an internal parameter, marking the progress of the cycle's execution. If the first-person adopts an external clock, then time is brought in through an additional detector array channel which holds the position of the clock pointer. In this case the state of the rest of the universe is handled as another observable, and the time-transfer function would propagate all detector array states on an equal footing. The form of this more symmetric relationship is

\( (\delta_1', \delta_2', \delta_3', \ldots, t') = T(\delta_1, \delta_2, \delta_3, \ldots, t) \)    (4.2)
Further details of how the T() function implements interactions between all parts of the Universe have been provided by the author (Baer 2011). The T(δ1, δ2, δ3, . . . , t, t′) form appropriately shows the compatibility with quantum theory and includes the Newtonian assumption that time is an independent a priori quantity not affected by the activities in the Universe. By temporarily adopting this erroneous tradition, both the t and t′ variables are assumed to be supplied by some mysterious agent. This tradition further assumes that time, that is, the state of the universe, propagates relentlessly forward, driven by this mysterious agent, and thus no loop in the process universe can change the progress of time.
This assumption is also adopted here only to show compatibility with existing theories but is incorrect in the more general theory of cognitive beings.
The model of physical reality should not be confused with a con-
ventional semiconductor input and output gate because it represents a
process between the two sides of time. This corresponds to the ion-trap implementation of a quantum computer, which is constructed with a string of ions (Cirac and Zoller 1995). The program calculations in such a computer are performed by illuminating each ion with light pulse sequences, thus producing new states of the same ion at a later time. The gate is
thus producing new states of the same ion at a later time. The gate is
thereby implemented by a transform in time, not between spatially sep-
arated input and output leads. In our case, the program corresponds to
T(), while the detector arrays are the ions. The vertical axis of our real-
ity model corresponds to space and conventional input/output happens
between modalities.

4.3.1 Is there a Ding-an-Sich reality?


Specifying the interaction between the content of simulated sensor arrays or actual internal memory arrays as a mathematical rule does little to enhance our ontological understanding. This situation will not change when we approximate T(δx(t), . . . , t, t′) with the quantum unitary operator U(t, t′)ψ(x, t) in Section 4.5, or reference the many detailed expressions of it found in the literature. When encountering such mathematical rules it is natural to ask, What reality is actually modeled by such expressions? In our notation, the question boils down to whether T() or its approximation as a quantum operator U() should be some representation of the entities-in-themselves rather than just a mechanism that implements changes in our sensor arrays.
Niels Bohr was convinced that it was useless to speculate about the ultimate nature of reality because the disturbances in the detector arrays define the first-person's total base of knowledge. Hence he argued that there was no justification for imagining anything inside the physical model of reality besides the rules that manipulate the disturbances, δ, and the mathematical formalism required to connect those disturbances at two different times. This has become the dominant view among quantum physicists and has therefore led to the opinion that quantum theory is simply a calculation tool, and it is silly to look for any further meaning in its symbols. Bohr had some good reasons for this opinion. It is clear
from our discussion and the architecture of cognitive processes shown
in Fig. 4.2 that the model of physical reality is an internal structure that
is updated through external measurement to be consistent with external
detector array stimulation. However, the model of physical reality is not reality itself and therefore is indeed merely a calculation tool designed to
manipulate sensory stimulation within the cognitive process. Any number
of theories can perform this function in the cycle. We could be a brain in a vat with a computer programmed to stimulate our detector arrays, so why not concentrate on the manipulation efficiency and accuracy rather than getting distracted with speculative visualizations of what we can never verify directly?
The downside of Bohr's anti-realism viewpoint is that it is not satisfying to individuals who have an intuitive feeling that some form of reality exists, and that it closes the door to those who seek some improvement in our understanding of that reality, even if such improvement requires an evolutionary leap in our display mechanisms to make it imaginable.
If we go back several paragraphs and return to the function of the internal eXplanation process in Fig. 4.2, we can see that this function traces the appearance of sensation backwards in time to the stimulation in the sensor arrays that must have happened in order to cause the observables. Stimulation to our retina is certainly a cause of optical appearances, but it is not the ultimate cause. After we have calculated and updated the stimulation pattern in our simulated retina, the story does not end. An add-on to the original quantum theory called Quantum Electrodynamics (QED) handles the calculation from the retina to the emitting surface; however, the quest for a cause does not stop there. Where did the surface come from? How was the illumination beam placed to make it visible? The causal chain goes all the way back to the origin. The unitary operator of quantum theory is formally written in terms of the Hamiltonian energy operator as \( U(t', t) = e^{-2\pi i H t'/h}\, e^{2\pi i H t/h} \), where the right term transforms the ψ function back to time zero and the left term propagates it forward to time t′. Time zero is an arbitrary point in the past when initial conditions are specified. Generally it is a point in the past where the cause of the sensation meets a possible intervention action that can update the initial conditions and thereby change the sensation. The sequence of backwards cause calculations introduces a nested hierarchy of deeper and deeper calculations that stops when the cause-control point is reached. For physicists this point is usually the last chance to control an experiment before it starts. Ultimately, for the big cosmological experiment it is the origin of the Universe. At this point the wave function of the universe is updated, a state change δt is introduced, and the subsequent forward calculation eventually produces the next expected stimulation on our first-person simulated retina, which is then measured to produce the next expected optic appearance.
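Written out, the propagator's two-step action on a state is the standard identity (in the h-and-2π convention used above):

\[ \psi(t') = U(t', t)\,\psi(t) = e^{-2\pi i H t'/h}\,\big(e^{2\pi i H t/h}\,\psi(t)\big) = e^{-2\pi i H t'/h}\,\psi(0), \]

so the right-hand factor undoes the evolution back to the initial conditions at time zero, while the left-hand factor re-runs it forward to t′; an intervention that updates ψ(0) therefore changes every subsequently calculated sensation.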
Examination of this algorithm shows that quantum theory, as a candidate for the physical model used in the first-person's thought-process, is incomplete because:
1. It requires the definition of a universally applicable time-independent
energy operator called the Hamiltonian, which is usually derived from
a classic visualization and describes a classic Ding-an-Sich evolving, in
parallel and outside the realm of quantum theory, which includes the
necessary measurement instrument in the here and now.
2. The cause of the next expected observable is attributed to a state change δt imposed by that outside Ding-an-Sich on the ψ() function without specifying the cause of that change as anything other than an inevitable progress in time, rather than attributing that state change to an interaction with the first-person.
Even if the classic observer is reduced to mere detector arrays that only define the space within which the ψ() evolution progresses, they, like consciousness itself, must still emerge through some additional pro-
cess. Quantum theory clearly provides an advance over classical thinking
because it includes an observer outfitted with detector arrays through
which measurement operations generate observable experiences. How-
ever, the requirement for some physical reality beyond the state function
has not disappeared. Quantum mechanics, as the diehard realist Ein-
stein contended, is not complete.
The issue is not whether there is an external independent reality but rather whether the words Ding or Thing usefully describe that reality. For the naïve realists described in Section 4.2, physical realities are composed of things. However, in the generalized model introduced in this section the words entity, event, or Ereignis would be more appropriate. Hence we generalize Kant's Ding-an-Sich to an Ereignis-an-Sich as a description of an independent reality beyond our observations. Bohr correctly dismisses physical reality as composed of things such as apples or electrons, but dismissal of any speculation about the ultimate nature of reality is premature. The next section elaborates on the operations required to implement cognitive processing loops and shows why explicit symbols describing entities-in-themselves are necessary to complete our physical reality model.

4.4 Thought processes that create the feeling of Entities-in-Themselves
In order to proceed with a further analysis of these issues we will reduce
the complexity of the scene from one that includes Mach, the earth,
and sky to one that includes only the single apple sensation held in
Mach's hand in Fig. 4.1.

[Fig. 4.3 Architecture of a human thought process that creates the feeling of permanent objects in our environment. The diagram shows an inner measurement-explanation cycle carrying the observable change, and an outer cycle in which the L() and P() processes maintain the permanent entity A around the reality model T(). As a symbol, A means a sensation of the real apple; as a physical object, A processes the change.]

Concentrating on an apple as a test experience allows us to analyze how a single sensation is processed in order to achieve the feeling of solid reality characteristic of our daily experience.
This is done by reducing the number of parallel processing channels to only those required to carry information about the apple. The apple shown near the top of Fig. 4.3 combines the three modalities discussed in the last section into one superimposed sensation. The combination is shown as a registered outline surrounding a color-filled blob containing a visualization of a mass-charge structure. The outline and color blob are now combined in an inner cycle representing both the optic and expected-touch-sensation. This allows us to de-clutter the drawing and emphasize the relationship between the opti-tactile and the reality-belief modality channels, that is, between two parallel cognitive loops. This example also serves as a template for the general relationship between an observable change and a non-observable permanent entity doing the changing. In subsequent sections we will assign alternative sensory modalities to the two interacting loops to show how sensory experiences in one modality are explained by visualizations in a second modality and how such dual relationships are combined to form multi-sensory experience hierarchies.
As in Fig. 4.2, the processing loops run through observable-to-
theoretical converter boxes on the explanatory right branch and
theoretical-to-observable converter boxes on the left measurement
branch. We have designated the L() and P() boxes with symbols differing from M() and X() in order to emphasize their role in establishing and recalling the permanent entity that changes, in contrast to the change itself. The dual-cognitive-cycle architecture connecting change with the entity changing can be applied to many situations and used to build hierarchies of change within change within change, and so on.
The arcs show the transport of entities from one process to the next
and have no further processing significance.
The model of the human thought-process contains several symbols that are defined as follows. A is a description of what models the real apple, while ΔA is a description of the change in A. I will use bold underlined first capital letters as symbols-of-physical-reality that refer to descriptions of entities-in-themselves. Lower case letters will refer to symbols-of-sensations or observable descriptions in the Positivist tradition discussed earlier. Hence Fig. 4.3 shows the optical apple sensation, referenced in English as apple and contracted to a mathematical symbol δa. This sensation is connected to a change ΔA, while the apple reality belief sensation, referenced as the apple's mass-charge and contracted to a, is connected to the real Apple, A, doing the changing. The words real apple, or the capitalized name Apple, refer to entities outside the cognitive loop which can never be experienced directly and may or may not be correctly modeled by the vectors A.
The logic of the calculation in the cognitive loop is that the apple sensation propagates through a series of transformations, X(), backwards through a causal chain until it is finally explained as a change in a model of a real external entity. At some point inside the T() transform, A + ΔA exists as a combined system and then separates to release ΔA. This change is propagated through the model and to the simulated first-person optical detector array, where it is processed by the M() function to produce the report of a change, δa, which is displayed as the first-person's optical apple sensation. The inner circle represents a processing path that handles the change in the entity itself, while the outer processing cycle maintains a fixed and permanent belief in the entity itself. If no change happens, the feeling of the permanent structure persists in the first-person.
The A is created as a reality belief during a creation-learning process L(), and similar symbols populate the physical reality model as a set of permanent structures used to explain sensations. These structures were created as an explanation of the permanent reality feeling symbolically represented by the mass-charge icon. This feeling is generated by the projection function P() from A. Thus the outer cycle constantly refreshes
the feeling of permanent reality by generating and in turn explaining itself
in terms of A. Once learned, however, A remains in the reality model to be used by the time transform T(A, ΔA, . . . , t, t′) to calculate future change because some entity has changed from the time state labeled t to the one labeled t′.
What we have demonstrated is that the feeling of permanence inherent in human cognitive processing requires symbols such as A that define the entity doing the changing in our model of reality. Further, the configuration of learned entities provides the framework for what can be explained. If no change in our learned configuration of permanent entities is available, the proposed change cannot be accommodated in the model and will be rejected.
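The dual-loop bookkeeping just summarized can be illustrated with a small runnable sketch. Everything numeric in it is invented for illustration; only the roles of the M(), X(), P(), L(), and T() converters come from the text.

```python
# Dual cognitive loop of Fig. 4.3, reduced to scalars (illustrative only).
A = 1.0                      # learned symbol-of-reality: the permanent Apple model

def X(sensation):            # eXplain: sensation -> change dA in the model
    return 0.5 * sensation

def T(A, dA):                # time transport: A + dA combine, then release dA'
    return 0.9 * dA          # toy propagation of the change through the model

def M(dA_prime):             # Measure: model change -> optical sensation da
    return 2.0 * dA_prime

def P(A):                    # Project: permanent entity -> feeling of solidity
    return A

def L(feeling):              # Learn: feeling -> refreshed reality belief
    return feeling

da = 0.3                                  # initial optical apple sensation
for _ in range(5):
    da = M(T(A, X(da)))                   # inner loop: the change being processed
    A = L(P(A))                           # outer loop: the entity doing the changing
    print(f"sensation da = {da:.3f}, belief A = {A:.3f}")
```

Note how the outer loop leaves A fixed (the feeling of permanence persists) while the inner loop carries a decaying change, matching the claim that, absent new stimulation, only the permanent structure remains.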

4.4.1 The dual role of symbols-of-reality


Symbols-of-reality, A and ΔA, represent entities that cannot be observed directly. They act like memory structures in the cognitive loop because they are translated into meaning by the measurement and projection functions M() and P(), respectively. In this referential sense the meaning of these symbols is the observable associated with them. Likewise, observables are coded into such symbols-of-reality much like experiences are coded into words. The referential meaning of a change, ΔApple, happening in a real Apple is the sensation δa it causes. In short, the referential meaning of ΔA is δa and similarly the referential meaning of A is a. The statement I see an apple is a shortcut for describing the process I see the meaning of a ΔA, which I take as evidence that the meaning of an A exists behind my observable experience. This processing results in the sensation of an apple, δa, and the sensation of its solidity, a, co-located in the same place.
If both the appearance of sensations and the appearance of explanatory sensations are accommodations made in our cognitive loop to some external influences, what then is the cause of these external influences that are being accommodated? The answer is found not in the referential meaning of the words cause of these external influences but rather in their functional meaning. The functional meaning defines what the symbol does on its own, not what it means to someone else. All symbols are implemented as physical objects and as such exhibit properties of the form and substance from which they are built. Symbols A and ΔA are no exception. As symbols they carry the meaning of their observables; however, when implemented as physical entities, they have minds of their own that interact with other physical entities. For us looking down on the cause of these external influences, A, or ΔA, we may simply see
letters that reference some aspect of the universe. However, as parts of a
processing mechanism it is their physical attributes, that is, the material
weight, size, and so on, that must be carefully chosen so that these let-
ters can physically interact with each other and the rest of the simulated
universe inside the T() function to actually produce the observables we
expect to see. Quite clearly the symbols A and ΔA must be implemented
in material that automatically carries out their function in the model of
physical reality.
To understand this dual use we can use a digital computer analogy. If A and ΔA are symbols used in a program, then presumably the programmer
had some meaning in mind when the symbols were assigned. However,
as soon as they are compiled and executed in a program these symbols
are implemented as voltages or currents in the electronic circuitry of
a machine. The machine knows nothing about the meaning intended
by the programmer but merely follows the physical laws of electricity
and magnetism to manipulate its registers and logic gates. By analogy
the brain of the first-person contains A and ΔA entities which when
processed through the cognitive loop circuitry produce their referential
meaning consisting of the observables they represent. However, inside
the brain A and ΔA are implemented as entities that act on each other as
physical objects. If a physical configuration of neural structures is built
that allows the brain to produce the desired result, it is not necessary to
believe those neural structures mean anything but what they do.

4.4.2 Under what circumstance is there reality in the mathematical formalism?
We can now understand how Niels Bohr and the Copenhagen School
could get away with the contention that the reality of quantum mechan-
ics was in its mathematical formalism. If A and ΔA were the symbols of a
well-defined theory, they would be instructions to a physicist to perform
certain calculations. These instructions are usually coded into the phys-
ical shape of the letters used in the theory. These shapes are recognized
by the calculating physicist and loaded as instructions into the struc-
ture of his brain which then performs the demanded manipulations. For
example, the time transform T(A, ΔA, . . . , t, t′) was introduced earlier as such an instruction. It produces an array, {A′, ΔA′}, without our needing to imagine what other sensation these symbols stand for. They tell the physicist to do something, not to substitute visualization for the symbol. All the physicist needs to know is that their meaning can be found by applying a measurement operation M(ΔA′) to produce an observable δa′. There is no use in visualizing an additional meaning to A. As a
symbol in a cognitive cycle their meaning is their respective observable.
As an instruction in the same cycle their meaning is their effect on each
other. Thus both the symbol and the physical reality with which they are
implemented are critical.
When dealing with physical theories, such as quantum mechanics, we
can take for granted that its symbols are implemented in a brain which
has been specifically trained to contain physical entities that execute the
called for operations. Additional visualizations may be a heuristic aid,
but treating such visualizations as further descriptions of reality could
turn physicists into iconoclasts who then attach excessive reality to their
visualizations. Even though we treat the Apple in our model of physical
reality as a symbol that stands for some real entity outside ourselves, the
answer as to what that external reality is will be returned in the form of an
observable sensation. We explicitly show this sensation as a mass-charge icon in Fig. 4.2 and Fig. 4.3. The fact is, no matter how much we try to find an external reality, we cannot get outside our own cognitive loop.
For this reason Niels Bohr and the Copenhagen gang were justified in
asserting that the only reality one will find is in the formalism of mathe-
matical symbols which comprise the reality model and all we can really
do is calculate. However, such a statement is only justified if we define
formalism as the physical implementation of the mathematical symbols. In
other words it is justified if the mathematical symbols are implemented
correctly in the structure of a physicist's brain. There is nothing magic about the letters on a page; it is how they are implemented in physicists,
and eventually in the rest of society, that provides a functional model of
physical reality.
Recognizing the futility of seeking reality outside the physical imple-
mentation of mathematics is a practical stance since we cannot get outside
our loop. However, the elimination of an independent Ding-an-Sich in
favor of a mathematical description of possibilities can at best be seen as
a temporary phenomenon in the development of scientific thought. Just
because we cannot exhibit external reality in our own loop does not mean
there is no such reality. All it means is that Bohr came from a tradition
and was talking to individuals who believed the collection of cars, trees,
and apples in front of their noses was external reality itself. Such indi-
viduals naturally define the word external as outside their observable
bodies rather than outside their cognitive loop. For such individuals
the meaning of the symbols-of-physical-reality is what they feel to be real
behind the sensations in front of their noses. If quantum mechanics proposed ψ as its primary symbol-of-reality, then the feeling of reality projected into observable sensations is the probability, ψ*ψ ≤ 1, of its interaction with the observer. The reality behind this probability feeling is the
mathematical formalism that implements the quantum physical reality model in the trained physicist's Brain and nothing more. This clarifies Bohr's position because the apparent reality outside our observed bodies is indeed generated by an implementation of the mathematics.
In addition the view we have been developing follows the later opinion
of Werner Heisenberg who believed quantum theory was the physics of
our knowledge of reality rather than reality itself. That quantum mechan-
ics should therefore be at least a primitive model for the operations of a
cognitive system has been suggested by the author and others (Walker
2000; Baer 2010b). Once we adopt this viewpoint and recognize our-
selves as a cognitive loop that is doing the knowledge processing, then
it is quite obvious that the reality behind our observations is indeed the
calculating mechanism of our own physical reality model. The physical
reality model that processes our knowledge is only part of reality and
visualizing it as the universe should not be confused with a visualization
of all that is out there.
If quantum theory has developed the physics of our knowledge of
reality, the natural question to ask is What could reality itself be?

4.4.3 Is reality a set of interacting cognitive loops?


The word Apple refers to an entity inside the first-person's cognitive loop and the sensation apple refers to an experience also inside the first-person's cognitive loop. It is highly unlikely that these entities appear
inside the first-person for fun, but rather that they are both part of the
mechanism the first-person uses to accommodate influences from enti-
ties outside his or her own loop. Of course, external entities themselves
cannot be inside our own loops but our accommodations can be. The
accommodations are certainly real because they are aspects of our feelings
and sensations which are undeniable. Whether or not these accommo-
dations are knowledge of some entity outside might be questioned but
the knowledge itself, where knowledge is used as an alternative for the
word accommodations, simply exists. Our goal is not to find and identify
an external reality but rather to find a knowledge structure that improves
our accommodations and makes our sensations of them more useful.
If we are cognitive loops that perform processing activities, it is rea-
sonable to assume that we are not alone, but rather, that we are part of
a universe of cognitive loops that interact with each other to optimize
their accommodations. We may not be able to contain the external real-
ity itself, but we can accommodate it by managing a model of it. The
implication of such a hypothesis is that the model of physical reality should
not contain a model of a quantum universe nor should it contain a model of a
classic universe but rather a model of interacting cognitive loops.
The groundwork for such a substitution has already been laid. Figure
4.3 shows the architecture of the human thought-process required to cre-
ate the feeling of reality we experience when looking at everyday objects.
This architecture shows two interacting loops. In the inner loop an apple sensation is transformed into a change ΔA that interacts with an entity A being changed. This entity in turn determines the feeling of solid reality associated with a in the outer loop. Nothing has been said about the size of this entity. It is used as an example of any entity being accommodated. As presented, A refers to the entire macroscopic body of an apple and ΔA to the cumulative changes occurring on its surface required to emit light rays. We are talking on the order of Avogadro's number (6.02 × 10²³) of
individual quantum actions. This is nothing close to the quantum limit.
In this domain one would expect classic physics to apply. However, we
are not presenting the physics of observables but rather the physics of the
processing system within which observables occur. That is, the physics of
a cognitive being reduced to the essential form of a cognitive loop. The
loop does quantum theory.
A single cycle of such a loop processes the physical change of an entity
we call the Brain from the past through a display of sensory experience
called now which then influences the changes in the Brain at the future
side of the time interval circumscribed by the cycle. A single closed-cycle inner loop as shown in Fig. 4.3 simply holds the change as a static observable experience; that is, the recall of a memory appears as a thing but is a stable activity. The general architecture describes what you, the reader, conceived as a processing loop, do. The formation, changes,
and destruction of processing loops forms a new vision of reality as inter-
acting cognitive loops. The development of the physics describing such a
new vision is in its infancy. What can be said at this juncture is that in the
limit of small changes which do not destroy the entities being changed
the theory of cognitive loops will converge to the theory of quantum
mechanics. This approximation will be discussed in the next section.

4.5 Quantum theory approximation of cognitive loops


The similarity of the architecture of a cognitive loop and quantum theory
can be qualitatively understood by substituting an Atom for an Apple in
our previous discussion and substituting Ψ for A and ΔΨ for ΔA. Like the Apple we cannot see an Atom, but we visualize its existence as a gravito-electric permanent ground-state structure Ψ by imagining its orbitals as a mass-charge distribution. Assume a photon of the right frequency bounces off a spherical mirror that focuses its energy on the Atom. The Atom absorbs the photon as a change ΔΨ and thus transitions into an excited state Ψ + ΔΨ. After some time the Atom releases its change by emitting a photon and returns to its ground state Ψ. The photon hits the concave mirror and bounces back toward the Atom and the process repeats. The inner cycle of Fig. 4.3 has now been completed. During each cycle a change is processed from ΔΨ to a photon and back again. If we identify ΔΨ as a mass-charge separation change and a passing photon with an observable sensation, an atom emitting and absorbing a photon would be an extremely simple cognitive system that, if completely isolated, would maintain the single experience of a light flash forever.
This simple case shows how the architecture of quantum theory can
be used to explain consciousness. The conscious system described by
quantum theory is visualized as a pattern of observables explained as a mass-charge separation structure which acts as the content of a permanent Hilbert (that is, memory) space (see the NCCs in Fig. 4.2). The
mass-charge structure emits gravito-electric influence patterns which
are processed through interconnecting logic gates with which a quan-
tum physical reality model is implemented to produce influences that
determine new mass-charge separation patterns. The actions required to produce the separation patterns are measured as observable sensations. Some of these sensations are derived from, and are used to control, the mass-charge separation patterns not in an internal Hilbert space defining memory but rather in an external Hilbert space defining the sensor arrays that interact with the rest of physical reality.
The qualitative descriptions provided above do not prove that quantum theory describes a cognitive loop containing consciousness.
However, the plausibility of this hypothesis is increased by providing a
detailed mapping between the nomenclature of quantum theory and the
general description of operations and functions of a cognitive loop. To
do so requires us to formalize an assumption about the nature of physical
reality. Let us assume for the moment that A models a physical entity that
is composed of a mass field, m(x), and a charge field, ch(x), spread out in
space. Here, x, is the name of a space cell in which the mass or charge dis-
tributions are located. Furthermore the mass projects a gravito-inertial
influence field while the charge projects an electromagnetic field. The
generally accepted functions relating masses to their influence fields are
Einsteins general relativity equations and those relating charges to their
influences are Maxwells equations. These influence fields apply physical
forces so that each mass-charge point particle feels the combined forces
from all other such entities. Since the gravito-inertial forces do not necessarily pull the mass in the same direction as the electromagnetic forces pull the charge, the mass-charge combination is pulled apart.
This introduces a new possibility that has not been used by physicists
because of their habit of defining particles as single bundles of properties
without asking the question, What holds mass and charge together?
The answer is not known at this point; however, we can speculate that if the separation is small enough the two properties will be held together by a linear restoring force Fc = −kc ΔZ, where kc is a mass-charge attraction spring constant and ΔZ is the world-line distance between a mass and charge. This force has been identified as the cognitive force (Baer et al. 2012). The action held in the separation can be formally identified with quantum theory by a series of substitutions,

\( E(x)\,dt = k_c\,\Delta Z^{*}\,d\Delta Z = \Delta Z^{*}\,(k_c\,d/dt)\,\Delta Z\,dt = \psi^{*}\,(ih/2\pi)\,(d\psi/dt)\,dt = \psi^{*} H \psi\,dt \)    (4.3)
Where:
E(x) dt = the action in a cycle of standard length dt, which is small enough so that the energy density E(x) can be treated as a constant.
kc = ih/2π; the spring constant.
Hψ = (ih/2π) dψ/dt; the Schrödinger equation.
ΔZ = ψ, the wave function, for small enough ΔZ.
h = Planck's constant.
x = the Hilbert space cell name labeling each of the ΔZ(x) or ψ(x). If space is the only observable, x equals the Cartesian coordinates (x,y,z).
The total energy in the entire mass-charge configuration is calculated
by integrating Equation 4.3 over all space cells. The reader will recognize
the integral on the far right as the form of the measurement equation of
quantum theory. The full mapping to the architecture of quantum theory
is accomplished in Fig. 4.4.
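For concreteness, the space-cell integration just described takes the standard form (a minimal rendering under the identification ΔZ = ψ, with the integral running over all cell names x):

\[ E_{\mathrm{total}} = \int \psi^{*}(x)\, H\, \psi(x)\, dx, \]

that is, the expectation value of the Hamiltonian, which is the measurement equation referred to above.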
Here the classic world observable is defined by a spatial-temporal
energy function. The existence of this observable is processed into a
real physical world displacement by the explanation Process III. The
result is a description of the physical reality in terms of a displacement
pattern as a function of time and space. The time derivative of this dis-
placement pattern is related to the Hamiltonian energy operator. This relation is known as the Schrödinger equation or von Neumann's Process II. Lastly, observables are extracted from the displacement pattern by von Neumann's Process I (von Neumann 1955).
Figure 4.4 shows material energy in the inner ring corresponding to the explain-measurement cycle in Fig. 4.3. This cycle carried the change around the cycle [· · · ΔA – δa – ΔA · · ·], which in this case is interpreted as a displacement-energy cycle. In Fig. 4.3 the inner loop pertains to a change while the outer loop pertains to the permanent entity being changed. In Fig. 4.4 the permanent entity being changed is felt as the
space background which is assumed to be a priori in quantum theory.

[Fig. 4.4 Mapping quantum theory to the architecture of a cognitive cycle. The diagram links the classic-world observable E(x,t) Δt (Process 0) through the eXplanation branch (von Neumann Process III) to the quantum-world state ψ(x,t), which evolves under the von Neumann-Schrödinger Process II, Hψ(x,t) = iħ dψ(x,t)/dt, and is measured back into an observable by von Neumann Process I.]
The name of A, the entity being changed, is mapped to the names of the space cells labeled x in Fig. 4.4. Thus ψ(x,t) is properly identified as a displacement pattern in space occurring in the cycle labeled t, which lasts for an amount of time Δt, and is known as a matter wave, originally postulated by de Broglie. The square of these displacement amplitudes resulting from measurements provides the material energy content of space.
The key to the derivation above is the assumption that the displacement
between charge and mass is small enough so that the force holding them
together is a reversible linear function of the distance. The significance
of this approximation is that the deviation from a stable mass-charge
can be processed through the cycle without damaging the underlying
configuration. In conventional terminology this means the objects move
from place to place without destroying the space they travel through.
Once we have an observable sensation described by action happening in a cycle, it can be explained by the existence of a mass-charge separation Z = L(E Δt) without restricting ourselves to small separations. Let us assume some ground-state configuration of mass and charge is stable, so that the general stability conditions Z = L(P(Z)) or E Δt = P(L(E Δt)) apply for all parallel processing cycles labeled by the space parameter x. This implies that a balance exists between the physical gravito-electric influences on each mass-charge point and the influences between the mass and charge at each of those same points. Each loop, labeled x, accommodates the physical gravito-electric influences through its separation Z[x] and becomes aware of a change ΔE[x] Δt = P(ΔZ[x]), which replaces the generalized sensation δa[x] as the feeling of changing objects. The objects doing the changing are permanent structures which are felt as empty space sensations. Changes ΔZ[x] in these structures are perceived as the material content of space.
Analysis of the brain as an open system (Vitiello 2001) requires that
the environment within which the brain operates can be accommodated
within the brain by doubling its degrees of freedom. This second set of
variables acts as a physical model of the environment and has here been
implemented by giving both charge and mass of every single particle an
individual location in time and space. The Brain entity interacts with
the rest of the universe through gravito-electric influences and becomes
aware of these influences through measurement of their balancing mass-
charge separation inside its own cognitive cycle.

4.5.1 Interpretation of the wave function


We have interpreted the wave function of quantum theory as a small enough deviation, ΔZ, in a mass-charge separation Z, and have interpreted the observable of quantum theory as action, A = E Δt, which is experienced as sensations and feelings occurring inside the feeling of a space cell (Baer 2010a). If correct, what is consciously experienced is the form of energy currently unknown in physics mentioned by Velmans (2000). A field of physical systems can be visualized as an array of clocks (Bohm 1983). Such an array is used to represent the NCCs in the node of a cognitive cycle as shown in Fig. 4.5. Here a single clock named x in state t has been enlarged to identify its generic change ΔZ[x,t] as a difference in the position of its pointer between where it actually is and where it would have been without the disturbance.
If the disturbance is small enough, the restoring force is linear and
one could imagine the actual and undisturbed pointers connected by
a spring. The resulting motion is oscillatory around the undisturbed
pointer. Hence as the undisturbed clock pointer moves around the dial,
the disturbed clock pointer oscillates around it. Sometimes the disturbed pointer is a little bit ahead and sometimes it is a little bit behind the undisturbed pointer, providing an oscillation in time.
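In symbols, and purely for orientation (the effective pointer mass m and phase φ are not specified in the chapter), the linear restoring force yields the textbook harmonic solution:

\[ m\,\frac{d^{2}\Delta Z}{dt^{2}} = -k_c\,\Delta Z \quad\Longrightarrow\quad \Delta Z(t) = \Delta Z_0\cos(\omega t + \varphi), \qquad \omega = \sqrt{k_c/m}. \]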
If each of the clocks in the field were completely isolated, which in
our context means it is the physical reality node of an isolated cogni-
tive cycle, then the oscillation would continue indefinitely at constant
amplitude and period.

[Fig. 4.5 Reality model of space and content. The diagram shows pointer deviations ΔZ(x,t) in an array of clocks modeling space cells; deviations ΔZ and energies E are interconverted by the X(), L(), and P() processes around the time transform T().]

However, no system is completely isolated and,
at minimum, we can assume a small gravitational coupling between the clock pointers in the array. This coupling introduces a spatial variation in the amplitude and phases of the oscillations between the clocks so that, rather than independent and random motions, the clock pointers execute coordinated motion patterns. It should not be surprising that the form of the pattern so produced satisfies the Schrödinger wave equation, and therefore the disturbance pattern is

\( \psi(x, t) = \lim_{\Delta Z\ \text{small enough}} \Delta Z(x, t) \)

as identified in the previous section. This result can be demonstrated by visualizing a set of box springs connected to each other in a mattress. When undisturbed, each spring sits statically in its equilibrium position. However, a slight bump on its side will set up oscillatory patterns throughout the entire spring array. The wave equation these patterns conform to can be calculated from classical physics using the theory of small oscillations and is identical to the Schrödinger equation in the non-relativistic approximation (Goldstein 1965, pp. 318–346; Baer 2010a).
Of course what you see is only your internal accommodation to the
optical stimulation, but the box springs themselves act as a Hilbert Space that hosts oscillations and represents a model of a physical quantum space.
When quiescent each node in the box spring moves, just like the array
of clock pointers, exactly along an equilibrium trajectory determined
by its relation to other nodes in the array. If such an array is used to
model the space of physical reality, then as long as all the nodes are at
equilibrium there are no oscillations and the feeling associated with such
a configuration is that of empty phenomenal space. Oscillations in the
array will show up as the feeling of material content in that feeling of
empty space.
We have used the term clock to refer to a classic physical system mod-
eling space cells and a deviation in a clock pointer from its dynamic
equilibrium motion as an interpretation of the wave function. In the last
section we made a mapping between the wave function and the distance
between mass and charge. The two describe the same interpretation.
Any physical system such as a clock mechanism can in turn be reduced
to wheels and springs. These in turn can be reduced to configurations
of molecules and atoms which in turn can finally be reduced to patterns
of mass and charge. The existence of mass, charge, space, time, and the
gravito-electric influence of mass and charge in space and time are con-
sidered to be the a priori metaphysical foundations of classical physics
(Goldstein 1965, p. 1). If we look very carefully at the position of a clock
pointer, we will find some configuration of mass-charge at its tip. The
equilibrium position for the pointer mass is determined by gravitational
influences from all the other clocks, while the position of the pointer we
actually see in the optic domain is determined by its charge equilibrium
position. The difference between where the gravito-inertial field wants
the mass of the pointer tip to be and where the electromagnetic field
wants the pointer tip to be results in a physical mass-charge separation
pattern we have identified with de Broglie waves.
We introduced the term clock to describe a space cell without know-
ing the exact physical nature of space. We are assuming a classic physical
system that tells its own time will eventually be found to be adequate to describe space cells; however, investigations in this direction are ongoing and speculative (Cahill 2003). The reason to use clocks rather than
just mass-charge distributions in our discussion is that a clock introduces
the concept of time into the mass-charge separation Z. An oscillating
deviation may complete a cycle earlier or later than its neighboring cycle.
The vector describing such a deviation has both a spatial and tempo-
ral component and time is usually displayed on an imaginary axis. This explains why a complex number ψ(x,t) is necessary to describe what at first glance might appear to be a purely spatial mass-charge separation. Our ability to reduce all the structural organization of complex physical systems to their basic cognitive cycles is the key to understanding the hard problem of consciousness in the Brain from a neurophysiologist's
perspective, and this will be done in the next section.
4.6 The neurophysiologist's point of view


As first-persons, we have taken on the role of a neurophysiologist looking at Mach's body and imagining what this gentleman is thinking. Under normal circumstances, a neurophysiologist will consider neural pulses traveling between brain regions and chemical agents when reviewing the mechanism that might be causing Mach's mental experiences. But these
are not normal circumstances because we are reading The Unity of Mind,
Brain and World and now must recognize the following.
1. What we believed was Mach's brain is merely a measurement of our own theories, which are not the real Brain causing his experiences.
2. Such measurements produce a display in our internal processing loops that is evidence that we have accommodated influences from some real external Wesen-an-Sich as captured Entities-in-Themselves inside our model of physical reality.
3. A visualization of our model allows us to co-locate the physical phase of a cognitive cycle as the reality behind what used to be seen as Mach's brain.
4. The ontological interpretation of this physical phase as a mass-charge separation field that exactly balances the gravito-electric influences from both external and other internal cycles gives us a tool to visualize the actual mechanism of consciousness.
The latter realization allows us to substitute a visualization of a Hilbert
space built from an array of clocks, reduced to cells of mass-charge
separation, as containers for Neural Correlates of Consciousness (NCC).
This visualization replaces the biochemical structures that used to be considered possible candidates for the NCC and that are now recognized as aggregates of mass-charge patterns. The mental aspect of an oscillating time field is a visualization of an energy flow identified as thoughts, feelings, and qualia in a thought bubble. Objects in everyday experiences are therefore symptoms of accommodations held in the cognitive cycles. When asking Nagel's question, What does it feel like to be a human? (Nagel 1974), the answer is not limited to aches, pains, and emotions, but rather includes the past, present, and future Universe we see and feel around ourselves. That feeling, which classic Westerners used to call the real world, is the internal symptom of our accommodating that which lies forever beyond our grasp outside our loop.
The reduction of the brain to a mass-charge configuration puts it on
the same footing as any other physical system and allows us to conclude
that the nature of all objects can be visualized with a projection of a cog-
nitive loop into our optical or touch sensation of them. The simultaneous
visualization of both thoughts and their physical cause is a model of a
cognitive processing loop. The neurophysiologist can use this model to understand what is going on in a second-person's brain. The aggregation of separations and their energy fields into atoms, molecules, and up the scale to neurophysiologic entities is only beginning (Baer et al. 2012). A
speculative glimpse into this project follows:
The cortex, spread out flat along the vertical space axis in Fig. 4.2, is divided into large regions coded for particular sensory stimuli. The regions are further divided into segregations, consisting of correlated ion states involving 10⁶ ion channels, that is, channel states within cells coding for a particular sensory stimulus (Bernroider and Roy 2004). Coordination within segregations may involve astrocytes, as discussed in Mitterauer's chapter in this book. Only three such regions are shown.
Many more should be added to the diagram to cover both traditional
sensory as well as memory recall and imagination sensations character-
istic of human experience. The function of these aggregates is to act as
coordinated detector arrays providing a Hilbert Space, which defines the model of physical reality internally, and the detector arrays that provide the interfaces to the external world. A configuration of Entities-in-
Themselves is captured as the internal model which has been described as
the implementation of the symbols of the theory. The subelements such
as ions and proteins interact with each other through gravito-electric
influences thus transferring changes between cognitive cycle channels as
discussed in Section 4.3. These influences establish a balancing field of
mass-charge separation which generates conscious experiences. Studies,
such as fMRI or micro-probes, monitor the passage of change, as mea-
sured by the occurrence of action, through the regions or sub-regions
thus mapping cortex geography to observable experience. This provides
an external third-person description of the cognitive process as the change
flowing past the observer while the brain feels the change as a subjective
experience flowing directly through itself along its personal direction of
time.

4.7 Conclusion
We have analyzed the human thinking process and identified its cognitive
processes with the activities described by the formulation of quantum
theory. This identification allows us to conclude that physics already
describes an integrated mind-body mechanism. Although quantum theory is only a linear approximation to the full understanding of cognitive beings, the recognition that matter generates influences that generate matter in an endless loop, coupled with the recognition that those influences are the sensations experienced by and in turn remembered by the brain, opens the door for further development both in physics and the cognitive sciences.
The build-up of mass-charge configurations into electrons, atoms,
molecules, and biological structures organizes fundamental cognitive
activity into forms that could be called human. However, when seek-
ing the origin of consciousness one must reduce even an electron to its
fundamental mass-charge pattern and recognize the process that prop-
agates influence fields and controls that mass-charge configuration. We
propose a new tool that visualizes the cause of mental experiences as
separation patterns and directly equates those experiences to the energy
fields which hold the charge and mass together. This separation energy
is not limited to the human brain but is exhibited by all material. Thus
the entire universe and every part of it exhibits a form of primitive con-
sciousness.

REFERENCES
Aspect A., Grangier P., and Roger G. (1982). Experimental realization of Einstein–Podolsky–Rosen–Bohm Gedankenexperiment: A new violation of Bell's inequalities. Phys Rev Lett 49(2):91–94.
Atmanspacher H. and Primas H. (2006). Pauli's ideas on mind and matter in the context of contemporary science. J Consciousness Stud 13(3):5–50.
Baer W. (2010a). Introduction to the physics of consciousness. J Consciousness Stud 17(3–4):165–91.
Baer W. (2010b). Theoretical discussion for quantum computation in biological
systems. Quantum Information and Computation VIII, Paper #7702-31, URL:
http://dx.doi.org/10.1117/12.850843 (accessed March 22, 2013).
Baer W. (2011). Cognitive operations in the first-person perspective. Part 1: The 1st person laboratory. Quantum Biosystems 3(2):26–44.
Baer W., Pereira A., and Bernroider G. (2012). The Cognitive Force in the Hierarchy of the Quantum Brain. URL: https://sbs.arizona.edu/project/consciousness/report_poster_detail.php?abs=1278 (accessed March 22, 2013).
Bernroider G. and Roy S. (2004). Quantum–classical correspondence in the brain: Scaling, action distances and predictability behind neural signals. Forma 19:55–68.
Bieberich E. (2000). Probing quantum coherence in a biological system by means of DNA amplification. Biosystems 57(2):109–124.
Blood C. (2009). Constraints on Interpretations of Quantum Mechanics. URL: http://
arxiv.org/abs/0912.2985 (accessed March 6, 2013).
Bohm D. (1983). Wholeness and the Implicate Order. London: Ark.
Cahill R. T. (2003). Process Physics. Proc Stud Suppl 2003 (5). URL: www.mountainman.com.au/process_physics/HPS13.pdf (accessed March 6, 2013).
Carnap R. (2000). The observation language versus the theoretical language. In Schick T. (ed.) Readings in the Philosophy of Science. Mt. View, CA: Mayfield, pp. 166 ff.
Chalmers D. J. (1997). Facing up to the problem of consciousness. J Consciousness Stud 4:3–46.
Cirac J. I. and Zoller P. (1995). Quantum computations with cold trapped ions. Phys Rev Lett 74:4094–4097.
Eagleman D. M. (2011). Incognito: The Secret Lives of the Brain. New York:
Pantheon.
Engel G. S., Calhoun T. R., Read E. L., Ahn T. K., Mancal T., Cheng Y. C., et al. (2007). Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems. Nature 446:782–786.
Everett H. (1983). Relative state formulation of quantum mechanics. In Wheeler
J. A. and Zurek W. H. (eds.) Quantum Theory and Measurement. Princeton
University Press.
Faye J. (2008). Copenhagen interpretation of quantum mechanics. In Zalta
E. N. (ed.) Stanford Encyclopedia of Philosophy. URL: http://plato.stanford
.edu/entries/qm-copenhagen/ (accessed March 6, 2013).
Fredkin E. and Toffoli T. (1982). Conservative logic. Int J Theor Phys 21(3):219–253.
Goldstein H. (1965). Classical Mechanics. Cambridge, MA: Addison-Wesley.
Goldstein S. (2012). Bohmian mechanics. Stanford Encyclopedia of Philosophy.
URL: http://plato.stanford.edu/entries/qm-bohm/ (accessed February 27,
2013).
Hagan S., Hameroff S. R., and Tuszynski J. A. (2002). Quantum computation
in brain microtubules: Decoherence and biological feasibility. Phys Rev E
65:061901.
Hameroff S. (1998). Quantum computing in brain microtubules. Philos T Roy Soc A 356:1869–1896.
Hofstadter D. R. (2007). I Am a Strange Loop. New York: Basic Books.
Kafatos M. and Nadeau R. (1990). The Conscious Universe. New York: Springer.
Landé A. (1973). Quantum Mechanics in a New Key. New York: Exposition Press.
Levine J. (1983). Materialism and qualia: the explanatory gap. Pac Philos Quart 64:354–361.
Mach E. (1867). Contributions to the Analysis of the Sensations. Trans. Williams
C. M. Chicago, IL: Open Court.
Maturana H. R. (1970). Biology of cognition. Biological Computer Laboratory
Research Report BCL 9.0. Urbana: University of Illinois.
McEvoy P. (2001). Niels Bohr: Reflections on Subject and Object. Pymble NSW,
Australia: MicroAnalytix.
Mensky M. B. (2006). Reality in Quantum Mechanics, Extended Everett Concept, and Consciousness. URL: http://arxiv.org/abs/physics/0608309 (accessed February 27, 2013).
Metzinger T. (2000). Neural Correlates of Consciousness: Empirical and Conceptual
Questions. Cambridge, MA: MIT Press.
Nagel T. (1974). What is it like to be a bat? Philos Rev 83:435–450.
Nagel T. (1979). Mortal Questions. Cambridge University Press.
Pereira A. (2003). The quantum mind/classical brain problem. NeuroQuantology 1:94–118.
Rosa L. P. and Faber J. (2004). Quantum models of the mind: Are they compatible with environment decoherence? Phys Rev E 70:031902.
Schmidhuber J. (1990). Zuse's Thesis: The Universe is a Computer. URL: www.idsia.ch/~juergen/digitalphysics.html (accessed February 27, 2013).
Schwartz J. M., Stapp H. P., and Beauregard M. (2004). Quantum physics in neuroscience and psychology: a neurophysical model of mind-brain interaction. Philos T R Soc B 360(1458):1309–1327.
Stapp H. P. (1993). Mind, Matter, and Quantum Mechanics. Berlin: Springer.
Summhammer J. and Bernroider G. (2007). Quantum entanglement in the volt-
age dependent sodium channel can reproduce the salient features of neuronal
action potential initiation. URL: arXiv:0712.1474v1 (accessed February 27,
2013).
Svozil K. (2003). Calculating Universe. URL: arXiv:physics/0305048v2 (accessed
February 27, 2013).
Tegmark M. (2000). The importance of quantum decoherence in brain processes. Phys Rev E 61(4):4194–4206.
Velmans M. (2000). Understanding Consciousness. London: Routledge.
Vitiello G. (2001). My Double Unveiled: The Dissipative Quantum Model of the
Brain. Amsterdam: John Benjamins.
von Neumann J. (1955). The Mathematical Foundations of Quantum Mechanics. Princeton University Press.
Walker H. (2000). The Physics of Consciousness. New York: Perseus.
Wheeler J. A. (1983). Law without law. In Wheeler J. A. and Zurek W. H. (eds.)
Quantum Theory and Measurement. Princeton University Press, pp. 182 ff.
Wigner E. P. (1983). The problem of measurement. In Wheeler J. A. and Zurek
W. H. (eds.) Quantum Theory and Measurement. Princeton University Press,
pp. 324 ff.
5 Emergence in dual-aspect monism

Ram L. P. Vimal

5.1 Introduction
5.2 Philosophical positions regarding mind and matter
    5.2.1 Materialism
    5.2.2 Extended dual-aspect monism (the DAMv framework)
        5.2.2.1 DAM and the doctrine of inseparability
        5.2.2.2 Dual-mode in DAM
        5.2.2.3 The concept of varying degrees of the dominance of aspects depending on the levels of entities in DAM with dual-mode
        5.2.2.4 The evolution of universe in the DAMv framework
        5.2.2.5 Comparisons with other frameworks
    5.2.3 Realization of potential subjective experiences
5.3 Explanation of the mysterious emergence via the matching and selection mechanisms of the DAMv framework
5.4 Future researches
    5.4.1 Brute fact problem
    5.4.2 Origin of subjective experiences
5.5 Concluding remarks

5.1 Introduction
Subjective experiences potentially pre-exist in the Universe, in analogy to a
tree that potentially pre-exists in the seed. However, the issue of how a spe-
cific subjective experience (SE) is actualized/realized/experienced needs
rigorous investigation. In this regard, I have developed two hypotheses:
(1) the existence of mechanisms of matching and selection of SE patterns
in the brain-mind-environment, and (2) the possibility of explaining the
emergence of consciousness from the operation of these mechanisms.
The former hypothesis was developed in the theoretical context of
Dual-Aspect1 Monism (Vimal 2008b) with Dual-Mode (Vimal 2010c)

The work was partly supported by VP-Research Foundation Trust and Vision Research
Institute Research Fund. The author would like to thank Alfredo Pereira, Wolfgang
Baer, Andrew Fingelkurts, Ron Cottam, Dietrich Lehmann, and other colleagues for
their critical comments, suggestions, and grammatical corrections.
1 One could argue that the term "dual aspect" resembles dualism and the term "double aspect" suggests complementarity (such as wave-particle complementarity). In this
and varying degrees of dominance of the aspects depending on the levels
of entities (abbreviated DAMv), where the inseparable mental and phys-
ical aspects of states of entities are assumed to have co-evolved and co-
developed. The DAMv framework is consistent, to a certain extent, with
other dual-aspect views such as reflexive monism (Velmans 2008), the
retinoid system modeling (Trehub 2007), and triple-aspect monism (Pereira Jr., this volume, Chapter 10).
DAMv is complementary to the global workspace theory (Baars 2005;
Dehaene et al. 1998), neural Darwinism (Edelman 1993), the neural
correlates of consciousness framework (Crick and Koch 2003), emer-
gentist monism (Fingelkurts et al. 2009, 2010a, 2010c), autopoiesis and
autonomy (Varela et al. 1974; Varela 1981; Maturana 2002), theories
of cognitive embodiment/embeddedness (Thompson and Varela 2001),
and neurophenomenology (Varela 1996). It is also affine to theories of
a self-organization-based genesis of the self (Schwalbe 1991), and the
mind-brain equivalence hypothesis (Damasio 2010).
The latter receives special attention in this chapter. Damasio proposes that brain states and mental states are equivalent. This can be re-interpreted using the DAMv framework: for example, when the physical aspect of the self-related neural-network (NN) state or process is generated in three stages (protoself, core self, and autobiographical self), its inseparable mental aspect also emerges, because of the doctrine of inseparability of aspects.
The hypothesis of emergence is often taken as a mysterious one
(Vimal 2009d). In this chapter, I further elaborate on this hypothe-
sis, considering it a case of strong emergence (according to the concept
advanced by Chalmers, 2006) of SE that can be unpacked in terms
of matching and selection mechanisms. Given the appropriate funda-
mental psychophysical laws (Chalmers 2006), a specific SE is strongly
emergent from the interaction between (stimulus-dependent or endoge-
nous) feed-forward signals and cognitive feedback signals in a relevant
NN. These laws, in the proposed framework, might be expressed in the
matching and selection mechanisms to specify a SE. We conclude that
what seems to be a mysterious emergence could be unpacked partly
into the pre-existence of potential properties, matching, and selection
mechanisms.

chapter, however, these terms are used interchangeably and represent the inseparable
mental (from the subjective first-person perspective) and physical (from the objective
third-person perspective, and/or matter-in-itself) aspects of the same state of the same entity. This is close to the double-aspect theory of Fechner and Spinoza (Stubenberg
2010).
5.2 Philosophical positions regarding mind and matter


One could categorize all entities of our Universe into two categories: physical
(P: such as fermions, bosons, and their composites, including classical
inert entities and neural networks: NNs) and mental entities (M: such as
SEs, self, thoughts, attention, intention, and other non-physical entities).
This categorization entails four major philosophical positions:
1. M from P (P is primitive/fundamental): naturalistic/physicalistic/materialistic nondual monism, physicalism, materialism, reductionism, non-reductive physicalism, naturalism, or Carvaka/Lokayata (800–500 BCE; Raju 1985);
2. P from M (M is primitive): idealism, mentalistic nondual monism, or Advaita (788–820 AD; Radhakrishnan 1960);
3. P and M are independent but can interact (both P and M are equally primitive): interactive substance dualism, Prakṛti and Puruṣa of Sāṃkhya (1000–600 BCE or even before the Gītā) or the Gītā (3000 BCE); see (Radhakrishnan 1960); and
4. P and M are two inseparable aspects of a state of a fundamental entity (such as fermions and bosons, the primitive quantum field/potential2 or unmanifested Brahman; they are primitive). This view is assumed in Dual-Aspect Monism (DAM), triple-aspect monism (stating that M can be further divided into non-conscious M and conscious M), neutralism, Kashmir Shaivism (860–925 CE), and Viśiṣṭādvaita (1017–1137 CE: mind (cit) and matter (acit) are adjectives of Brahman); (see Radhakrishnan 1960).
We will concisely elaborate on (1) and (4); (2) and (3) are detailed by Vimal (2010d).

5.2.1 Materialism
The current dominant view of science is materialism, which assumes that mind/consciousness/SE somehow arises from non-experiential matter such as NNs of the brain. In materialism (Levine 1983; Loar 1990, 1997; Levin 2006, 2008; Papineau 2006), qualia/SEs (such as redness) are assumed to mysteriously emerge from, or reduce to (or be identical with), relevant states of NNs. This is taken as a brute fact ("that's just the way it is").

2 See 't Hooft (2005, p. 4) for the primitive quantum field, Bohm (1990) for the quantum potential, and Hiley and Pylkkänen (2005) for a primitive mind-like quality at the quantum level via active information.
The major problem of materialism is Levine's explanatory gap (Levine 1983): the gap between experiences and scientific descriptions of those
experiences (Vimal 2008b). In other words, how can our experiences
emerge (or arise) from non-experiential matter such as NNs of our brain
or organism-environment interactions? In addition, materialism makes a category mistake (Feigl 1967): mind and matter are of two different categories and one cannot arise from the other. Furthermore, materialism makes three more assumptions (Skrbina 2009): matter is the ultimate reality, material reality is essentially objective, and material reality is non-experiential.

5.2.2 Extended dual-aspect monism (the DAMv framework)


Since materialism has problems, we propose the dual-aspect monism
framework with dual-mode and varying degrees of dominance of aspects,
depending on the levels of entities (the DAMv framework; Vimal 2008b,
2010c), which will be concisely detailed later. This framework is optimal
because it has the least number of problems (Vimal 2010c).
The mental aspect of a state of an entity (such as brain) is experi-
enced from the subjective first-person perspective; it includes subjective
experiences such as color vision, thoughts, emotions, and so on. The
physical aspect of the same state of the same entity (brain) is observed
from the objective third-person perspective; it includes, in this exam-
ple, the appearances of the related neural-network of the brain and its
activities. To elaborate on it further, the physical aspect of a state of an
entity has two components: (1) the appearances or measurements of the
entity from the objective third-person perspective, and (2) Kant's Ding-an-sich or thing-in-itself, whatever that might be (the intrinsic nature
of matter or matter-in-itself is unknown to us; we can only hypoth-
esize what it might be). For example, it may be (1) matter-in-itself
composing physical objects in classical mechanics, (2) mind-in-itself
or mind-like states and processes (Stapp 2009a, 2009b, 2001) in wave
theory of quantum physics, and/or (3) elementary-particle-in-itself in
the Standard Model, based on the particle theory of quantum physics.
Since we do not have consensus about which theory or model is cor-
rect, the DAMv framework should encompass all views until there is
consensus.
If an entity is a classical object (such as a ripe tomato), then we as
third persons can observe its appearance, but we will never know its
first-person experience (if any!), because for that we would need to be the
ripe tomato. One could then argue that in this case the physical aspect of
the state of the ripe tomato is dominant and its mental aspect latent. If an
entity is a quantum entity (such as the electron), then we as third-persons
should observe the appearance of the electron, but we cannot see the
electron (as it is too small); we can measure its physical properties (such
as mass, spin, charge in the Standard Model). We will never know the
first-person experience (if any) of the electron because for that we would
need to be an electron. One could then argue that the physical aspect
of a state of the electron is dominant and its mental aspect is latent
for us.

5.2.2.1 DAM and the doctrine of inseparability In the DAMv


framework, the state of each entity has inseparable mental and physical
aspects, where the doctrine of inseparability is essential to address various
relevant problems discussed in Vimal (2010d). There are a number of
hypotheses in this framework. In Vimal (2010c, 2010d, 2010g), three
competing hypotheses about the inseparability and status of SEs and
proto-experiences (PEs) are described: (1) superposition-based (hypothesis H1), (2) superposition-then-integration-based (H2), and (3) integration-based (H3), where superposition is not required.
In H1 , the mental aspect of the state of each fundamental entity
(fermion or boson) or a composite inert matter is the carrier of superim-
posed potential SEs/PEs.3 In H2 , the mental aspect of the state of each
fundamental entity and inert matter is the carrier of superimposed poten-
tial PEs (not SEs); these PEs are integrated by neural-Darwinian processes
(co-evolution, co-development, and sensorimotor co-tuning by the evo-
lutionary process of adaptation and natural selection). There is a PE
attached to every level of evolution (such as atomic-PE, molecular-PE,
genetic-PE, bacterium-PE, neural-PE, and neural-net-PE). In H3 , for
example, a string has its own string-PE; a physical entity is not a car-
rier of PE(s) in superposed form as it is in H2 ; rather its state has two
inseparable aspects. H3 is a dual-aspect panpsychism because the mental
aspect of the entity-state is in all entities at all levels, even though psyche
(conscious SE) only emerges when PEs are integrated at human/animal
level. These two aspects of the state of various relevant entities for
brain-mind and/or other systems are rigorously integrated via neural
Darwinism.
In H1 , a specific SE arises (or is realized) in a neural-net as follows:
(1) there exists a virtual reservoir (detailed in Vimal 2008b, 2010c) that
stores all possible fundamental potential SEs/PEs. (2) The interaction of

3 In general, PEs are precursors of SEs. In hypothesis H1 , PEs are precursors of SEs in
the sense that PEs are superposed SEs in unexpressed form in the mental aspect of every
entity-state, from which a specific SE is selected via matching and selection process in
brain-environment system. In hypotheses H2 and H3 , PEs are precursors of SEs in the
sense that SEs somehow arise/emerge from PEs.
stimulus-dependent feed-forward and feedback signals in the neural net creates a specific dual-aspect NN-state. (3) The mental aspect of this
specific state is assigned to a specific SE from the virtual reservoir dur-
ing neural Darwinian processes. (4) This specific SE is embedded in the
mental aspect of related NN-state as a memory trace of neural net-PE.
And (5) when a specific stimulus is presented to the NN, the associ-
ated specific SE is selected by the matching and selection processes and
experienced by this NN (that includes self-related areas such as cortical
midline structures, which were studied by Northoff and Bermpohl 2004;
Northoff et al. 2006).
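Read purely as an algorithm, steps (1)-(5) of H1 amount to a matching-and-selection lookup. The sketch below is our schematic rendering of that reading, not Vimal's formalism; the names (virtual_reservoir, embedded_traces) are hypothetical placeholders:

```python
# Schematic of hypothesis H1's matching-and-selection process; illustrative only.

virtual_reservoir = {"redness", "greenness", "sweetness"}   # step (1): all potential SEs

# Steps (3)-(4): SEs assigned to NN-states and embedded as memory traces
# during neural Darwinian processes.
embedded_traces = {"ripe-tomato-signal": "redness",
                   "unripe-tomato-signal": "greenness"}

def experience(feedforward_signal: str, ingredients_ok: bool):
    """Step (5): match the stimulus-dependent feed-forward signal against the
    embedded traces; the matching SE is selected and experienced, provided the
    essential ingredients (wakefulness, reentry, attention, ...) are satisfied."""
    if not ingredients_ok:
        return None
    se = embedded_traces.get(feedforward_signal)        # matching
    return se if se in virtual_reservoir else None      # selection

print(experience("ripe-tomato-signal", ingredients_ok=True))   # -> redness
```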
For example, when we look at a red ball, it generates a state/
representation in the brain, which is called a redness-related brain state;
this state has two inseparable aspects: a mental and a physical aspect. Our
subjective color experience is redness, the mental aspect of the NN-state.
The red ball also activates a brain area called visual V4/V8/VO color
area; this structure and other related structures (such as the self-related
cortical midline structures) form an NN that has related activities such
as neuronal firing that we can measure using functional MRI. The phys-
ical aspect of this NN-state consists of the NN and its activities. These
two aspects are inseparable in dual-aspect monism. Here, the substance is just a single entity-state (the NN-state), which justifies the term "monism"; however, there are two inseparable aspects/properties, which justifies the term "dual-aspect".
In hypotheses H2 and H3 , a specific SE emerges mysteriously in an
NN from the interaction of its constituent neural-PEs, such as in feed-
forward stimulus-dependent neural signals and fronto-parietal feedback
attentional signals. In all hypotheses, a specific SE is realized and reported
when essential ingredients of SEs (such as wakefulness, reentry, attention,
working memory, and so on) are satisfied.

5.2.2.2 Dual-mode in DAM In (Vimal 2010c), the dual-mode


concept4 is explicitly incorporated in dual-aspect monism. The two
modes are called non-tilde and tilde modes:
1. The non-tilde mode is the cognitive nearest past approaching towards the present; this is because memory traces (which contain past information) are stored in the feedback system and are involved in the matching process (they match with stimulus-dependent feed-forward signals).
In the DAMv framework, the state of each entity has two inseparable
(mental and physical) aspects. Therefore, the NN-state of cognition

4 The dual-mode concept is derived from thermofield dissipative quantum brain dynamics
(Globus 2006; Vitiello 1995).
(memory and attention)-related feedback signal in a NN of the brain
has inseparable mental and physical aspects.
2. The tilde mode is the nearest future approaching towards the present and is an entropy-reversed representation of the non-tilde mode.5 This is because
the immediate future is related to the feed-forward signals due to
external environmental input and/or internal endogenous input. The
NN-state of feed-forward signals has its inseparable mental and physical
aspects.
The physical aspect (P) of the state related to the non-tilde mode is
matched with the physical aspect of the state related to the tilde mode
(P-P matching) and/or the mental aspect (M) of the state related to the
non-tilde mode is matched with the mental aspect of the state related
to the tilde mode (M-M matching). In other words, there is no cross-
matching/cross-interaction (such as M-P or P-M), and hence there is
no category mistake. As mentioned before, mind and matter are of two
different categories and one cannot arise from the other; mind and matter
cannot interact with each other; a mental entity has to interact with
another mental entity but never with a physical entity and vice versa;
cross-interaction is prohibited, otherwise we make a massive category
mistake.
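The prohibition on cross-interaction can be pictured as a typing rule: M-M and P-P interactions type-check, while M-P and P-M do not. A minimal sketch of this reading (our illustration; the class names are hypothetical):

```python
# The two aspects as distinct types: same-aspect matching type-checks,
# cross-aspect matching surfaces as the "category mistake" (a TypeError).

class Mental: ...
class Physical: ...

def match(a, b) -> bool:
    """M-M and P-P matching are allowed; M-P and P-M are prohibited."""
    if type(a) is not type(b):
        raise TypeError("category mistake: cross-aspect interaction prohibited")
    return True

match(Mental(), Mental())        # M-M matching: allowed
match(Physical(), Physical())    # P-P matching: allowed (e.g., evidence from physics)
# match(Mental(), Physical())    # would raise TypeError: the category mistake
```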
Interactive substance dualism (where mind and matter interact), mate-
rialism (mind arises from matter), and idealism (matter arises from
mind)6 make category mistakes and hence should be rejected. This is
because we do not have scientific evidence for M-P or P-M. However,
we have evidence for P-P from physics, which implies that same-same
interactions cannot be rejected. If we find scientific evidence for cross-
interaction M-P or P-M, then we can reject the categorization of all
entities into two categories (as needed in materialism or idealism, to
avoid category mistakes). In that case, we would not reject materialism,
idealism, or substance dualism based on the category mistake argument.
If we cannot reject the doctrine of category mistake, then it clearly supports
only dual-aspect monism and its variations such as the DAMv and triple
aspect monism frameworks.
Addressing biological structure and function and connecting their prop-
erties, there are many neuroscience models containing five major sub-
pathways of two major pathways (stimulus-dependent feed-forward and
cognitive feedback pathways):

5 Entropy is related to time.


6 There is no category mistake if idealism implies the emergence of appearance (mental
entity) of matter from mind. However, if the matter-in-itself is a real matter and
assuming it emerges from mind or is a congealed mind, it is indeed a category mistake.
1. The classical axonal-dendritic neuro-computation (Steriade et al. 1993; Crick and Koch 1998; Damasio 1999; Litt et al. 2006), Neural Darwinism (Edelman 1993, 2003), and the consciousness electromagnetic
information field (CEMI field) theory (McFadden 2002a, 2002b, 2006;
see also Lehmann, this volume, Chapter 6),
2. the quantum dendritic-dendritic sub-pathway for quantum-
computation (Hameroff and Penrose 1998) and quantum coherence
in the K+ ion channels (Bernroider and Roy 2005),
3. astro-glia-neuronal transmission (Pereira Jr. 2007),
4. the sub-pathway related to extracellular fields, gaseous diffusion (Poz-
nanski 2002), or global volume transmission in the gray matter as fields
of neural activity (Poznanski 2009); and the sub-pathway related to
local extrasynaptic signaling between fine distal dendrites of cortical
neurons (Poznanski 2009), and
5. the sub-pathway related to information transmission via soliton prop-
agation (Davia 2006; Vimal and Davia 2008).
Furthermore, to link structure and function with experiences, there are
two types of matching mechanisms in the DAMv framework: (1) the
matching mechanism for the quantum dendritic-dendritic MT path-
way, and (2) the matching mechanism for classical pathways. In other
words, we propose that (a) the quantum conjugate matching between expe-
riences in the mental aspect of the NN-state in tilde mode and that of
the NN-state in non-tilde mode is related mostly to the mental aspect
of the NN-state in the quantum MT-dendritic-web, namely (2). And
(b) the classical matching between experiences in the mental aspect of the
NN-state in tilde mode and that of the NN-state in non-tilde mode is
related to the mental aspect of the NN-state in remaining non-quantum
pathways, namely (1) and (3)–(5). Similarly, the physical aspects are
matched.
In all cases, a specific SE is selected (i) when the tilde mode (the
physical and mental aspect of NN-state related to feed forward input
signals) interacts with the non-tilde mode (the physical and mental aspect
of NN-state related to cognitive feedback signals) to match for a specific
SE, and (ii) when the necessary ingredients of SEs are satisfied. When
the match is made between the two modes, the world-presence (Now) is
disclosed; its content is the SE of the subject (self), the SE of objects,
and the content of SEs. The physical aspects in the tilde mode and that
in the non-tilde mode are matched to link structure with function, whereas
the mental aspects in the tilde mode and that in the non-tilde mode
are matched to link experience with structure and function. However, if
physical aspects are matched, mental aspects will be automatically and
appropriately matched and vice versa, because of the doctrine of inseparability of mental and physical aspects.

5.2.2.3 The concept of varying degrees of the dominance of aspects


depending on the levels of entities in DAM with dual-mode We can intro-
duce the third essential component of our framework, namely the concept
of varying degrees of dominance of aspects depending on the levels of
entities in dual-aspect monism (Vimal 2008b) with the dual-mode (Vimal
2010c) framework. The combination of all these three essential compo-
nents is called the DAMv framework.7 For example, in an inert entity,
such as a rock, the physical aspect is dominant from the objective third-
person perspective, while the mental aspect appears latent (we really do
not know, because one would have to be an inert-entity/rock to know
its subjective first person perspective). When we are awake and con-
scious, both aspects are equally dominant. At the quantum level, the
physical aspect is dominant and the mental aspect is latent, similar to
classical inert objects. By the term latent, we mean that the aspect is
hidden/unexpressed/un-manifested and will re-appear when appropriate
conditions are satisfied.
Let us start examining aspects with respect to the mind-independent
reality (MIR), from humans to classical inert entities and to quantum
entities. As per Kant (1929), the thing-in-itself (in MIR) is unknown; we
know only its appearance (in the conventional mind-dependent reality:
cMDR). However, as per neo-Kantians, since the mind is also a product
of nature, the mind must be telling us something about MIR and the
human mind is the only vehicle to know MIR. If we assume that the
state of an Entity-in-Itself (MIR) has inseparable double/dual (men-
tal and physical) aspects, then the state of the human-in-itself has a
physical aspect (such as the body-brain system and its activities) and a
mental aspect (such as SEs, intentions, self, attention, and other cogni-
tive functions). The state of a being in the animal kingdom, such as a
bird-in-itself, has a physical aspect (such as the body-brain system and
its activities), but its mental aspect seems to be of lower degree compared
to humans. The state of a plant has a physical aspect, such as its roots to
branches and respective activities, and a mental aspect in terms of adaptive
functions; it is unclear if a plant has experiences, a self, attention, and

7 The DAMv framework was discussed in detail in (Vimal 2008b, 2010c) and was elabo-
rated further in (Bruzzo and Vimal 2007; Caponigro et al. 2010; Caponigro and Vimal
2010; MacGregor and Vimal 2008; Vimal 2008a, 2009a, 2009b, 2009c, 2009d, 2010a,
2010b, 2010d, 2010e, 2010f, 2010g; Vimal and Davia 2010).
other human-like cognitions. The states of dead bodies (of humans, animals, birds, and plants) and inert entities (such as cars, rocks, buildings, roads, bridges, water, air, fire, Sun, Moon, planets, galaxies, and so on) and other classical macro entities and micro entities (such as elementary particles) have a dominant physical aspect and a latent mental aspect.
When we move on to quantum entities, the dominance of aspects needs further clarification: we are puzzled about the third-person perspective on them, as we are unable to visualize them and must depend on our models and indirect effects to know about them. We see quantum effects, such as non-local effects (the EPR hypothesis; Einstein et al. 1935), proved in Aspect's experiments (Aspect 1999). These results allow a description in terms
of probabilities/potentialities. These are mind-like effects (Stapp 2009a,
2009b, 2001) from the objective third-person perspective. Furthermore,
we will never know what quantum entities experience; so, the mental
aspect of a state of a quantum entity is hidden. Therefore, we propose
that the state of a quantum entity has a dominant physical aspect and a
latent mental aspect. However, the quantum mental aspect is not like a
human mind; rather, the quantum mind-like aspect has to co-evolve with
its inseparable physical aspect over billions of years and the end product is
the human mind (mental aspect) and inseparable human brain (physical
aspect), respectively.
This concept of varying degrees of the dominance of aspects depending on
the levels of entities is introduced to encompass most views. For example:
(1) in materialism, matter is the fundamental entity and mind arises from
matter. This can be re-framed by considering the state of the fundamen-
tal entity in materialism as a dual-aspect entity with dominant physical
aspect and latent mental aspect. (2) In interactive substance dualism, mind
and matter are on equal footing, they can independently exist, but they
can also interact. This can be re-framed as: the state of mental entity
has dominant mental aspect and latent physical aspect, and that of mate-
rial entity has dominant physical aspect and latent mental aspect. (3) In
idealism, consciousness/mind is the fundamental reality, and matter (i.e.,
matter-in-itself in addition to its appearances) emerges from it. This can
be re-framed, as the state of the fundamental entity (in idealism) is a dual-
aspect entity with dominant mental aspect and latent physical aspect; the
matter-in-itself arises from the physical aspect. Thus, the DAMv frame-
work encompasses and bridges most views, and hence it is closer to being a general framework.

5.2.2.4 The evolution of universe in the DAMv framework The


evolution of universe in the DAMv framework (Vimal 2008b, 2010c)
is the co-evolution of the physical and mental aspects of the states of
Emergence in dual-aspect monism 159

the universe starting from the physical and mental aspect of the state
of quantum empty-space at the Big Bang to finally the physical and
mental aspect of the states of brain-mind over 13.72 billion years. It
can be summarized as: [Dual-aspect fundamental primal entity (such as
unmanifested state of Brahman, sunyata, quantum empty-space/void at
the ground state of quantum field with minimum energy, or Implicate
Order: same entity with different names)] [Quantum fluctuation in
the physical/mental aspect of the unmanifested state of primal entity]
Big Bang [Very early dual-aspect universe (Planck epoch, Grand
unification epoch, Electroweak epoch: Inflationary epoch and Baryoge-
nesis): dual-aspect universe with dual-aspect unified field dual-aspect
four-fundamental forces/fields (gravity as curvature of space, electro-
magnetic, weak and strong) via inflation in dual-aspect space-time con-
tinuum] [Early dual-aspect universe (supersymmetry breaking, Quark
epoch, Hadron epoch, Lepton epoch, Photon epoch: Nucleosynthesis,
Matter domination, Recombination, Dark ages): Dual-aspect funda-
mental forces/fields, elementary particles (fermions and bosons), and
antiparticles (anti-fermions) in dual-aspect space-time continuum]
[Dual-aspect Structure formation (Reionization, Formation of stars, For-
mation of galaxies, Formation of groups, clusters and superclusters, For-
mation of our Solar System, Todays Universe): Dual-aspect matter
(fermions and composites, galaxies, stars, planets, earth, and so on),
bosons, and fields and dual-aspect life and brain-states (experiential
and functional consciousness including thoughts and other cognition as
the mental aspect (Vimal 2009b, 2010d), and NNs and electrochem-
ical activities as the physical aspect) in dual-aspect space-time contin-
uum] [Ultimate fate of the dual-aspect universe: Big Freeze, Big Crunch,
Big Rip, Vacuum Metastability Event, and Heat Death OR dual-aspect
Flat Universe (Krauss 2012)]. In the DAMv framework, the state of the
dual-aspect unified field has the inseparable mental and physical aspects,
which co-evolved and co-developed eventually over 13.72 billion years
(Krauss 2012) to our mental and physical aspects of brain-state. The
mental aspect was latent until life appeared; then its degrees of dom-
inance increased from inert matter to plant to animal to human; for
awake, conscious, active humans, both aspects are equally dominant; for
inert entities, the mental aspect is latent and physical aspect is dominant.
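As a toy rendering of these varying degrees of dominance, one might tabulate illustrative values per level; the numbers below are invented for illustration and are not from the text:

```python
# Toy rendering of "varying degrees of dominance of aspects by level";
# the numeric degrees are invented for illustration, not from the text.
dominance = {
    # level: {"physical": ..., "mental": ...}, degrees in [0, 1]
    "quantum entity": {"physical": 1.0, "mental": 0.0},   # mental aspect latent
    "inert matter":   {"physical": 1.0, "mental": 0.0},   # latent
    "plant":          {"physical": 0.9, "mental": 0.1},
    "animal":         {"physical": 0.7, "mental": 0.3},
    "awake human":    {"physical": 0.5, "mental": 0.5},   # equally dominant
}

def latent(level: str) -> bool:
    """An aspect is 'latent' when its degree of dominance is (near) zero."""
    return dominance[level]["mental"] < 0.05

print(latent("inert matter"), latent("awake human"))   # True False
```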

5.2.2.5 Comparisons with other frameworks The DAMv frame-


work is consistent, to a certain extent, with other dual-aspect views such
as (1) reflexive monism (Velmans 2008), (2) retinoid system (Trehub
2007), and (3) triple-aspect monism (Pereira Jr., this volume, Chapter
10).
According to Velmans:

Reflexive Monism [RM] is a dual-aspect theory . . . which argues that the one
basic stuff of which the universe is composed has the potential to manifest both
physically and as conscious experience. In its evolution from some primal undif-
ferentiated state, the universe differentiates into distinguishable physical enti-
ties, at least some of which have the potential for conscious experience, such as
human beings . . . the human mind appears to have both exterior (physical) and
interior (conscious experiential) aspects . . . According to RM . . . conscious states
and their neural correlates are equally basic features of the mind itself. . . . the
reflexive model also makes the strong claim that, insofar as experiences are any-
where, they are roughly where they seem to be. . . . representations in the mind/brain
have two (mental and physical) aspects, whose apparent form is dependent on
the perspective from which they are viewed. (Velmans 2008)

In my view, the reflexive monism framework needs to address a few explanatory-type problems: (1) What is the mechanism that differentiates the presumed primal undifferentiated state of the universe into
distinguishable physical entities, at least some of which have the potential
for conscious experience, such as human beings? (2) What is so special
about some entities that become conscious? (3) How can mind (a mental
entity) have two aspects: exterior (physical) and interior (conscious expe-
riential) aspects, that is, how can a mental entity have a physical aspect?
Is this because the third-person perspective (the physical aspect) is also
a minds construct? (4) How can the objects of experiences are roughly
where they seem to be whereas the process of experiencing is in the NN of
brain? One could argue that SEs, such as redness, belong to the subject
(in her/his subjective first person perspective) and is the function of the
triad: brain, body, and environment; otherwise, achromats should also
be able to experience redness if redness only belonged to external objects
such as the ripe tomato.
In the DAMv framework, the SE of a 3D phenomenal world can be nothing
more than the mental aspect of a brain-state/representation that must
be inside the brain. However, its physical aspect can consist of (1) the
related NN and its activities that are inside the brain, (2) the body, and
(3) the environment. Moreover, the aspects of the brain state and that
of the 3D world state are tuned (Vimal 2010c). In other words, DAMv,
like reflexive monism, accepts (1) the world appearance-reality (cMDR-
MIR) distinction and (2) that conscious appearances (SEs) really are
(roughly) how they seem to be. The term appearance is in our daily
cMDR; and the term reality is the thing-in-itself or MIR that is either
unknown as per Kant or partly known via cMDR because the mind is
also a product of nature, and hence it must be telling us at least partly
about the thing-in-itself.
In reflexive monism, perceptual projection is a psychological effect produced by unconscious perceptual processing (Velmans 2000, p. 115).
This is the mental aspect of a related brain state in the DAMv framework.
The latter incorporates some of the features of both reflexive monism and
biological naturalism (non-reductive or emergent forms of physicalism:
Searle 2007; Velmans 2008). In the DAMv framework, the real skull and
its (tactile, visual image in a mirror) appearance are between the men-
tal aspect of brain-state and the psychologically projected phenomenal
world.
The Self, in the DAMv framework, is the mental aspect of the self-
related NN-state; its physical aspect is the self-related NN and the related
activities. The Self is inside the brain where it roughly seems to be. The
NN for protoself, core self, and autobiographical self are discussed in
(Damasio 2010) and later.
The DAMv framework is complementary to the framework of self-
organization-based autogenesis of the self (Schwalbe 1991), which elabo-
rates in detail the autogenesis of the physical aspect related to conscious-
ness and self, using anti-reductionistic materialism. Schwalbe (1991)
proposes four stages of self-organization for the development of the phys-
ical aspect for consciousness and the self: (1) self-organization of neural
networks (NNs), (2) the selective capture of information by the body,
(3) the organization of impulses by imagery, and (4) the organization
of imagery by language. Since the mental aspect is inseparable from its
physical aspect in the DAMv framework, the autogenesis of the mental
aspect is completed when the autogenesis of the related physical aspect
is completed, and vice versa.
The DAMv framework is also complementary to the neuroscience of
consciousness approached from the mind component such as: (1) Global
Workspace Theory (Dehaene et al. 1998; Baars 2005) that proposes
massive cross-communication of various components of mind process
and highly distributed brain process underlying consciousness; (2) Neu-
ral Darwinism (Edelman 1993) that proposes selection and reentrant
signaling in higher brain function based on the theory of neuronal group
selection for integration of cortical function, sensorimotor control, and
perceptually based behavior; and (3) the framework of neural correlates
of consciousness that proposes "a coherent scheme for explaining the neural correlates of (visual) consciousness [NCC] in terms of competing cellular assemblies" (Crick and Koch 2003).
The DAMv framework is also complementary to the neuroscience
of consciousness approached from the self-component and the mind-brain
equivalence hypothesis (Damasio 2010) that proposes (1) the two stages
of evolutionary development of the Self: the self-as-knower (I) and
the self-as-object (me) and (2) the three steps of the Self-as-knower:
protoself, core self, and autobiographical self.
According to Damasio:

two stages of evolutionary development of the self, the self-as-knower having had its origin in the self-as-object . . . James thought that the self-as-object, the material me, was the sum total of all that a man could call his [personal and related entities] . . . There is no dichotomy between self-as-object and self-as-knower; there is, rather, a continuity and progression. The self-as-knower is grounded on the self-as-object . . . In the perspective of evolution and in the perspective of one's life history, the knower came in steps: the protoself and its primordial feelings; the action-driven core self; and finally the autobiographical self, which incorporates social and spiritual dimensions. (Damasio 2010, pp. 9–10)

Damasio elaborated in detail the physical aspect of the three steps of the self-as-knower (Damasio 2010; see also Table 5.1): (1) protoself (gener-
ated in the brain-stem for the stable aspect of the organism with primor-
dial feelings such as hunger, thirst, hot, cold, pain, pleasure, and fear;
it is independent of the organism-environment interaction); (2) core self
(generated when the protoself is modified by an interaction between the
organism and an object and when, as a result, the images of the object
are also modified: Damasio 2010, p. 181; it involves the feeling of
knowing the object, its saliency, and a sense of ownership: Damasio
2010, p. 203); and (3) autobiographical self (occurs when objects in ones
biography generate pulses of core self that are, subsequently, momentar-
ily linked in a large-scale coherent pattern (Damasio 2010, p. 181),
allowing an interaction with multiple objects).
When the protoself interacts with an object, both the organism and its primordial feelings are modified, thus creating a core self with (1) the feelings of knowing, which result in the saliency of the object and ownership/agency, and (2) the first-person perspective (Damasio 2010, p. 206). The auto-
biographical self is constructed as follows: (a) past biographical memories
(total sum of life experiences, including future plans), individual or in
group, are retrieved and assembled together so that each can be treated
as an individual object. (b) Each of these biographical objects (and/or
current external multiple objects) is allowed to interact and modify the
protoself to make an object-image conscious, which (c) then creates a
core self pulse with the respective feelings of knowing and consequent
object saliency via the core self mechanism. (d) Many such core-self
pulses interact and the results are held transiently in a coherent pattern.
A coordinating mechanism coordinates steps (a), (b), and (d) to construct the autobiographical self (Damasio 2010, pp. 212–213). Qualia are a part of the self-process (Damasio 2010, p. 262).
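Steps (a)-(d) read like a processing pipeline. The following sketch is our schematic paraphrase of that reading (the function names are hypothetical), not Damasio's model:

```python
# Schematic pipeline for the autobiographical self, paraphrasing steps (a)-(d);
# illustrative only, with hypothetical names.

def core_self_pulse(protoself_state: dict, obj: str) -> dict:
    """Steps (b)-(c): an object interacts with and modifies the protoself,
    yielding a core-self pulse with a feeling of knowing and object saliency."""
    return {"object": obj, "feeling_of_knowing": True, "salient": True,
            "modified_protoself": dict(protoself_state, last_object=obj)}

def autobiographical_self(biographical_memories: list, protoself_state: dict) -> list:
    """Step (a): retrieve biographical memories, each treated as an object;
    step (d): hold the resulting core-self pulses together in one coherent
    pattern (a sorted list stands in for the coordinating mechanism)."""
    pulses = [core_self_pulse(protoself_state, m) for m in biographical_memories]
    return sorted(pulses, key=lambda p: p["object"])

pattern = autobiographical_self(["childhood home", "first job"], {"feelings": "primordial"})
```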
Table 5.1 Status of the three steps of self-as-knower under various conditions; see also (Damasio 2010, pp. 225–240).

Natural and Neurological Conditions   | Protoself                                    | Core Self                  | Autobiographical Self
Wakefulness                           | Normal                                       | Normal                     | Normal
Dream (REM)                           | Not normal                                   | Not normal                 | Not normal
Dreamless sleep (non-REM)             | Suspended, but brainstem is still active (8) | Suspended                  | Suspended
Revelation, samadhi, and mystic-state | Transcendental                               | Transcendental             | Transcendental (9)
Near-death & out-of-body experiences  | Compromised/altered-state                    | Compromised/altered-state  | Compromised/altered-state
Anesthesia, superficial level         | Intact                                       | Intact                     | Anesthetized
Anesthesia, deepest level             | Anesthetized                                 | Anesthetized               | Anesthetized
Alzheimer's disease, initial stage    | Intact                                       | Intact                     | Compromised
Alzheimer's disease, mid stage        | Intact                                       | Compromised                | Compromised
Alzheimer's disease, final stage      | Compromised                                  | Compromised                | Compromised
Epilepsy                              | Intact                                       | Intact                     | Compromised
Locked-in syndrome                    | Intact                                       | Intact                     | Compromised
Vegetative state                      | Compromised                                  | Compromised/dysfunctional  | Compromised
Coma                                  | Compromised                                  | Compromised/dysfunctional  | Compromised
Death                                 | Dead                                         | Dead                       | Dead

The neural correlates of protoself include: (a) area postrema (critical


homeostatic integration center for humoral and neural signals, toxin
detector) and the nucleus tractus solitarius (for body-state management
and primordial feelings) of the medulla, parabrachial nucleus (for body-
state management and primordial feelings) of pons, periaqueductal gray
(for life regulation and feelings) and superior colliculus (deep layers:
for coordination: Hartline et al. 1995) of midbrain, and hypothalamus
for interoceptive integration at brain-stem level; and (b) insular cortex

8 During non-REM (slow wave) sleep, the inferior frontal gyrus, the parahippocampal
gyrus, the precuneus and the posterior cingulate cortex, as well as the brain stem and
cerebellum are active (Dang-Vu et al. 2008).
9 Prophets/rishis/seers usually have three kinds of transcendental experiences during reve-
lation/samadhi/mystic states with altered activities in various brain-areas: bliss, inner light
perception, and the unification of subject and objects.
and anterior cingulate cortex for interoceptive integration, and frontal eye fields (Brodmann's area 8) and somatosensory cortices for external sensory portals at the cerebral cortex level (Damasio 2010, pp. 191, 260).
The neural correlates of the core-self include: (a) all brain-stem nuclei
of the protoself, (b) the nucleus pontis oralis and nucleus cuneiform of
the brain-stem reticular formation, (c) intralaminar and other nuclei of
the thalamus, (d) monoaminergic (noradrenergic/norepinephrinergic locus coeruleus, serotonergic raphe, and dopaminergic ventral tegmental) and cholinergic nuclei (Damasio 2010, pp. 192–193, 248).
The neural correlates of autobiographical-self include: (a) all structures (in
the brain stem, thalamus, and cerebral cortex) required for the core self,
and (b) structures involved in coordinating mechanisms, such as (i) pos-
teromedial cortices (posterior cingulate cortex, retrosplenial cortex, and
precuneus; Brodmann areas 23a/b, 29, 30, 31, and 7m), (ii) thalamus
and associated nuclei, (iii) temporoparietal junction, lateral and medial
temporal cortices, lateral parietal cortices, lateral and medial frontal cor-
tices, and posteromedial cortices, (iv) claustrum, (v) thalamus, and so
on (Damasio 2010, pp. 215–224, 248).
Brain stem, thalamus, and cerebral cortex all contribute to the gener-
ation of the triad related to consciousness: wakefulness, mind, and self.
Some of the brain-stem functions can be divided as follows: (1) medulla
for breathing and cardiac function; its destruction leads to death; (2) pons
and mesencephalon (back part) for protoself; their destruction leads to
coma and/or vegetative state; (3) tectum (superior and inferior colli-
culi) for coordination and integration of images; and (4) hypothalamus
for life regulation and wakefulness (Damasio 2010, p. 244). The tha-
lamus (1) relays critical information to cerebral cortex, (2) massively
inter-associates cortical information, (3) addresses the major anatomo-
functional bottleneck between a small brain stem and a hugely expanded
cerebral cortex (that forms object-images in detail), by disseminating
brain stem signals to cortex; the cortex in turn funnels signals to brain
stem directly and with the help of subcortical nuclei such as the amygdala and basal ganglia (pp. 250–251), and (4) participates in the coordination necessary for the autobiographical self (pp. 247–251). The cerebral cortex, interacting with the brain stem (for the protoself) and thalamus (for brain-wide recursive integration), (1) constructs the maps that become the mind, (2) helps in generating the core self, and (3) constructs the autobiographical self using memory (pp. 248–249).
As per Damasio, "whenever brains begin to generate primordial feelings, and that could be quite early in evolutionary history, organisms acquire an early form of sentience" (Damasio 2010, p. 26).
One could query precisely how sentience can arise, be acquired, happen, or emerge from non-sentient matter. In other words, he seems to assume that subjective experiences, including the self (protoself, core self, and autobiographical self), somehow emerge from non-mental/non-experiential matter such as the related neural networks and their activities. It is unclear precisely how an experiential entity can emerge from a non-experiential entity, and what the evidence for that mechanism is.
Damasio writes further:

Feeling states first arise from the operation of a few brain-stem nuclei . . . The
signals are not separable from the organism states where they originate. The
ensemble constitutes a dynamic, bonded unit. I hypothesize that this unit enacts a
functional fusion of body states and perceptual states . . . protofeeling . . . . (Dama-
sio 2010, pp. 257–263)

It is unclear if Damasio satisfactorily addressed Levine's explanatory gap problem (Levine 1983) and Feigl's category mistake problem.
Mind/experiences/self and matter/brain are of two different categories;
to generate mind from matter is a category mistake (Feigl 1967; Searle
This query is related to Chalmers' hard problem (Chalmers 1995a), which has not been satisfactorily addressed in the materialist/emergentist framework. If materialism cannot address these problems, then we may need to consider the dual-aspect monism framework as a complement to materialism (Bruzzo and Vimal 2007; Vimal 2008b, 2010c): the
state of a neural network (or of any entity) has two inseparable aspects:
physical (objective third person perspective) and mental (subjective first
person perspective) aspects. This is not inconsistent with Damasio: "The word feelings describes the mental aspect of those [composite neural] states" (Damasio 2010, p. 99). Furthermore,

the mental state/brain state equivalence should be regarded as a useful hypothesis . . . mental events are correlated with brain events . . . Mental states do exert their influence on behavior . . . Once mental states and neural states are regarded as two faces of the same process . . . downward causality is less of a problem. (Damasio 2010, pp. 315–316)

In the framework of Fingelkurts and Fingelkurts (2011), when functionally integrated in healthy subjects, the default-mode network (DMN) persists in an activated state as long as a subject is self-consciously engaged in an active, complex, flexible, and adaptive behavior. Such a mode of
DMN functioning can, therefore, help to integrate self-referential infor-
mation, to facilitate perception and cognition, as well as to provide a
social context or narrative in which events become personally meaning-
ful. The authors further proposed that since the integrity of DMN is
166 Ram L. P. Vimal

increased in schizophrenic patients, who have an exaggerated focus on


self, diminished in children and autistic patients, very low in minimally
conscious patients, extremely minimal during anesthesia, in coma and in
vegetative patients, and absent in brain death (see references therein), one
may conclude that a functionally integrated and intact DMN is indeed
involved in self-consciousness. If this is correct, the self dies with the
death of the brain.
In the DAMv framework, once the necessary conditions for subjective
experiences or consciousness are satisfied, a relevant NN-state (or a brain
process), including the activated state of DMN, is created that has two
inseparable aspects: physical and mental/experiential aspects. The neces-
sary conditions for access (reportable) consciousness are (1) formation
and activation of neural networks, (2) wakefulness, (3) reentrant interac-
tions among neural populations that bind stimulus attributes, (4) fronto-
parietal and thalamic-reticular-nucleus attentional signals that modulate
the stimulus-related feed-forward signal and consciousness, (5) working
memory that retains information for consciousness, (6) stimulus at or
above threshold level, and (7) neural-network PEs that are superposed
SEs embedded in a neural-network. Attention and the ability to report
are not necessary for phenomenal consciousness.
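As a compact restatement of this checklist (our illustration only; the short labels paraphrase conditions (1)-(7) and are not the author's terms):

```python
# Paraphrased checklist of the seven necessary conditions for access
# (reportable) consciousness; the short labels are ours, illustrative only.
REQUIRED = {
    "NN formation/activation",      # (1)
    "wakefulness",                  # (2)
    "reentrant binding",            # (3)
    "attentional modulation",       # (4)
    "working memory",               # (5)
    "suprathreshold stimulus",      # (6)
    "neural-network PEs",           # (7)
}

def access_conscious(satisfied: set) -> bool:
    """All seven conditions must be satisfied for access consciousness."""
    return REQUIRED <= satisfied    # subset test

print(access_conscious(REQUIRED))                    # True
print(access_conscious(REQUIRED - {"wakefulness"}))  # False
```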
Furthermore, the DAMv framework can be considered as complemen-
tary to Maturana-Varela's materialistic biogenic-embodied/embedded-
phenomenal framework of autopoiesis/autonomy (Varela et al. 1974;
Varela 1981; Maturana 2002), radical embodiment and embedded-
subsystems (Thompson et al. 2001), and neurophenomenology (Varela
1996; reviewed in Rudrauf et al. 2003). For example, molecular processes
or states underlying molecular autopoietic systems can be considered as
dual-aspect entities to avoid the problems of materialism. In addition,
the mind/consciousness/self can be considered as the mental aspect of
the state/process whose physical aspect is brain-body-environment.
According to Lutz and Thompson (2003, p. 48),

Whereas neuroscience to-date has focused mainly on the third-person, neurobe-


havioural side of the explanatory gap, leaving the first-person side to psychology
and philosophy, neurophenomenology employs specific first-person methods in
order to generate original first-person data, which can then be used to guide the
study of physiological processes.

The functional aspect of consciousness (such as detection and


discrimination of color; Vimal 2009b, 2010d) can be somehow spon-
taneously created in the materialistic biogenic-embodied/embedded-
phenomenal framework of autopoiesis/autonomy-radical embodiment-
neurophenomenology (discussed in Varela et al. 1974; Varela 1981,
1996; Thompson et al. 2001; Maturana 2002, and elaborated on further
in Lyon 2004 and Rudrauf et al. 2003). However, it is unclear how SEs
can arise from non-experiential matter.
The framework of Fingelkurts et al. (2010a) is ontological monism.
They speak about an emergentist monism which states that the rela-
tionship between the mental and the physical (neurophysiological) is
hierarchical and metastable (Fingelkurts et al. 2010c). According to this
view, emergent qualities (conscious mind) necessarily manifest them-
selves when, and only when, appropriate conditions are obtained at the
more basic level (brain). More precisely, within the context of the brain-
mind problem conceptualized within their Operational Architectonics
framework (Fingelkurts and Fingelkurts 2001, 2004, 2005; Fingelkurts
et al. 2009, 2010c), mental spatial-temporal patterns should be consid-
ered supervenient on their lower-order spatial-temporal patterns in the
operational level of brain organization. Emergentism, on the other hand,
usually allows for changes of higher-order phenomena that need not
possess one-on-one, direct linkage with changes at any underlying lower-
order levels. Thus, according to Fingelkurts et al. (2010c) the mental
is ontologically dependent on, yet not reducible to, the physical (neuro-
physiological) level of brain organization. However, it is reducible to the
operational level, which is equivalent to nested hierarchically organized
local electromagnetic brain fields and is constituent of the phenomenal
level (Fingelkurts et al. 2010c; see also brain electrical microstates in
Lehmann, this volume, Chapter 6).
In my view, the Emergentism and Operational Architectonics frameworks are based on the mysterious and problematic materialistic framework, and hence have the explanatory gap problem and make a category mistake: how can experiences emerge from non-experiential matter?
I argue that (1) the DAMv framework has fewer problems (such as
the justifiable brute fact of dual-aspect) compared to other views, and
(2) addresses problems not resolved by the other frameworks, including
the explanatory gap in materialism.
According to Nani and Cavanna (2011, Section 4), "Our thesis has been that the phenomenal transform [qualia], the set of discriminations, is entailed by that neural activity. It is not caused by that activity but it is, rather, a simultaneous property of that activity (Edelman 2004)."
Although Edelman's thesis is based on materialism, the second sentence can also be interpreted in terms of dual-aspect monism, as the simultaneous properties of that activity are the inseparable mental and physical aspects of the same NN-state. Moreover, Nani and Cavanna (2011) commented:
If a certain property is necessarily implied by certain physical processes (in such a way that the latter could not bring about the same effect without the former, as Edelman claims), then either that very property and those physical processes are different aspects of the same entity, or that very property is part of the co-occurring physical processes. (Nani and Cavanna 2011, Section 4, italics are mine)

That italicized statement is again consistent with the DAMv framework. In other words, there is no cross-causation: mind/consciousness does not cause physical neural activities, and vice versa; there is no category mistake, because both physical and mental are inseparable aspects of the same NN-state. The same-on-same (mental-on-mental or physical-on-physical) causation is allowed, as it does not make a category mistake, but cross-causation is not allowed because it makes this mistake. Therefore, consciousness, via mental downward causation, can cause the mental aspect of a specific behavior, which is then automatically and faithfully transformed into the related physical aspect of behavior because of the doctrine of inseparability.
Pan-protopsychism is a view that proposes that: (1) consciousness or its proto-conscious precursors are somehow built into the structure of the universe; for example, pan-experiential qualities are embedded in Planck scale geometry (10⁻³³ cm, 10⁻⁴³ s, the lowest level of reality) as discrete information states, along with other entities that give rise to the particles, energy[/mass], charge, and/or spin of the classical world (Hameroff and Powell 2009); (2) objective reduction (OR) occurs as an actual event in a medium of a basic field of proto-conscious experience; and (3) OR events are conscious and convey experiential qualities and conscious choice (Hameroff and Powell 2009).
According to Hameroff and Powell (2009), proto-conscious experience is the fundamental property of physical reality, which is accessible to a quantum process (such as Orchestrated OR: Orch OR) associated with brain activity (Hameroff 1998). Orch OR theory proposes: (1) an objective critical threshold for quantum state reduction, which reduces the quantum computations to classical solutions, connecting brain functions to Planck scale fundamental quantum spacetime geometry; (2) that:

when enough entangled tubulins are superpositioned long enough [avoiding decoherence] to reach OR threshold (by E = ℏ/t, where E is the magnitude of superposition/separation, ℏ is Planck's constant over 2π, and t is the time until reduction), a conscious event (Whiteheadian occasion of experience) occurs; (Hameroff and Powell 2009)

and (3) that neuronal-level functions (such as axonal firings, synaptic transmissions, and dendritic synchrony) orchestrate quantum computations in the brain's microtubule network.
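To make the quoted threshold criterion concrete, it can be rearranged as follows (the rearrangement and the numerical value below are my own arithmetic from the figures quoted in this section, not numbers given by Hameroff and Powell):

\[
E = \frac{\hbar}{t} \quad\Longleftrightarrow\quad t = \frac{\hbar}{E},
\]

so the reported rate of about 40 conscious moments per second corresponds to \( t = 1/40\ \mathrm{s} = 25\ \mathrm{ms} \), and hence to a superposition magnitude of roughly \( E = (1.055 \times 10^{-34}\ \mathrm{J\,s})/(0.025\ \mathrm{s}) \approx 4 \times 10^{-33}\ \mathrm{J} \).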
Furthermore, Hameroff and Powell (2009) defend Neutral Monism, claiming that matter and mind arise from or reduce to a neutral third entity, quantum spacetime geometry (the fine-grained structure of the universe), and that Orch OR is the psycho-physical bridge between brain processes (regulating consciousness) and pan-experiential quantum spacetime geometry (the repository of protoconscious experience). A neutral entity is intrinsically neither mental nor physical (Stubenberg 2010). In addition, Orch OR events are: (1) transitions in spacetime geometry; (2) equivalent to Whitehead's occasions of experience (a moment of conscious experience, a quantum of consciousness, corresponding to Leibniz's monads, the Buddhist-Sarvastivadins' transient conscious moments, or James's specious moments); and (3) correlated with EEG gamma synchrony at 40 Hz. Moreover, Orch OR is the conscious agent, which operates in microtubules within γ-synchronized dendrites, generating 40 conscious moments per second. Consciousness is a sequence of transitions, of ripples in fundamental spacetime geometry, connected to the brain through Orch OR (Hameroff and Powell 2009).
However, this view has explanatory gap problems: how can the quantum spacetime geometry be simultaneously a pan-experiential and a neutral entity, and how can mind and matter arise from or reduce to the neutral entity? It seems that Hameroff and Powell (2009, see their Fig. 1) propose that mind and matter arise from the neutral entity (quantum spacetime geometry) by means of OR and decoherence measurement, respectively. However, it is still unclear where mind and matter come from, and how matter arises by means of decoherence measurement.
Koch (2012) proposes Leibniz's monads (see note 10) as an alternative to emergence and reductionism. He now believes that "consciousness is a fundamental, an elementary, property of living matter. It can't be derived from anything else; it is a simple substance, in Leibniz's words. . . . Any conscious state is a monad, a unit; it cannot be subdivided into components that are experienced independently" (pp. 119, 125). In the DAMv framework, to minimize problems, a monad (any conscious state) is considered a dual-aspect state.

10 Leibniz's monads and parallel (soul-experience and body-representation) duals (Leibniz 1714) seem to address the problems of Descartes and Spinoza, namely, the problematic interaction between mind and matter arising in Descartes' framework and the lack of individuation (individual creatures as merely accidental) inherent in Spinoza's framework. Monads could be the ultimate elements of the universe, human beings, and/or God. Leibniz's monad could be "absolutely simple, without parts, and hence without extension, shape or divisibility . . . subject to neither generation nor corruption [ . . . ] a monad can only begin by creation and end by annihilation" (Rutherford 1995, pp. 132–133).
As per Sayre, the concept of information provides a primitive for the analysis of both the physical and the mental (Sayre 1976, p. 16). Moreover, Sayre recently proposed that a neutral entity is a mathematical structure such as information (see Stubenberg 2010). Since a neutral entity is intrinsically neither mental nor physical, information may qualify as a neutral entity in Neutral Monism. However, this view has an explanatory gap problem: how can (1) mind (first-person subjective experiences within an entity and by the entity, such as the redness experienced by a trichromat looking at a ripe tomato), (2) matter (objective third-person appearances of the material entity, such as the appearances of related brain areas activated by long-wavelength light reflected from the ripe tomato), and (3) the matter-in-itself (the mind-independent tomato-in-itself and material properties such as mass, charge, and spin of elementary particles) arise from or reduce to this neutral entity?
According to Chalmers, protophenomenal properties are "the intrinsic, nonrelational properties that anchor physical/informational properties . . . the mere instantiation of such a property does not entail experience, but instantiation of numerous such properties could do so jointly" (Chalmers 1996, p. 154). Chalmers (1995a) suggested that information has double aspects (phenomenal/mental and physical aspects), which might be related to psychophysical laws (Chalmers 1995b), but its nature and function are unclear.
Tononi (2004) proposed an information integration theory of con-
sciousness, where consciousness corresponds to the capacity of a system
to integrate information. It is based on two attributes of consciousness:
differentiation (the availability of a very large number of conscious expe-
riences) and integration (the unity of each such experience). More-
over,
the quality of consciousness is determined by the informational relationships
among the elements of a complex, which are specified by the values of effective
information among them. [ . . . ] The theory entails that consciousness is a funda-
mental quantity, that it is graded, that it is present in infants and animals, and that
it should be possible to build conscious artifacts [ . . . ] The effective information
matrix defines the set of informational relationships, or qualia space for each
complex. (Tononi, 2004)

However, it is unclear where experiences come from in qualia space and how a specific experience is matched and selected from innumerable experiences.
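To make the two attributes concrete, the following toy sketch (my illustration; it is not Tononi's Φ measure, which is computed over the minimum-information bipartition of a causal system, but a simple proxy) treats differentiation as the entropy of a system's state repertoire and integration as the mutual information between two fixed halves of the system:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of an empirical distribution over states."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Joint states of a toy 4-element binary system, split into halves A and B.
# Here the halves are perfectly coupled: B always mirrors A.
states = [((0, 0), (0, 0)), ((0, 1), (0, 1)),
          ((1, 0), (1, 0)), ((1, 1), (1, 1))]
a_half = [s[0] for s in states]
b_half = [s[1] for s in states]

differentiation = entropy(states)  # size of the state repertoire, in bits
integration = entropy(a_half) + entropy(b_half) - entropy(states)  # I(A;B)

print(f"differentiation ~ {differentiation:.2f} bits")  # 2.00 bits
print(f"integration     ~ {integration:.2f} bits")      # 2.00 bits (halves coupled)
```

In this caricature, a system whose halves varied independently would score high on differentiation but zero on integration; IIT's claim is that consciousness requires both at once.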
Furthermore, as per Koch (2012),
In our ceaseless quest, Francis and I came upon a much more sophisticated version of dual-aspect theory. At the heart lies the concept of integrated information formulated by Giulio Tononi [p. 124ff] . . . the way in which integrated information is generated determines not only how much consciousness a system has, but also what kind of consciousness it has. Giulio's theory does this by introducing the notion of qualia space, whose dimensionality is identical to the number of different states the system can occupy. [p. 130ff] The theory postulates two sorts of properties in the universe that can't be reduced to each other: the mental and the physical. They are linked by way of a simple yet sophisticated law, the mathematics of integrated information. [p. 130ff] If it [any system: human, animal, or robot] has both differentiated and integrated states of information, it feels like something to be such a system; it has an interior perspective. The complexity and dimensionality of their associated phenomenal experiences might differ vastly. [p. 131ff] The Web may already be sentient. [p. 132ff] By postulating that consciousness is a fundamental feature of the universe, rather than emerging out of simpler elements, integrated information theory is an elaborate version of panpsychism. [p. 132ff] Integrated information is concerned with causal interactions taking place within the system . . . although the outside world will profoundly shape the system's makeup via its evolution [p. 132]. (Koch 2012)

However, it is unclear: (1) if the integrated information theory (IIT) is a version of dual-aspect theory; (2) if information is a neutral entity (neither physical nor mental, as Sayre proposed; see Stubenberg 2010) or a dual-aspect entity (as Chalmers proposed) in the IIT framework; (3) what the relationship between the input and the output of the system (i.e., the relationship between the system and its surrounding environment) might be; (4) how IIT accounts for memory and for planning; (5) if the mental and physical aspects of a conscious state of the brain are inseparable; and (6) if integrated information theory is an elaborate version of panpsychism, how the seven problems of panpsychism (Vimal 2010d) are to be addressed.
The DAMv framework can address the above problems by hypothesizing that a state of a neutral entity has an inseparable double/dual aspect, with the mental and physical aspects latent/hidden. One could try comparing DAMv and neutral monism with eastern systems. There are at least six sub-schools of Vedanta (Radhakrishnan 1960): (1) Advaita (non-dualism, Sankaracharya: 788–820), (2) Viśiṣṭādvaita (qualified non-dualism, Ramanujacharya: 1017–1137) or cit-acit Viśiṣṭādvaita (mind-matter qualified non-dualism, Ramanandacharya: 1400–1476 and Ramabhadracharya: b. 1950), (3) Dvaitadvaita (Nimbarkacharya: 1130–1200), (4) Dvaita (dualism, Madhvacharya: 1238–1317), (5) Shuddhadvaita (pure non-dualism, Vallabhacharya: 1479–1531), and (6) Achintya-Bheda-Abheda (Chaitanya Mahaprabhu: 1486–1534). The DAMv framework is close to (1) cit-acit Viśiṣṭādvaita, where cit (consciousness/mind) and acit (matter) are qualifiers of a nondual entity, and (2) Trika Kashmir Shaivism, where Siva is the mental aspect and Sakti is the physical aspect of the same state of the primal entity (such as Brahm) (Raina Swami Lakshman Joo 1985).
Kashmir Shaivism seems close to neutral monism: Siva (Puruṣa, consciousness, mental aspect) and Sakti (Prakṛti, Nature, matter, physical aspect) are two projected aspects of a third transcendental ground-level entity (Brahm, Mahatripurasundari) (personal communication from S. C. Kak).
The primal neutral entity of Neutral Monism (Stubenberg 2010) might have various names, such as: (1) primal information, (2) the aspectless unmanifested state of Brahman (also called karan (causal) Brahman) of Sankaracharya's Advaita (Radhakrishnan 1960), (3) the Buddhist emptiness (Sunyata) (Nagarjuna and Garfield 1995), (4) Kashmir Shaivism's Mahatripurasundari/Brahm (Raina Swami Lakshman Joo 1985), and (5) the empty space of physics at the ground state of a quantum field (such as the Higgs field, with non-zero strength everywhere) along with quantum fluctuations (Krauss 2012).
The state of the primal entity appears aspectless (or neutral) because its mental and physical aspects are latent. After a cosmic fire (such as the Big Bang), the manifestation of the universe starts from the latent dual-aspect unmanifested state of the primal entity, and then the latent physical and mental aspects gradually change their degree of dominance, depending on the levels of entities, over about 13.72 billion years of co-evolution. Perhaps the physical aspect (matter-in-itself and its appearances, such as the formation of galaxies, stars, and planets) evolved first, and then, after billions of years (perhaps about 542 million years ago, during the Cambrian explosion), the mental aspect (consciousness/experiences) co-evolved in humans/animals. In other words, the mental aspect (from a first-person perspective) becomes evident or dominant in conscious beings after over 13 billion years of co-evolution, rather than being evident before the onset of the universe, when the mental aspect was presumably latent. However, one could argue for a cosmic consciousness, different from our consciousness, which might be the mental aspect of any state of the universe. As there are certainly innumerable states of the universe, cosmic consciousness might vary according to these states.
In our conventional daily mind-dependent reality, Neutral Monism may be unpacked in the DAMv as follows: the state of the apparently aspectless neutral entity (quantum spacetime geometry or information) proposed by Neutral Monism would have both mental and physical aspects latent/hidden. These latent aspects become dominant depending on measurements. If it is a subjective first-person measurement, then the mental aspect of a brain-state shows up as subjective experiences. If it is an objective third-person measurement (such as in fMRI), then the physical aspect of the same brain-state shows up as the appearances of the correlated neural network and its activities.

5.2.3 Realization of potential subjective experiences


If we assume that SEs really pre-exist, the hypothesis H1 of the DAMv framework (Vimal 2008b, 2010c) seems to entail the Type-2 explanatory gap: how is it possible that our subjective experiences (SEs) (such as happiness, sadness, painfulness, and similar SEs) were already present in primal entities in superposed form, whereas there is no shred of evidence that such SEs were conceived at the onset of the universe? To address this gap, we propose that, since there is no evidence that SEs pre-exist in a realized/actualized form, SEs (and all other physical and mental entities of the universe) potentially pre-exist.
It is noted that the pre-existence of realized/actualized SEs is indeed a mystery. However, the pre-existence of potential (or the possibility of) SEs (or any entity) is NOT a mystery. If a tree did not potentially pre-exist in its seed, it would never be realized. The term potential is important in quantum superposition, where all potential SEs are hypothesized to be in superposed form in the mental aspect of the state of each entity. It is a different matter how a potentially pre-existing entity (such as a specific SE) can be actualized, which certainly needs rigorous investigation. One such investigation concerns the matching and selection processes, detailed in Vimal (2008b, 2010c).

5.3 Explanation of the mysterious emergence via the matching and selection mechanisms of the DAMv framework
There are many models for emergence (Broad 1925; McLaughlin 1992;
Bedeau 1997; Freeman 1999; Kim 1999; Chalmers 2006; Fingelkurts
et al. 2010a, 2010c; Freeman and Vitiello 2011). However, SEs emerge
mysteriously in all of them. Emergence can be of two kinds: strong and
weak emergence:

We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain . . . We can say that a high-level phenomenon is weakly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are unexpected given the principles governing the low-level domain . . . My own view is that, relative to the physical domain, there is just one sort of strongly emergent quality, namely, consciousness. (Chalmers 2006)

Weak emergence is compatible with materialism, but strong emergence is not, because SEs cannot be derived from the current laws of physics. The hypothesized emergence of SEs is considered a case of strong emergence. This can be unpacked in terms of matching and selection mechanisms. For example, if a specific SE is strongly emergent from the interaction between (stimulus-dependent or endogenous) feed-forward signals and cognitive feedback signals in a relevant NN, we need appropriate fundamental psychophysical laws (Chalmers 2006). These laws, in the proposed DAMv framework, might be expressed in the matching and selection mechanisms that specify a SE. We conclude that what seems to be a mysterious emergence could be unpacked into the pre-existence of potential properties and the matching and selection mechanisms.
In reductionist approaches, a complex system is considered to be the sum of its parts, or reducible to the interactions of its parts/constituents. Corning (2012) discusses the relationship between reductionism, synergism, holism, self-organization, and emergence, as well as the characteristics of emergence. The common characteristics of emergence are:

(1) radical novelty (features not previously observed in the system); (2) coherence or correlation (meaning integrated wholes that maintain themselves over some period of time); (3) a global or macro level (i.e., there is some property of wholeness); (4) it is the product of a dynamical process (it evolves); and (5) it is ostensive (it can be perceived) . . . The mind is an emergent result of neural activity . . . Emergence requires some form of interaction; it's not simply a matter of scale . . . Emergence does not have logical properties; it cannot be deduced (predicted). (Corning 2012)

Emergence refers to the following:


1. The arising of novel and coherent structures, patterns, and properties during the process of self-organization in complex systems (Goldstein 1999).
2. The higher properties of life are emergent (Wilson 1975).
3. The scientific meaning of emergent, or at least the one I use, assumes
that, while the whole may not be the simple sum of its separate parts,
its behavior can, at least in principle, be understood from the nature
and behavior of its parts plus the knowledge of how all these parts
interact (Crick and Clark 1994).
4. In emergence, interconnected simple units can form complex sys-
tems and give rise to a powerful and integrated whole, without the
need for a central supervision (Rudrauf et al. 2003).
5. Emergence requires that the ultimate physical micro-entities have micro-latent causal powers, which manifest themselves only when the entities are combined in ways that are emergence-engendering, in addition to the micro-manifest powers that account for their behavior in other circumstances (Shoemaker 2002).
Let us first consider the emergence of water from the interaction of hydrogen and oxygen (Vimal 2010g). In the DAMv framework, some of the properties related to the physical aspect of the state of water may be somewhat explained using the reductionistic view, and some using holistic mysterious emergence (Corning 2012). However, how do we explain the mental aspect of the state of this (water) entity? Its liquidness and its appearance are SEs constructed by the mind (constructivism). Emergentists would argue that the doctrine of emergence could explain SEs, but how it could explain them is still a mystery.
In other words, in the DAMv framework, water (with the properties we know of) potentially pre-exists; when hydrogen and oxygen are reacted in a certain proportion under certain conditions, some entity needs to be assigned to the resultant H2O. By a trial-and-error method (rather, a trial-and-success process), evolution, selection, and adaptation assigned water (with the properties we know of) to H2O because water fitted best. This unpacking principle of emergence is based on (1) the potential pre-existence of irreducible entities, (2) the matching of latent properties superposed in the physical and mental aspects of the states of the constituting entities, and then (3) the selection of the best-fitted properties. For example, hydrogen is inflammable and oxygen is a life resource for animals (including humans). One could argue that some of the possible properties of H2O could be: (1) fire-extinguishing (opposite to inflammable) and essential life-supporting non-toxic properties for animals, plus other properties, which belong to water; (2) inflammable and toxic for animals; (3) inflammable; (4) life-supporting and non-toxic but not fire-extinguishing; and so on. Evolution might have tried all of these, but the water-property (1) fitted best and hence was selected and assigned to H2O.
Another example is the following. One might try to unpack the emergence of the SE redness, based on the DAMv framework (Vimal 2008b, 2010c) and using the necessary conditions for SEs (summarized in Section 5.2.2.5), as follows. (1) At the retinal level, (a) the specificity of SEs is higher than that of the mental aspect related to external object-signals, because cone signals are specific for vision only, whereas external signals could be for all senses; (b) information processing is non-conscious and is perhaps for the functional aspect (such as detection and discrimination of wavelengths) of consciousness (Vimal 2009b, 2010d); and (c) SEs do not get actualized, because the retina is not awake: there is no projection of the ascending reticular activating system to the retina, and there is no cognitive re-entrant feedback. (2) The specificity of SEs increases as signals travel from the retina's cones to ganglion cells to the lateral geniculate nucleus to the visual areas V1 to V2 to V4/V8/VO to higher areas. (3) At the V4/V8/VO level, the SE becomes specific to redness, perhaps because (a) V4/V8/VO networks can be awakened by projections of the ascending reticular activating system, and (b) there are re-entrant cognitive feedback signals that interact with feed-forward stimulus-dependent signals. (4) The first feedback entry related to the mental aspect of the V4/V8/VO-NN-state, when neural signals (its physical aspect) re-enter the NN, results in a very faint sensation below threshold level. Repeated re-entry then increases the strength of this sensation, which gets self-transformed into some kind of SE. (5) In the co-evolution of physical and mental aspects, natural selection would have selected redness for long-wavelength light. Therefore, eventually, an experience related to redness is selected for a long-wavelength light reflected from, say, a ripe tomato. (6) However, one could ask further: where does this redness come from? How and why is the initial feeling of sensation generated, and how does the transformation of sensation into an experience occur?
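Step (4) above, in which repeated re-entry strengthens a sub-threshold sensation until it is transformed into an SE, can be caricatured as a simple loop (a sketch only; the initial strength, gain, and threshold values are my illustrative assumptions, not measured quantities):

```python
# Toy model of step (4): re-entrant feedback amplifies a faint sensation.
# All numbers are illustrative assumptions, not measured quantities.
def re_entrant_amplification(initial=0.05, gain=1.5, threshold=1.0, max_cycles=20):
    strength, cycles = initial, 0
    while strength < threshold and cycles < max_cycles:
        strength *= gain  # each re-entry of feedback reinforces the sensation
        cycles += 1
    return cycles if strength >= threshold else None  # None: SE never actualized

print(re_entrant_amplification())  # -> 8 re-entries until above threshold
```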
Similarly, all emerged entities, including structure and the functional and experiential (SEs) aspects of consciousness (Vimal 2009b, 2010d), can be elaborated on. This implies that the same unpacking principle for emergence holds for all of them, namely, structure, function, experiences, and all physical entities including human artifacts. Thus, the mystery of emergence could be unraveled in this manner to some extent.
In other words, the mysterious strong emergence can be partly unpacked
by the following three premises:
1. All irreducible higher-level entities (and their properties), such as SEs,
potentially pre-exist.
2. Their irreducible physical and mental properties are potentially super-
posed in the respective physical and mental aspects of the states of all
fundamental entities.
3. A specific SE is realized/actualized by the matching and selection mech-
anism as detailed in our hypothesis H1 of the DAMv framework (Vimal
2008b, 2010c).
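These three premises can be read computationally. The sketch below is only a schematic illustration of the matching-and-selection idea of hypothesis H1; the templates, the similarity score, and the threshold are my own illustrative assumptions, not part of Vimal (2008b, 2010c):

```python
# Schematic toy model of "matching and selection" of a subjective experience (SE).
POTENTIAL_SES = {             # premise 1: potential SEs pre-exist as templates
    "redness":   (1.0, 0.1),  # toy feature vectors (e.g., long-/medium-wave weights)
    "greenness": (0.1, 1.0),
}
THRESHOLD = 0.8               # below this, no SE is actualized

def match_score(feedforward, feedback, template):
    """Premise 2: match feed-forward and feedback signals against a template."""
    combined = [(f + b) / 2 for f, b in zip(feedforward, feedback)]
    dot = sum(c * t for c, t in zip(combined, template))
    norm = (sum(c * c for c in combined) ** 0.5) * (sum(t * t for t in template) ** 0.5)
    return dot / norm if norm else 0.0  # cosine similarity

def actualize(feedforward, feedback):
    """Premise 3: select and actualize the best-fitting potential SE, if any."""
    best = max(POTENTIAL_SES,
               key=lambda se: match_score(feedforward, feedback, POTENTIAL_SES[se]))
    score = match_score(feedforward, feedback, POTENTIAL_SES[best])
    return best if score >= THRESHOLD else None  # no good match: SEs stay latent

# A long-wavelength stimulus with concordant cognitive feedback yields "redness".
print(actualize(feedforward=(0.9, 0.2), feedback=(1.0, 0.1)))
```

In this caricature, actualization fails (returns None) when neither template fits well enough, which corresponds to the claim that potential SEs remain latent unless the matching and selection conditions are satisfied.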

5.4 Future research

5.4.1 Brute fact problem


The dual-aspect monism framework has the problem of the dual-aspect brute fact ("that is the way it is!"); although it is justified, as we clearly have NNs in the brain (physical aspect) and related subjective experiences (mental aspect), it is indeed an assumption. This assumption is similar to the assumptions of God, soul, Brahman, the physics vacuum/empty space with virtual particles, strings in string theory, and other fundamental assumptions. Further investigation is needed to address the brute fact problem. One speculative attempt is as follows. One could ask: what is the origin of the inseparable mental and physical aspects of the state of each entity in the DAMv framework? To address this, let us consider wave-particle duality and the brain's NN-state.
As per Fingelkurts et al. (2010b), the physical brain produces a highly structured and dynamic electromagnetic field. If we apply the concept of the wave-particle inseparable dual-aspect of the state of a wavicle to the brain-NN-state, then it seems that there are three inseparable aspects of the same brain-NN-state: (1) the physical particle-like NN, (2) the wave-like electromagnetic field generated by the activities of the NN in the brain, and (3) the related phenomenal subjective experience (SE). The wave-like electromagnetic field is mind-like as per the mind-like nondual monism based on the wave-only hypothesis (Stapp 2001, 2009a, 2009b). Moreover, as per the CEMI field theory (McFadden 2002a, 2002b, 2006; see also Lehmann, this volume, Chapter 6), SE is like looking from inside the CEMI field. In the previous list, one could argue that (2) and (3) can be combined as the mental aspect of the brain-NN-state. If this is acceptable, then one could argue that: (1) the origin of the mental aspect is the wave aspect of wave-particle duality (as electromagnetic field radiation is mind-like, because a photon can be anywhere within a field of radius 186 000 miles in one second of electromagnetic radiation); and (2) the origin of the physical aspect is its particle aspect. Thus, in physics, it seems that the mental aspect is already built in from first principles, and we do not have to insert the mental aspect into physics by hand. If this is correct, then the brute fact problem is addressed. However, one could argue that both the wave and particle aspects of a wavicle are physical, because the energy (E), the frequency of the wave (ν), and the mass (m) of the particle are related by E = hν = mc², where h is the Planck constant and c is the speed of light. Thus, it is debatable.
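Spelled out (a standard textbook relation, restated here only to make the objection concrete):

\[
E = h\nu = mc^{2} \quad\Longrightarrow\quad \nu = \frac{mc^{2}}{h},
\]

so the wave parameter (frequency \( \nu \)) and the particle parameter (mass \( m \)) are interconvertible, which is why one can argue that both aspects of a wavicle belong on the physical side.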

5.4.2 Origin of subjective experiences


It is unclear where subjective experiences (SEs) (including the conscious experience related to the self) come from. The hypotheses for the origin of SEs are as follows:
I. All SEs actually pre-exist (Vimal 2009c, 2009d). For example, the self in a living system is the SE of the subject (Bruzzo and Vimal 2007), which has been assumed in religions to pre-exist eternally as soul/jiva/ruh. If the pre-existence of the self/soul is true, then it can be interpreted, in the DAMv framework, as (a) the abstract-ego of von Neumann quantum mechanics (Stapp 2001), if it is independent of the brain, with its mental aspect dominant and its physical aspect latent after death. Moreover, (b) when we are alive and fully awake, the mental and physical aspects related to the self are equally dominant. In other words, the physical aspect of the self-related NN-state is the self-related NN, which includes cortical midline structures (Northoff and Bermpohl 2004; Northoff et al. 2006), their functional synchrony (Fingelkurts and Fingelkurts 2011), and other activities. The mental aspect of the self-related NN-state, which is projected inside the brain where it roughly seems to be, is the SE of the subject or self. However, this hypothesis entails the Type-2 explanatory gap, as elaborated in Section 5.2.3. The problem with the actual pre-existence of SEs is that there is no empirical evidence for it, because the pre-existing entity must really pre-exist in at least one of the three realities, namely, our daily cMDR, the samadhi-state ultimate mind-dependent reality (uMDR), or the unknown or partly known MIR. The SEs of subject and objects must satisfy the necessary ingredients of consciousness and self, such as the formation of NNs, wakefulness, re-entry (Edelman 1993), attention, working memory, the four stages of self-organization (Schwalbe 1991), and so on; only then can an actual SE be experienced. Therefore, the hypothesis of the potential pre-existence of SEs is more viable because it does not have such problems.
Furthermore, D'Souza (2009) discussed some debatable (Stenger 2011) evidence of life (and self/soul) after death; in addition, there is some debatable evidence for life after death from near-death experiences, out-of-body experiences, reincarnation research, xenoglossy, hypnosis, deathbed visions, quantum physics, dream research, after-death communications research (Guggenheim and Guggenheim 1995; Schwartz and Russek 2001), synchronicity, and remote viewing.
D'Souza (2009) has tried his best to argue against materialism and to rebut the arguments against life after death, based on data and theories related to (1) near-death experiences, (2) modern (quantum) physics, (3) modern biology, (4) neuroscience, (5) modern philosophy, (6) morality and cosmic justice, and (7) social and individual issues. He has shown that the benefits of the hypothesis of a life after death concern (1) the fear of death, (2) the meaning and purpose of life, (3) moral values, and (4) a better, healthier, and happier life. This hypothesis might be useful in reducing the fear of death, but the remaining benefits can be acquired without it. The arguments against materialism are interesting, but the arguments for life after death are debatable; Stenger (2011) refutes most of the claims. Furthermore, Fingelkurts and Fingelkurts (2009) use a materialistic emergentist metaphysical framework to explain the occurrence of religious experiences in brains. In science, (1) there is no evidence for life after death, God, or the soul, and (2) our dead body disintegrates into the dual-aspect constituents from which the body was originally formed via the reproductive process.
It should be noted that the fundamental metaphysics of most theist religions is the same: (1) idealism and/or (2) interactive substance dualism. However, both views have problems.
Although near-death experiences have been reported, D'Souza's (2009) interpretation that life after death or the soul exists after death is debatable. This interpretation is based on interactive substance dualism, which has problems (Vimal 2010d). In addition, one could interpret these data without invoking an interpretation based on substance dualism. For example, the data can be interpreted using the theist version of the DAMv framework (Vimal 2008b, 2010c), which has the least number of problems, or even, to some extent, using the problematic materialism (Blackmore 1993, 1996; French 2005; Klemenc-Ketis et al. 2010; Stenger 2011).
The DAMv framework is a middle path between (1) idealism and/or
substance dualism and (2) materialism (mind from matter).
There are two versions of the DAMv framework: theist and atheist. This is because the theist-atheist phenomenon is genetic (such as the "God gene": Hamer 2005) and/or acquired (through accidents, how one is raised, and so on). Therefore, the fundamental truth and (worldly-local and cosmic-global) justice should be independent of the theist-atheist phenomenon.
Speculations: (1) In an atheist version related to no-self religions such as Buddhism, after-death karma may be imprinted in some dual-aspect quantum-field entity. (2) In theist religions, karma may be imprinted in the physical aspect of the state of a dual-aspect quantum entity (such as a subtle body or tachyon) that has the soul/jivatman/ruh as its mental aspect. The tachyon as a mind field, in a substance dualism framework, is proposed by Hari (2010, 2011). If the soul exists after death, then it must be a dual-aspect entity/field/particle; this new elementary particle or field (or its effects) still needs to be detected. (3) God/Brahman/Allah is the fundamental primal dual-aspect entity from which other entities arose via the co-evolution and co-development of both aspects.
One could argue: who created God? The usual answer (He is omnipresent, omnipotent, and omniscient, so nobody created Him because He always existed) will have a hard time satisfying atheists and scientists. Some could argue that He is created by the human mind. If God/Brahman/Allah and soul/Atman/ruh exist, they must be dual-aspect entities. It is argued that Brahman is beyond mind (Adi Sankaracharya 1950; Swami Krishnananda 1983); therefore, perhaps, He is an entity in MIR; MIR is either unknown or partly known via our cMDR and uMDR. The relationships between entities in MIR are presumably the same as in cMDR and uMDR, and hence these relationships are invariant over the three realities. Alternatively, one could argue that the unmanifested state of Brahman is the state of the primal aspectless neutral entity; this has been interpreted in Section 5.2.2.5 as both its mental and physical aspects being latent.
My view is as follows: we do not know if God, the soul, and life after death exist, because we do not have scientific evidence and cannot prove or disprove them. In addition, in cMDR and/or uMDR, these concepts arose from human minds to begin with. Thus, at the present time, they are beyond scientific investigation, because they need testable hypotheses that are acceptable to skeptics. Therefore, the best we can do is (1) to carry out rigorous scientific investigations on topics related to life before death and (2) to keep trying our best in the investigation of life after death, the soul, and God. Both science and religion are certainly needed in our daily lives because they are beneficial in a complementary manner.
II. Only cardinal SEs actually pre-exist (including the SE of the subject), and other SEs emerge or are derived from them. For example, all colors can be matched psychophysically with three cardinals/primaries (red, green, and blue) (Vimal et al. 1987). However, the subjective experience of each color is unique and appears irreducible. Moreover, this hypothesis entails the Type-2 explanatory gap, as elaborated in hypothesis (I) and Section 5.2.3. Therefore, cardinal SEs potentially pre-exist.
III. All SEs emerge or are derived from the interaction of one proto-experience (such as the self that actually pre-existed) and the three gunas (qualities) of the eastern Vedic system. However, the three gunas (sattva, rajas, and tamas: part of Prakriti) were initially postulated for emotion-related SEs in the Vedic system, and it is unclear how other SEs can be derived.
IV. The SE of the subject (self-as-the-knower) actually pre-exists, but the SEs of objects (such as redness) potentially pre-exist and somehow emerge or are actualized during the matching and selection processes. The problem of hypothesis (I) still remains.
V. SEs potentially pre-exist but somehow emerge or are actualized during the matching and selection processes (Vimal 2008b, 2010c); this seems somewhat consistent with Atmanspacher (2007). This is still mysterious, but it is acceptable to most investigators, because one could argue that every entity that empirically exists must potentially exist, in analogy to a tree that potentially pre-exists in its seed.
In Pereira Jr.'s Triple-Aspect Monism (TAM) framework (Pereira Jr., this volume, Chapter 10), the three aspects of each entity-state are (1) physical-non-mental, (2) informational or mental-non-conscious, and (3) conscious mental. In the DAMv framework, they respectively correspond to (1) the mind-independent thing-in-itself physical aspect, (2) the third-person mind-dependent physical aspect, and (3) the first-person mind-dependent mental aspect.
It seems that TAM has combined both MIR and cMDR; that is, its first aspect is the MIR-physical-aspect (the MIR-mental-aspect is missing), its second aspect is cMDR-physical (third-person-mental-non-conscious), and its third aspect is cMDR-mental (first-person-mental-conscious). If the missing MIR-mental-aspect were also considered, then there would be four aspects: two for MIR and two for cMDR. If TAM is divided into MIR and cMDR, we get the DAMv framework in MIR (MIR-physical-aspect: such as mass, charge, spin; MIR-mental-aspect: such as attractive and repulsive forces) and cMDR (cMDR-physical-aspect: such as third-person-objective NNs and activities, including their third-person appearances; cMDR-mental-aspect: such as first-person-subjective experiences). Therefore, TAM can be reduced to the DAMv framework, which has only two parameters (mental and physical) for all entities (including MIR-entities); hence DAMv is more parsimonious than TAM by Occam's Razor.
In addition, the origin of SEs is potential elementary forms that are actualized in an individual's mind. For TAM, SEs are affects, feelings, and actions elicited by the reception of a complex of forms by an individual. Forms are the mental aspect of the Universe; they are fully actualized only in the consciousness of individuals. However, one needs to elaborate precisely how the reception of the mental aspect of the universe (a complex of forms) by an individual human subject (such as a trichromat) can elicit a specific subjective experience such as redness.
VI. None of the previous: there is some unknown mechanism for the origin of SEs that still needs to be hypothesized and tested.
The degree of clarity/transparency regarding precisely how a specific SE is actualized decreases from hypothesis (I) to hypothesis (V). The degree of mystery of the emergence of SEs also increases from hypotheses (II) to (V). Further research is needed to test these hypotheses and to unpack the mystery of emergence fully.

5.5 Concluding remarks


1. The mystery of emergence (including the emergence of the self [Edelman et al. 2011] and of consciousness [Allen and Williams 2011] from brain-body-environment interactions) can be partly unpacked by the hypothesis of the pre-existence of potential properties and by the matching and selection mechanisms.
2. In the DAMv framework (the dual-aspect monism framework with dual-mode and varying degrees of dominance of aspects, depending on the levels of entities), we propose the following propositions (3–7):
3. A SE related to objects occurs in the respective NN and is projected onto the objects, where it seems to be, through the matching and selection mechanisms. In other words, the representation of objects in the NN generates an NN-state that has two inseparable aspects: mental (first-person perspective) and physical (third-person perspective). The physical aspect of the NN-state is the NN and its activities encompassing brain-body-environment, and its mental aspect is the SEs of objects and of the subject (self). The NN includes self-related areas (such as the brain-stem for the protoself, and cortical midline structures and other areas for the core self and autobiographical self) and emotion-related areas, in addition to the brain areas related to stimulus-dependent feed-forward and cognitive feedback signals. These cognitive feedback signals interact with the stimulus-dependent feed-forward signals in a re-entrant manner; the SE is experienced by the whole NN, which includes the self-related areas.
4. The self, as the SE of the subject and/or the awareness of the awareness of objects, is the mental aspect of the self-related NN-state that is generated by the representation due to the re-entrant activities in the self-related NN. In addition, the related physical aspect is the self-related NN and its activities. If there are objects to be experienced, the self-related activities need to be synchronized with all other related activities, such as stimuli-related and emotion-related activities. Alternatively, one could argue for a global NN that includes the NNs for stimulus representations and the self- and emotion-related NNs; they are bound through the rapid re-entry of signals; see also Van Gulick (2004, 2006) for the Higher-Order Global States model.
5. Damasio (2010) proposed (a) two stages of the evolutionary development of the self, the self-as-knower (I) and the self-as-object (me), and (b) three steps of the self-as-knower: the protoself, the core self, and the autobiographical self. Since the self can be both subject and object of experience/perception, reflexivism and reflectionism/introspectionism are not deeply incompatible; rather, they reveal different facets of the self (Chadha 2011).
6. There are two stages of processing: first, there is non-conceptual or phenomenal awareness/consciousness/experience, which is then followed by conceptual or access awareness when cognitive signals, such as attentional- and memory-related signals, kick in; see also Prevos (2002a, 2002b), Chadha (2010, 2011), and Hanna and Chadha (2011).
7. We do not know if there exists a dual-aspect entity after death that carries the impressions/traces of our good and bad karmas/actions (as Buddhism suggests) or that has the attributes of Atman/soul/ruh (as theist religions suggest). We do not yet have proof of the existence or non-existence of God/Brahman/Allah, soul/Atman/ruh, and/or life after death in the DAMv (or any other) framework. If the soul exists after death, then it should be a dual-aspect entity; this entity (or its effects) still needs to be detected. Therefore, further research is needed, which is beyond the scope of this chapter.

REFERENCES
Allen M. and Williams G. (2011). Consciousness, plasticity, and connectomics: The role of intersubjectivity in human cognition. Front Psychol 2:20, e-document, 16 pp. URL: www.frontiersin.org/Consciousness_Research/10.3389/fpsyg.2011.00020/abstract (accessed February 28, 2013).
Aspect A. (1999). Bell's inequality test: More ideal than ever. Nature 398:189–190.
Atmanspacher H. (2007). Contextual emergence from physics to cognitive neuroscience. J Consciousness Stud 14(1–2):18–36.
Baars B. J. (2005). Global workspace theory of consciousness: Toward a cognitive neuroscience of human experience. Prog Brain Res 150:45–53.
Bedeau M. A. (1997). Weak emergence. Philos Perspectives 11:375–399.
Bernroider G. and Roy S. (2005). Quantum entanglement of K+ ions, multiple channel states and the role of noise in the brain. In Stocks N. G., Abbott D., and Morse A. P. (eds.) Fluctuations and Noise in Biological, Biophysical, and Biomedical Systems III, SPIE Conference Proceedings, 584129.
Blackmore S. (1993). Dying to Live: Science and the Near Death Experience. London: Grafton.
Blackmore S. J. (1996). Near-death experiences. J Roy Soc Med 89(2):73–76.
Bohm D. (1990). A new theory of the relationship of mind and matter. Philos Psychol 3(2):271–286.
Broad C. D. (1925). The Mind and Its Place in Nature. London: Routledge & Kegan Paul.
Bruzzo A. A. and Vimal R. L. P. (2007). Self: An adaptive pressure arising from self-organization, chaotic dynamics, and neural Darwinism. J Integr Neurosci 6(4):541–566.
Caponigro M., Jiang X., Prakash R., and Vimal R. L. P. (2010). Quantum entanglement: Can we see the implicate order? Philosophical speculations. NeuroQuantology 8(3):378–389.
Caponigro M. and Vimal R. L. P. (2010). Quantum interpretation of Vedic theory of mind: An epistemological path and objective reduction of thoughts. Journal of Consciousness Exploration and Research 1(4):402–481.
Chadha M. (2010). Perceptual experience and concepts in classical Indian philosophy. In Zalta E. N. (ed.) The Stanford Encyclopedia of Philosophy (Winter 2010 Edition). URL: http://plato.stanford.edu/archives/win2010/entries/perception-india/ (accessed February 28, 2013).
Chadha M. (2011). Self-awareness: Eliminating the myth of the "invisible subject". Philos East West 61(3):453–467.
Chalmers D. J. (1995a). Facing up to the problem of consciousness. J Consciousness Stud 2:200–219.
Chalmers D. J. (1995b). The puzzle of conscious experience. Scientific American Mind 237(6):62–68.
Chalmers D. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford/New York: Oxford University Press.
Chalmers D. J. (2006). Strong and weak emergence. In Clayton P. and Davies P. (eds.) The Re-emergence of Emergence. New York: Oxford University Press, pp. 244–256.
Corning P. A. (2012). The re-emergence of emergence, and the causal role of synergy in emergent evolution. Synthese 185(2):295–317.
Crick F. and Clark J. (1994). The astonishing hypothesis. J Consciousness Stud 1(1):10–16.
Crick F. and Koch C. (1998). Consciousness and neuroscience. Cereb Cortex 8(2):97–107.
Crick F. and Koch C. (2003). A framework for consciousness. Nat Neurosci 6(2):119–126.
Damasio A. R. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt Brace.
Damasio A. R. (2010). Self Comes to Mind: Constructing the Conscious Brain. New York: Pantheon.
Dang-Vu T. T., Schabus M., Desseilles M., Albouy G., Boly M., Darsaud A., et al. (2008). Spontaneous neural activity during human slow wave sleep. Proc Natl Acad Sci USA 105(39):15160–15165.
Davia C. J. (2006). Life, catalysis and excitable media: A dynamic systems approach to metabolism and cognition. In Tuszynski J. (ed.) The Emerging Physics of Consciousness. Heidelberg: Springer, pp. 229–260.
Dehaene S., Kerszberg M., and Changeux J. P. (1998). A neuronal model of a global workspace in effortful cognitive tasks. P Natl Acad Sci USA 95(24):14529–14534.
D'Souza D. (2009). Life after Death: The Evidence. Washington, DC: Regnery Publishing.
Edelman G. M. (1993). Neural Darwinism: Selection and reentrant signaling in higher brain function. Neuron 10(2):115–125.
Edelman G. M. (2003). Naturalizing consciousness: A theoretical framework. P Natl Acad Sci USA 100(9):5520–5524.
Edelman G. M. (2004). Wider than the Sky: The Phenomenal Gift of Consciousness. New Haven, CT: Yale University Press.
Edelman G. M., Gally J. A., and Baars B. J. (2011). Biology of consciousness. Front Psychol 2:4, doi: 10.3389/fpsyg.2011.00004.
Einstein A., Podolsky B., and Rosen N. (1935). Can quantum-mechanical description of physical reality be considered complete? Phys Rev 47:777–780.
Feigl H. (1967). The "Mental" and the "Physical": The Essay and a Postscript. Minneapolis: University of Minnesota Press.
Fingelkurts A. A. and Fingelkurts A. A. (2001). Operational architectonics of the human brain biopotential field: Towards solving the mind-brain problem. Brain Mind 2(3):261–296.
Fingelkurts A. A. and Fingelkurts A. A. (2004). Making complexity simpler: Multivariability and metastability in the brain. Int J Neurosci 114(7):843–862.
Fingelkurts A. A. and Fingelkurts A. A. (2005). Mapping of the brain operational architectonics. In Chen F. J. (ed.) Focus on Brain Mapping Research. Hauppauge, NY: Nova Science Publishers, pp. 59–98.
Fingelkurts A. A. and Fingelkurts A. A. (2009). Is our brain hardwired to produce God, or is our brain hardwired to perceive God? A systematic review on the role of the brain in mediating religious experience. Cogn Process 10(4):293–326.
Fingelkurts A. A. and Fingelkurts A. A. (2011). Persistent operational synchrony within brain default-mode network and self-processing operations in healthy subjects. Brain Cogn 75(2):79–90.
Fingelkurts A. A., Fingelkurts A. A., and Neves C. F. H. (2009). Phenomenological architecture of a mind and operational architectonics of the brain: The unified metastable continuum. J New Math Nat Comput 5(1):221–244.
Fingelkurts A. A., Fingelkurts A. A., and Neves C. F. H. (2010a). Emergentist monism, biological realism, operations and brain-mind problem. Phys Life Rev 7(2):264–268.
Fingelkurts A. A., Fingelkurts A. A., and Neves C. F. H. (2010b). Machine consciousness and artificial thought: An operational architectonics model guided approach. Brain Res 1428:80–92.
Fingelkurts A. A., Fingelkurts A. A., and Neves C. F. H. (2010c). Natural world physical, brain operational, and mind phenomenal space-time. Phys Life Rev 7(2):195–249.
Freeman W. J. (1999). Consciousness, intentionality and causality. J Consciousness Stud 6:143–172.
Freeman W. J. and Vitiello G. (2011). The dissipative brain and non-equilibrium thermodynamics. J Cosmology 14:4461–4468.
French C. C. (2005). Near-death experiences in cardiac arrest survivors. In Laureys S. (ed.) Progress in Brain Research: The Boundaries of Consciousness: Neurobiology and Neuropathology. Amsterdam: Elsevier, pp. 351–367.
Globus G. (2006). The saltatory sheaf-odyssey of a monadologist. NeuroQuantology 4(3):210–221.
Goldstein J. (1999). Emergence as a construct: History and issues. Emergence 1(1):49–72.
Guggenheim B. and Guggenheim J. (1995). Hello from Heaven: A New Field of Research – After-Death Communication – Confirms That Life and Love Are Eternal. New York: Bantam.
Hamer D. (2005). The God Gene: How Faith Is Hardwired into Our Genes. New York: Anchor Books.
Hameroff S. (1998). "Funda-Mentality": Is the conscious mind subtly linked to a basic level of the universe? Trends Cogn Sci 2(4):119–127.
Hameroff S. and Penrose R. (1998). Quantum computation in brain microtubules? The Penrose–Hameroff "Orch OR" model of consciousness. Philos T Roy Soc A 356:1869–1896.
Hameroff S. and Powell J. (2009). The conscious connection: A psycho-physical bridge between brain and pan-experiential quantum geometry. In Skrbina D. (ed.) Mind That Abides: Panpsychism in the New Millennium. Amsterdam: John Benjamins, pp. 109–127.
Hanna R. and Chadha M. (2011). Non-conceptualism and the problem of perceptual self-knowledge. Eur J Philos 19:184–223.
Hari S. (2010). Eccles's mind field, Bohm–Hiley active information, and tachyons. Journal of Consciousness Exploration and Research 1(7):850–863.
Hari S. D. (2011). Mind and tachyons: How tachyon changes quantum potential and brain creates mind. NeuroQuantology 9(2), e-document.
Hartline P. H., Vimal R. L. P., King A. T., Kurylo D. D., and Northmore D. P. (1995). Effects of eye position on auditory localization and neural representation of space in superior colliculus of cats. Exp Brain Res 104(3):402–408.
Hiley B. J. and Pylkkanen P. (2005). Can mind affect matter via active information? Mind Matter 3(2):7–27.
't Hooft G. (ed.) (2005). Fifty Years of Yang–Mills Theory. Hackensack, NJ, and London: World Scientific Publishing.
Kant I. (1929). Critique of Pure Reason. Trans. Smith N. K. London: Macmillan.
Kim J. (1999). Making sense of emergence. Philos Stud 95:3–36.
Klemenc-Ketis Z., Kersnik J., and Grmec S. (2010). The effect of carbon dioxide on near-death experiences in out-of-hospital cardiac arrest survivors: A prospective observational study. Crit Care 14(2):R56.
Koch C. (2012). Consciousness: Confessions of a Romantic Reductionist. Cambridge, MA: MIT Press.
Krauss L. M. (2012). A Universe from Nothing: Why There Is Something Rather than Nothing? New York: Free Press.
Leibniz G. W. (1714). Monadologie (The Monadology: An Edition for Students). Trans. Rescher N. University of Pittsburgh Press.
Levin J. (2006). What is a phenomenal concept? In Alter T. and Walter S. (eds.) Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism. Oxford University Press, pp. 87–110.
Levin J. (2008). Taking type-B materialism seriously. Mind Lang 23(4):402–425.
Levine J. (1983). Materialism and qualia: The explanatory gap. Pac Philos Quart 64:354–361.
Litt A., Eliasmith C., Kroon F. W., Weinstein S., and Thagard P. (2006). Is the brain a quantum computer? Cognitive Sci 30(3):593–603.
Loar B. (1990). Phenomenal states. Philosophical Perspectives 4:81–108.
Loar B. (1997). Phenomenal states. In Block N., Flanagan O., and Guzeldere G. (eds.) The Nature of Consciousness, revised edn. Cambridge, MA: MIT Press, pp. 597–616.
Lutz A. and Thompson E. (2003). Neurophenomenology: Integrating subjective experience and brain dynamics in the neuroscience of consciousness. J Consciousness Stud 10(9–10):31–52.
Lyon P. (2004). Autopoiesis and knowing: Reflections on Maturana's biogenic explanation of cognition. Cybernetics and Human Knowing 11(4):2116.
MacGregor R. J. and Vimal R. L. P. (2008). Consciousness and the structure of matter. J Integr Neurosci 7(1):75–116.
Maturana H. (2002). Autopoiesis, structural coupling and cognition: A history of these and other notions in the biology of cognition. Cybernetics and Human Knowing 9(3–4):5–34.
McFadden J. (2002a). The conscious electromagnetic information (Cemi) field theory: The hard problem made easy? J Consciousness Stud 9(8):45–60.
McFadden J. (2002b). Synchronous firing and its influence on the brain's electromagnetic field: Evidence for an electromagnetic field theory of consciousness. J Consciousness Stud 9(4):23–50.
McFadden J. (2006). The CEMI field theory: Seven clues to the nature of consciousness. In Tuszynski J. A. (ed.) The Emerging Physics of Consciousness. Heidelberg: Springer, pp. 385–404.
McLaughlin B. P. (1992). The rise and fall of British emergentism. In Beckermann A., Flohr H., and Kim J. (eds.) Emergence or Reduction? Essays on the Prospects of Nonreductive Physicalism (Foundations of Communication). Berlin: De Gruyter, pp. 49–93.
Nagarjuna and Garfield J. L. (1995). The Fundamental Wisdom of the Middle Way: Nagarjuna's Mulamadhyamakakarika (translation and commentary by Garfield J. L.). New York/Oxford: Oxford University Press.
Nani A. and Cavanna A. E. (2011). Brain, consciousness, and causality. J Cosmology 14:4472–4483.
Northoff G. and Bermpohl F. (2004). Cortical midline structures and the self. Trends Cogn Sci 8(3):102–107.
Northoff G., Heinzel A., de Greck M., Bermpohl F., Dobrowolny H., and Panksepp J. (2006). Self-referential processing in our brain: A meta-analysis of imaging studies on the self. Neuroimage 31(1):440–457.
Papineau D. (2006). Phenomenal and perceptual concepts. In Alter T. and Walter S. (eds.) Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism. Oxford University Press, pp. 111–144.
Pereira A., Jr. (2007). Astrocyte-trapped calcium ions: The hypothesis of a quantum-like conscious protectorate. Quantum Biosystems 2:80–92.
Poznanski R. R. (2002). Towards an integrative theory of cognition. J Integr Neurosci 1(2):145–156.
Poznanski R. R. (2009). Model-based neuroimaging for cognitive computing. J Integr Neurosci 8(3):345–369.
Prevos P. (2002a). A persistent self? The Horizon of Reason, weblog post. URL: http://prevos.net/humanities/philosophy/persistent/ (accessed March 6, 2013).
188 Ram L. P. Vimal

Prevos P. (2002b). The self in Indian philosophy. The Horizon of Reason, weblog post. URL: http://prevos.net/humanities/philosophy/self/ (accessed February 28, 2013).
Radhakrishnan S. (1960). Brahma Sutra: The Philosophy of Spiritual Life. London: Ruskin House, George Allen & Unwin Ltd.
Raina Swami Lakshman Joo (1985). Kashmir Shaivism: The Secret Supreme. Srinagar and New York: Universal Shaiva Trust and State University of New York Press.
Raju P. T. (1985). Structural Depths of Indian Thought (SUNY Series in Philosophy). New York and New Delhi: State University of New York and South Asian Publishers.
Rudrauf D., Lutz A., Cosmelli D., Lachaux J. P., and Le Van Quyen M. (2003). From autopoiesis to neurophenomenology: Francisco Varela's exploration of the biophysics of being. Biol Res 36(1):27–65.
Rutherford D. (1995). Metaphysics: The later period. In Jolley N. (ed.) The Cambridge Companion to Leibniz. Cambridge University Press.
Sankaracharya A. (1950). The Brihadaranyaka Upanishad, 3rd Edn. Trans. Madhavananda S. Mayavati, Almora, Himalayas, India: Swami Yogeshwarananda, Advaita Ashrama.
Sayre K. (1976). Cybernetics and the Philosophy of Mind. Atlantic Highlands: Humanities Press.
Schwalbe M. L. (1991). The autogenesis of the self. J Theor Soc Behav 21:269–295.
Schwartz G. E. and Russek L. G. (2001). Celebrating Susy Smith's soul: Preliminary evidence for the continuance of Smith's consciousness after her physical death. J Relig Psychical Res 24(2):82–91.
Searle J. (2007). Biological naturalism. In Velmans M. and Schneider S. (eds.) The Blackwell Companion to Consciousness. Oxford: Blackwell, pp. 325–334.
Searle J. R. (2004). Comments on Noë and Thompson, "Are there neural correlates of consciousness?" J Consciousness Stud 11(1):80–82.
Shoemaker S. (2002). Kim on emergence. Philos Stud 58(1–2):53–63.
Skrbina D. (2009). Minds, objects, and relations: Toward a dual-aspect ontology. In Skrbina D. (ed.) Mind that Abides: Panpsychism in the New Millennium. Amsterdam: John Benjamins, pp. 361–382.
Stapp H. P. (2001). Von Neumann's formulation of quantum theory and the role of mind in nature. URL: http://arxiv.org/abs/quant-ph/0101118 (accessed February 28, 2013).
Stapp H. P. (2009a). Mind, Matter, and Quantum Mechanics, 3rd Edn. Heidelberg: Springer.
Stapp H. P. (2009b). Nondual quantum duality. Plenary talk at Marin Conference Science and Nonduality (Oct 25, 2009). URL: www-physics.lbl.gov/stapp/NondualQuantumDuality.pdf (accessed February 28, 2013).

Stenger V. J. (2011). Life after death: Examining the evidence. In Loftus J. (ed.) The End of Christianity. Amherst: Prometheus Books, pp. 305–332.
Steriade M., McCormick D. A., and Sejnowski T. J. (1993). Thalamocortical oscillations in the sleeping and aroused brain. Science 262(5134):679–685.
Stubenberg L. (2010). Neutral monism. In Zalta E. N. (ed.) The Stanford Encyclopedia of Philosophy (Spring 2010 Edition). URL: http://plato.stanford.edu/archives/spr2010/entries/neutral-monism/ (accessed February 28, 2013).
Swami Krishnananda (1983). The Brihadaranyaka Upanishad. Rishikesh, Himalayas, India: The Divine Life Society.
Thompson E. and Varela F. J. (2001). Radical embodiment: Neural dynamics and consciousness. Trends Cogn Sci 5(10):418–425.
Tononi G. (2004). An information integration theory of consciousness. BMC Neurosci 5(1):42.
Trehub A. (2007). Space, self, and the theater of consciousness. Conscious Cogn 16:310–330.
Van Gulick R. (2004). Higher-order global states (HOGS): An alternative higher-order model of consciousness. In Gennaro R. (ed.) Higher-order Theories of Consciousness: An Anthology. Amsterdam: John Benjamins Publishing Company, pp. 67–92.
Van Gulick R. (2006). Mirror, mirror – is that all? In Kriegel U. and Williford K. (eds.) Self-Representational Approaches to Consciousness. Cambridge, MA: MIT Press, pp. 11–39.
Varela F. G., Maturana H. R., and Uribe R. (1974). Autopoiesis: The organization of living systems, its characterization and a model. Curr Mod Biol 5(4):187–196.
Varela F. J. (1981). Autonomy and autopoiesis. In Roth G. and Schwegler H. (eds.) Self-Organizing Systems: An Interdisciplinary Approach. Frankfurt and New York: Campus, pp. 14–24.
Varela F. J. (1996). Neurophenomenology: A methodological remedy to the hard problem. J Consciousness Stud 3:330–350.
Velmans M. (2000). Understanding Consciousness. London: Routledge/Psychology Press.
Velmans M. (2008). Reflexive monism. J Consciousness Stud 15(2):5–50.
Vimal R. L. P. (2008a). Attention and emotion. Annual Review of Biomedical Sciences (ARBS) 10:84–104.
Vimal R. L. P. (2008b). Proto-experiences and subjective experiences: Classical and quantum concepts. J Integr Neurosci 7(1):49–73.
Vimal R. L. P. (2009a). Dual aspect framework for consciousness and its implications: West meets east for sublimation process. In Derfer G., Wang Z., and Weber M. (eds.) The Roar of Awakening: A Whiteheadian Dialogue Between Western Psychotherapies and Eastern Worldviews. Frankfurt and Lancaster: Ontos, pp. 39–70. URL: http://books.google.co.uk/books?id=D69Li9vJudUC&printsec=frontcover&source=gbs_ge_summary_r&cad=0&v=onepage&q&f=false (accessed February 28, 2013).
Vimal R. L. P. (2009b). Meanings attributed to the term 'consciousness': An overview. J Consciousness Stud 16(5):9–27.
Vimal R. L. P. (2009c). Subjective experience aspect of consciousness, part I: Integration of classical, quantum, and subquantum concepts. NeuroQuantology 7(3):390–410.
Vimal R. L. P. (2009d). Subjective experience aspect of consciousness, part II: Integration of classical and quantum concepts for emergence hypothesis. NeuroQuantology 7(3):411–434.
Vimal R. L. P. (2010a). Consciousness, non-conscious experiences and functions, proto-experiences and proto-functions, and subjective experiences. Journal of Consciousness Exploration and Research 1(3):383–389.
Vimal R. L. P. (2010b). Interactions among minds/brains: Individual consciousness and inter-subjectivity in dual-aspect framework. Journal of Consciousness Exploration and Research 1(6):657–717.
Vimal R. L. P. (2010c). Matching and selection of a specific subjective experience: Conjugate matching and subjective experience. J Integr Neurosci 9(2):193–251.
Vimal R. L. P. (2010d). On the quest of defining consciousness. Mind Matter 8(1):93–121.
Vimal R. L. P. (2010e). Towards a theory of everything, part I: Introduction of consciousness in electromagnetic theory, special and general theory of relativity. NeuroQuantology 8(2):206–230.
Vimal R. L. P. (2010f). Towards a theory of everything, part II: Introduction of consciousness in Schrödinger equation and standard model using quantum physics. NeuroQuantology 8(3):304–313.
Vimal R. L. P. (2010g). Towards a theory of everything, part III: Introduction of consciousness in loop quantum gravity and string theory and unification of experiences with fundamental forces. NeuroQuantology 8(4):571–599.
Vimal R. L. P. and Davia C. J. (2008). How long is a piece of time? Phenomenal time and quantum coherence: Toward a solution. Quantum Biosystems 2:102–151.
Vimal R. L. P. and Davia C. J. (2010). Phenomenal time and its biological correlates. Journal of Consciousness Exploration and Research 1(5):560–572.
Vimal R. L. P., Pokorny J. M., and Smith V. C. (1987). Appearance of steadily viewed light. Vision Res 27(8):1309–1318.
Vitiello G. (1995). Dissipation and memory capacity in the quantum brain model. Int J Mod Phys B 9:973–989.
Wilson E. O. (1975). Sociobiology: The New Synthesis. Cambridge, MA: Harvard
University Press.
6 Consciousness: Microstates of the brain's
electric field as atoms of thought and emotion

Dietrich Lehmann

6.1 Properties of consciousness 192


6.1.1 Brain functions and brain-independent personal agents 192
6.1.2 The two basic and irritating properties of consciousness 192
6.1.2.1 Consciousness as inner aspect of the state of a
complex system 192
6.1.2.2 Consciousness as emergent property 193
6.1.3 Consciousness and the arrow of time in individuals, human
history, and the species 194
6.1.4 Consciousness is state dependent 196
6.1.5 Consciousness is indivisible, but its contents are separable
entities 197
6.1.6 Altering the state of consciousness 197
6.1.7 Localizing consciousness in the brain 198
6.1.8 What is the developmental advantage of consciousness? 199
6.1.9 Why is there consciousness at all? 199
6.1.10 Consciousness and free will 200
6.2 Consciousness and brain electric activity 201
6.2.1 Brain functions and brain mechanisms 201
6.2.2 Brain electric fields and their functional macrostates 202
6.2.3 Discontinuous changes of the brain's functional state:
Functional microstates 203
6.2.4 Microstates of the spontaneous stream of consciousness 206
6.2.5 Microstates of input-driven mentation 206
6.2.6 External and internal input of same content is processed in the
same cortical area: Visual-concrete versus abstract thoughts 207
6.2.7 Microstates of emotions 207
6.2.8 Information content versus task effects 208
6.2.9 The momentary microstate determines the fate of incoming
information 209
6.2.10 Split-second units of brain functioning 209
6.2.11 Microstate syntax: From atoms to molecules of thought 209
6.2.12 Microstates in healthy and pathological states of consciousness 210
6.2.13 Microstates and fMRI resting state networks and default states 210
6.2.14 Consciousness and its building blocks 210
6.2.15 Discontinuous brain activity and the stream of consciousness 211

6.1 Properties of consciousness

6.1.1 Brain functions and brain-independent personal agents


The following considerations are based on the concept that consciousness and its contents are functions of brains.1 The alternative, dualistic concept, which in its most radical version assumes that brain-independent personal agents of a non-physical nature control human brain functions, is not considered here.

6.1.2 The two basic and irritating properties of consciousness


Consciousness has two properties that continue to cause controversial
debates:
1. It is the inner aspect of the brain system's functional state, a privileged, subjective (first-person) experience that is not directly accessible to third persons, and
2. It is an emergent property of the brain system's functional state, not available in its separate parts.
I suspect that the combination of these two vexing properties has given
consciousness a particularly enigmatic standing even though separately,
each of the two properties is commonly observable in other contexts.

6.1.2.1 Consciousness as inner aspect of the state of a complex system The inner aspect of the brain system's functional state is the first-person, privileged, subjective experience of consciousness. The outer aspect of the system's functional state, the third person's view, is proposed to be the brain's electromagnetic field (discussed in Section 6.2 of this chapter; see also Vimal: Chapter 5, Trehub: Chapter 7 and Pereira Jr.: Chapter 10, all in this volume).
Perception of the inner aspect of a complex system is not an unusual experience. The inner aspect does not only exist as a consciousness state of a single person. The inner aspect of a higher-order system is, for any given individual, available when he/she becomes part of such a higher-order system, but it is not fully available for those who are not insiders. Depending on the higher-order system's nature and state, the inner
aspect that is experienced by the participating individual will vary. For
example, as a soccer fan during a critical match and as a family member
during a peaceful Christmas gathering, the same person will participate

1 This chapter does not offer a general review. It is centered around the work done by our
research unit, represents my proposals, and specifically draws on results obtained in our
experimental studies.
in and experience the very different feelings of the respective group. An exterior observer (the third person) might distinguish these states, but the experience of how that "feels" is reserved to the inner aspect, to the involved participant. An exhaustive description of a foreign society is incomplete if done by an uninvolved exterior observer; it can become more complete if there is an embedded observer who over time can become aware of the inner aspects of the society during different conditions. This ethnological approach has become established in anthropology (Boas 1920; Lévi-Strauss 1955) and meanwhile has extended even to war reporting.
Since the experience of consciousness is privileged and personal (for me, the first-person perspective of brain activity), I can only assume that other people have a corresponding experience. On the other hand, this leads to the assumption that other living beings may also experience the inner aspect of their brain functions; and where in the order of species is the limit? Eventually, why should non-living things not have an inner aspect, even though it might be very rudimentary, as has been conceived in panpsychism? (See also Vimal, this volume, Chapter 5; also see the TAM concept as a variety of proto-panpsychism in Pereira Jr., this volume, Chapter 10.)
The emphasis on a privileged first-person experience in fact appears somewhat exaggerated in view of the fact that already about a hundred years ago C. G. Jung (1973), acting as a third person in his association experiments, could identify private conscious and non-conscious personal first-person preoccupations in his subjects. Moreover, fMRI studies do "mind reading" of private thoughts and intended decisions (see, e.g., Haynes 2009, 2011). Information thus obtained clearly does not let the observer directly experience how it feels, but it typically triggers the recall of related personal experiences and their associated empathy.

6.1.2.2 Consciousness as emergent property The property of emergence is ubiquitous. Typically, there are no deductive explanations for emergent properties; they simply exist. The often-quoted example is that knowledge of the properties of oxygen and hydrogen atoms cannot predict the behavior of water molecules.
Thus, emergence is by no means unique to consciousness; only in the case of consciousness is it more deeply felt to be a problem (see also Vimal, Chapter 5 of this volume). Even some man-made complex systems such as robots or the stock market exhibit unpredictable behavior, but apparently nobody suspects non-natural causes for these observations. As someone said (I do not remember who): consciousness in the brain is as mysterious as speed in a railroad engine; it is automatically
present as soon as all parts are in place and in the requisite condition (gasoline, temperature, ignition machinery), and it is not present if not all requisite conditions are met. Speed (consciousness) is not a property of the engine's (of the brain's) individual parts, and is not available if the functional state of the system (its condition) is not adequate. The difference between railroad engines that produce speed and brains that produce consciousness is that we know how to construct railroad engines and how to put them into the adequate condition, but that we do not know how to do this for brains. But this metaphor only concerns the emergent property of consciousness, not the incorporation of consciousness as the measurable configuration and dynamics of the brain's electromagnetic field that is discussed in Section 6.2.
It is reasonable to assume that brains with increasing structural com-
plexity develop higher levels of consciousness, thereby providing the pos-
sibility for richer, more detailed inner aspects.
Only relatively few components of brain work qualify as candidates for
access into consciousness. The vast majority of brain processes run their
course non-consciously (see, e.g., Dehaene and Changeux 2011). For
example, it is not possible to consciously know why something cannot be
remembered and why something else can be remembered, although the
results of these inaccessible processes are available in consciousness as
recall or failure to recall. Thus, the results of the non-conscious processes
qualify as candidates for consciousness, but few actually reach this stage.

6.1.3 Consciousness and the arrow of time in individuals, human history, and the species
Consciousness has a beginning and an end, like everything else. Con-
sciousness in the individual is a continually re-generated phenomenon; it
is obviously influenced by and depends on numerous external and inter-
nal factors; it inherently waxes and wanes with the circadian wake-sleep
cycle. Consciousness develops in every individual, it developed during
the course of history in humans, and it developed over millennia in the
species.
The development of consciousness in an individual shows steps: most prominent is the observation that a child begins to use the word "I" typically not before the second year of age, while younger children refer to themselves by their name.
Children's understanding that not everything that happens comes from the outside world grows with development. This growing insight into the nature of the world is in direct conflict with what the infant probably had
to start learning right after birth: that subjective experience must be
projected into the outside world, and that this outside world will act in
ways that cannot be directly controlled by the self. But over time, multiple
trial and error experience makes it evident to the child that not everything
that happens is due to independent outside activity: experience shows
that it is not helpful to hit the wall if one happened to bounce one's head
against it. Hence, the laboriously acquired ability to project experience
to the outside world must be restricted to appropriate cases. This reality-
driven distinction is not always self-evident and simple, especially in
stressful conditions: we, as adults, still wrestle with the temptation to
ascribe to the outside world much of what we generate in ourselves.
In the course of individual development, the widening and refining
of the properties of consciousness is evident in the individual's ability
to recognize more of the sources of his/her subjective experience. Con-
scious awareness of the self is formed by interactions with other people;
it seriously deteriorates during social isolation (Reemtsma 1997).
The development of consciousness in the individual reflects the devel-
opment of consciousness in the history of mankind. Not too long ago,
the predominating belief in society was that personal decisions actually
are instilled into people by gods or other exterior, non-human, good
or evil forces, deities, or devils. Homer told the story that the goddess
Athena, disguised as Telemachos' uncle Mentor, walked in front of Telemachos, guiding him on his way into the world. These convictions
about control of personal decisions by exterior forces led to dire conse-
quences. People who were thought to act under the influence of bad forces
often were tortured to rid them of these evil influences. Slowly, insight
grew that motivations develop in the individual itself, even though the
generating mechanisms remained totally obscure for a long time. Even-
tually, the rule of kings was not accepted any more as god-given, and
individual experience moved into the center of attention. But for a long
time philosophers thought about thinking and not about consciousness: in Europe, a need for, and thus the use of, the word "consciousness" arose first in the seventeenth century in England, distinguishing "consciousness" from "conscience". The German word "Bewusstsein" for consciousness was introduced in the eighteenth century (Wolff 1720). Before these times there apparently was no need to name the concept of a reflective awareness of one's own thoughts. In fact, still today the same word is used for consciousness and conscience in several European languages.
Consciousness is detectable in behavioral observations of animals. Chim-
panzees, dolphins, and crows display conscious behavior when viewing
themselves in a mirror (Gallup 1970; Prior et al. 2008). They attempt
to remove disfiguring marks (e.g., a mark on the forehead), quite like
children do it after they have reached a certain age. With increasing complexity of the brain structures, apparently an increasing ability to develop consciousness becomes available, up to the possibility of a conscious perception of one's own momentary emotional and mental state, the state of the "ego" or "self" in humans.

6.1.4 Consciousness is state dependent


The characteristics of consciousness are brain-state dependent, as are all
other brain functions (Koukkou et al. 1980). The ever-changing func-
tional state of the brain is driven by the continual interactions of the
individual with the world, by recalled memory content, and by develop-
mental and internal clock times (life-long, diurnal and shorter rhythms).
State dependency often is disregarded in discussions about conscious-
ness, thereby limiting realistic assessments. The global functional state
of the brain constrains the characteristics of consciousness not only in
pathological but also in healthy conditions. In normal healthy adults,
this state dependency is evident, for example, when the EEG is domi-
nated by slow frequencies (during sleep and under sedating drugs) and
when the EEG is dominated by fast frequencies (during excitation and
under stimulating drugs). An important case of state dependency of con-
sciousness is the recall of dreams, that is, of the information processing during
sleep (see Koukkou and Lehmann 1983). Other versions of conscious-
ness in healthy humans concern hypnosis (e.g., Katayama et al. 2007;
Cardeña et al. 2012), sleepwalking that includes meaningful behavior
(e.g., Jacobson et al. 1965), and meditation (e.g., Hebert and Lehmann
1977; Lehmann et al. 2001, 2012; Tei et al. 2009).
In summary, full self-reflecting consciousness is not continually present. Subjective experience during sleep onset and during sleep, that is, in hypnagogic hallucinations and dreaming, occurs with curtailed consciousness. Two aspects of this constrained state of consciousness stand out: the dreaming person typically does not reflect on or question the history that led to the momentarily experienced situation, but is only concerned about a way out of it. Typically, there is no looking back, no asking "how on earth did I get into this after all?"; only the view into the near future, what to do next, remains during sleep: "what am I going to do now?", exclusively looking forward in time. The dreamer also does not do any reality checking: flying is experienced as unusual but without any deeper doubting; meetings with friends who died long ago produce no surprise. This restriction of the usual wider repertoire of thought types to a limited awareness of surround and history is similar to that in awake states of high arousal-excitation when all capacities are focused on the issue at hand. In lucid dreaming, self-reflection
reportedly is wider, but the absence of reality checking as a major feature of dreaming persists.
The state of consciousness, and thus the functional state of the brain, in healthy adults also shows much quicker fluctuations in addition to the diurnal changes mentioned earlier: the fluctuations of attention in the range of about 2–6 seconds (see Woodworth and Schlosberg 1954), the subjective presence of 2–3 seconds (Pöppel 2009), the sequences of "specious moments" in the range of seconds during the stream of consciousness (James 1890), and time packages in the sub-second range as reviewed in Section 6.2 on microstates.

6.1.5 Consciousness is indivisible, but its contents are separable entities


The momentary experience of conscious awareness appears homogeneous, indivisible; there are no separable parts, similar to the electric field
of the brain. In normal wakeful states, there is conscious awareness of
only one situation. Exceptions occur during state changes or in mental
disease.
During the brief transition from sleep to wakefulness, during the
hypnopompic state, a dream experience can continue for a brief moment
in parallel with awareness of surround information; for example, seeing
the face of the alarm clock.
Noteworthy are peculiarities of consciousness in pathological condi-
tions. In schizophrenia, a patient may live in two separate worlds, feeling
little if any conflict between them ("double bookkeeping"; Bleuler 1911).
The patient may believe that he is the president of the USA and still con-
tinue to do his duty tending the hospital garden.
Split-brain patients (Gazzaniga 2000) typically present with left-hemispheric (speaking) consciousness. But the right (most often non-speaking) hemisphere may execute seemingly meaningful independent behavior that, if the patient is persuaded to explain the behavior, is interpreted by the left hemisphere with confabulations and often far-fetched associations (for an interesting case report, see Mark 1996). The
observation of double conscious processes in a split-brain patient with
bilateral language skills indicates a close relation between human con-
sciousness and language (LeDoux et al. 1977); but language skills are
not a sine qua non condition (Keller 1905).

6.1.6 Altering the state of consciousness


Consciousness is not directly accessible to willful decisions: one cannot
decide to be unconscious, but one can willfully influence it.
Obviously, consciousness can be altered voluntarily by taking drugs.


In general, many chemical or mechanical (structural) manipulations of
the brain can influence the clarity of consciousness, and chemical, func-
tional, or structural disturbances in specific cortical regions influence the
contents of consciousness.
Behavior as well as functions of consciousness (cognition, memory) can
be influenced by electric or magnetic fields of environmental strength
(Gavalas et al. 1970; Wever 1977; Preece et al. 1998; Trimmel and
Schweiger 1998; see Section 6.2 about the electric field theory of con-
sciousness).
But without such physical manipulations, altered states of consciousness can be attained willfully through training with bodily and/or mental exercises: getting oneself into a frenzy (the experience of "being beside oneself"), for example, in Dervish dance ecstasy; performing quiet meditation or shamanistic exercises (experiencing "all-oneness", "oceanic boundlessness", "ego dissolution", "letting go"); learning lucid dreaming. Willfully deciding to accept hypnotic suggestions also results in altered states of consciousness. These willfully altered states of consciousness are incorporated by specific characteristics of the brain electric field (for examples see Section 6.2 of this chapter). In fact, even willfully changing the respiratory cycle may well influence consciousness, since inspiration causes increases, and expiration decreases, of vigilance and of the frequency of EEG waves (Lehmann 1977; Chervin et al. 2005), an observation that may underlie the tendency to do respiration-attenuating exercises when initiating meditation.

6.1.7 Localizing consciousness in the brain


A prerequisite for consciousness per se is a functionally intact upper
dorsal pons region (Plum and Posner 1980). A lesion of this small
area abolishes consciousness. For complete consciousness, neural con-
nections must function to and between intact subcortical and cortical
regions where content and handling information is stored and treated
(e.g., Damasio 2000; see also Merker, this volume, Chapter 1). Local-
ized cortical lesions, permanent or temporary, destroy the possibility for
conscious percepts of the particular modality or of the type of conscious information processing in which the lesioned location is crucially involved (e.g., Brodmann's area 17 for visual percepts; right-hemispheric area 40 for body scheme; prefrontal area 10 for higher functions such as decision making and personality characteristics). If the cortical areas are
destroyed (apallic syndrome), the patient seems to be conscious but
does not react to information (except for direct spinal cord reflexes). In
sum, there is quite a bit of information about the localization of contents of consciousness in the brain, but much more is still unknown.

6.1.8 What is the developmental advantage of consciousness?


Consciousness liberates choice and decision making from automaticity. Automated decisions that occur without consciousness are fast but rigid; automated decisions are not adjusted to novel features and to contexts of the triggering information. The triggering information may even generalize inappropriately: being used to stretching out the right hand for a handshake when being introduced to a stranger may make a person do it inadvertently even though for the stranger this action may be insulting.
On the other hand, conscious decisions are slower, but they are flexible: they can be selected from a wide range of possible decisions and can take into account context information and secondary, specific aspects of the decision-requiring information, so that inappropriate decisions become less likely. Consciousness apparently makes the inner aspect of the brain functions available for search. In sum, such wider-choice, non-automated decision making, which is advantageous in novel or complex situations, has the inherent property of being associated with consciousness.
Consciousness also involves disadvantages as experienced by the subject: healthy awake humans are condemned to be conscious, which implies being aware of suffering, and having to continually make hypotheses about the meaning of all information that reaches consciousness (in order to identify food, shelter, sex, and danger).
The inherent characteristic of the mind of having to make hypotheses, combined with the possibility to formulate, transmit, and store the hypotheses in words, leads to higher and higher levels of abstraction, eventually to concepts of how unidentified forces may shape the life of people. However, and here I agree with McGinn (1993) and Pinker (1997), this high level of theorizing might well be an unintended side effect of the evolution of conscious cogitation; evolution aimed merely at optimizing the chance of survival through hypothesis building, not at establishing a department of philosophy.

6.1.9 Why is there consciousness at all?


This is one of the grammatically correct but useless questions: Con-
sciousness obviously is present if the human brain system is com-
plete and in the adequate state, just like electricity in a generator or
gravity in mass. The earth exists. Why it exists is a matter of belief sys-
tems. The question why anything exists can also be formulated correctly
in terms of grammar and syntax, but similarly, there is no meaningful
answer.

6.1.10 Consciousness and free will


Consciousness makes available as inner aspect only a small fraction of the
ongoing brain activity. The information that is available for inner aspects
concerns processes at high levels of integration. The constituting pre-
processing steps and sub-processes that lead to complete perceptions,
emotions, thoughts, or decisions remain inaccessible to consciousness.
This is one major reason why subjectively one is so certain that conscious
decisions are free, that there is free will.
How does the idea of free will come about? The brain has the system-immanent and life-supporting task and ability to detect relations between observations, to find explanations (maybe better: theories) that will link experiences into an orderly system. This is a necessary function in order to foresee and plan future action in case of danger or need. On the other hand, this function results in confabulations that are obvious in pathological conditions, for example, in split-brain patients (Mark 1996; Gazzaniga 2000) or in cases of anosognosia, where the patient may deny the existence of his/her blindness or paralysis, or may report a confabulated visual percept or limb movement. In summary, humans automatically produce theories about any information that happens to become conscious.
The intrinsic human motivation to produce explanatory theories leads to the rash assumption of the existence of free will since, as said earlier, the pre-processing steps and participating sub-processes of decision making are not available to consciousness. On the other hand, the ubiquitous need to
make decisions is clear. Also, the strong subjective feeling that decisions
are free is always very obvious, even when deciding between several
brands of spaghetti in a supermarket. That willing is experienced as free
is an unambiguous and clear experience for everybody, supported by the
experience that making decisions often is quite stressful. Thus, undoubt-
edly free will does nevertheless exist as an overwhelming and unavoidable
subjective experience (see also Perlovsky, this volume, Chapter 9).
But the subjective experience that there is no information available about how a decision came about is not a good reason to believe that there
is anything in the brain that happens without a cause, even though the
non-conscious causal chains in the brain that lead to decision making
must be extremely complex.
Moreover, it is also not a good reason to believe that, as occasionally suggested, some random generator is the basic root of free will; it is not convincing at all that randomness could possibly result in meaningful decision making.
Another point in the free will issue is the question of what willing is supposed to be free from (Charles R. Joyce, personal communication). The vernacular answer is: free from the law of causality. But then, if
free, what is it driven by? Desire? Biographic memory (e.g., Nunn 2005)?
Whatever it might be, it would again be a cause.
Free will can only be truly free if the consequences of the choice are at least roughly known, as for example when one chooses a pack
of spaghetti. Otherwise, it would be a lottery choice. But even in the
lottery scenario, one is not choosing freely but is consciously or non-
consciously influenced by some happenstance details in memory while
seemingly picking something blindly and later finding out what one ended
up with. Even worse, outcomes of conceivable alternative choices remain
in the dark forever.
If one is quite certain that there cannot be a free will, why does
the often unpleasant subjective experience of seemingly free decision
making persist nevertheless? The reason is that abstract knowledge does
not change conscious perception: even though we know for certain that the Earth orbits around the Sun, when we look out of the window in the morning, the Sun rises on the horizon. And even though we know that we have drawn two exactly parallel straight lines, as soon as we overlay a star symbol, these parallel lines appear bent.

6.2 Consciousness and brain electric activity

6.2.1 Brain functions and brain mechanisms


Following the concept that consciousness and its contents are functions/products of brains, the description, as complete as possible, of the conscious brain's structure and functional mechanisms is the goal of experimental investigations.
Conscious thoughts, ideas, percepts, or recalled events occur in frac-
tions of seconds. An adequate description of the brain mechanisms that
give rise to these subjective experiences will have to assess brain func-
tioning in this brief time range. Measurements of the brain's electric and magnetic fields present themselves as adequate material because they offer
a very high time resolution; these data can be analyzed from millisecond
to millisecond.
Fig. 6.1 Power spectra of EEG recordings during times when subjects
signaled experiencing visual hallucinations or body image disturbances.
Mean power (vertical, arbitrary units) per frequency bin (Hz, horizontal)
across the six subjects that had both types of experience after cannabis
ingestion. Dots mark frequency bins that showed significant differences
(single dot p < 0.05; double dot p < 0.025; after Koukkou and Lehmann 1976).

6.2.2 Brain electric fields and their functional macrostates


Brain electric activity conventionally is measured on the head surface
as wave shapes of the electroencephalogram (EEG) or of event-related
potentials (ERP). EEG and ERP data reflect changes of the brain's functional state with very high sensitivity. A large repertoire of wave-shape-oriented analysis methods makes it possible to produce finely grained
descriptions of the functional states, to establish taxonomies, and to gain
insights into the factors that contribute to their generation. Spectral anal-
ysis, coherence analysis, wavelet analysis, dimensionality analysis, source
analysis (e.g., LORETA functional tomography), and many others are
available.
Spectral analysis transforms the recorded EEG data into the frequency
domain under the assumption that a sine wave model is an appropriate
basis for assessing EEG information. For a given EEG wave shape record-
ing, the power spectrum describes how the energy of the time series (the
EEG recording) is distributed with wave frequency (Fig. 6.1). The EEG
power spectra yield crucial information about the state of conscious-


ness in pathological conditions (see, e.g., Cruse et al. 2011; Fellinger
et al. 2011; Goldfine et al. 2011). Elaborations of the frequency analysis
approach proved useful for control of general anesthesia during surgery.
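
To make this concrete for readers who wish to experiment, the following minimal sketch estimates the power spectrum of a single EEG channel with Welch's method. It is not the analysis pipeline of the studies cited above; the sampling rate, the window length, and the synthetic test signal are assumptions made only for the example.

import numpy as np
from scipy.signal import welch

fs = 256                       # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)   # 10 s of data
# Synthetic stand-in for an EEG trace: a 10 Hz "alpha" rhythm plus noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Welch's method averages periodograms of overlapping windows (here 2 s),
# trading frequency resolution for a more stable spectral estimate.
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

# Band power, e.g., in the classical 8-12 Hz alpha band, is obtained by
# integrating the spectral density over the band.
band = (freqs >= 8) & (freqs <= 12)
alpha_power = np.trapz(psd[band], freqs[band])
print(f"Alpha-band power: {alpha_power:.3f}")

A peak of the resulting spectrum at 10 Hz reflects the predominant wave frequency of the recording, the kind of information that Fig. 6.1 summarizes for two types of subjective experience.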
Since wave-shape-based analyses always must analyze a certain extent of time to assess wave forms, for example, as frequencies, these approaches are termed "macrostate" analyses by our group. We note that the wave-shape-based analysis approaches need to analyze more than a single time point; for example, a statement about a frequency of 2 Hz requires examining 500 ms. Adaptive segmentation of the EEG or ERP data permits parsing the recordings into macrostate epochs of quasi-stable wave patterns, typically in the range of seconds, without preselecting the duration of the time epochs to be analyzed (Bodenstein et al. 1985; Gath et al. 1991). In contrast, microstate analyses, as reviewed later, analyze single moments of time.
In almost all healthy awake young adults, a predominance of waves
at frequencies slower than eight per second is not compatible with fully
intact consciousness and intact memory functions. There are rare but
noteworthy exceptions in healthy awake persons: less than 0.1% show predominant 4–5 Hz activity that, however, reacts in the typical way with
a blocking reaction to new and unexpected information (Kuhlo et al.
1969).
Power spectra of EEG wave shape recordings as measures of brain electric macrostates are lawfully related to many behavioral measures; for example, level of vigilance, sleep stages, intensity of attention, developmental age, and so on. EEG wave shape analyses offer insight into the brain electric mechanisms of altered states of consciousness in healthy people, for example, during hypnosis (e.g., Cardeña et al. 2012) and meditation (e.g., Hebert and Lehmann 1977; Lehmann et al. 2001, 2012;
Tei et al. 2009). Even the type of predominant mentation is reflected in
the power spectra (e.g., in studies by our group: Koukkou and Lehmann
1976; Lehmann et al. 1995; Wackermann et al. 2002). Figure 6.1 illus-
trates such a case.

6.2.3 Discontinuous changes of the brain's functional


state: Functional microstates
When one measures the brain electric potential values from many loca-
tions on the head surface, a map of the potential distribution can be con-
structed for each time point (Lehmann 1971). Each map is a snapshot
of the momentary functional state of the brain as reflected by its electric
field. At each moment there is a single functional state, however complex
Fig. 6.2 Sequence of maps of momentary potential distribution on the head surface during no-task resting, at intervals of 7.8 ms (there were 128 maps per second). The entire sequence covers 85 ms. Head seen from above, nose up, left ear left. Black negative, white positive in reference to the mean of all values. Iso-potential lines in steps of 10 μV. Note the relative stability of the landscape for a few maps, and the quick change to the next landscape.

the organization of the system (Ashby 1960). The brain potential maps
describe the potential landscape of higher and lower potential values by
iso-potential lines, similar to geographical maps whose iso-altitude lines
describe mountains and valleys. Positive and negative potential values are
defined in reference to the potential value at an arbitrarily preselected
location (the "reference" location, i.e., the location where the reference electrode is attached). This arbitrary choice of a reference location
naturally does not influence the measured potential landscape, just as a
rising or falling water level does not alter a geographical landscape. Figure
6.2 shows a sequence of maps of momentary potential distributions. It
is noteworthy that at visual examination, the map series shows no wave
fronts and no wave travelling phenomena, although these aspects are
classical topics in EEG studies. Examination of series of momentary brain
potential maps shows that the mapped landscapes show brief time peri-
ods of quasi-stable spatial configurations that are concatenated by very
rapid changes of landscape (Fig. 6.2). Thus, the map series is reminis-
cent of a volcano landscape with outbreaks once here, then there. This
is illustrated by plots of the locations of extreme potentials over time
(Fig. 6.3). The plots show that these reference electrode-independent
map features occur in restricted sub-areas of the head surface (Lehmann
1971).
Data-driven analysis approaches for brain electric data can parse
the series of momentary maps into temporal segments of quasi-stable
map landscapes (Lehmann and Skrandies 1980; Lehmann et al. 1987;
Pascual-Marqui et al. 1995). We called these segments "microstates"
(Lehmann et al. 1987). Their mean duration during no-task resting is
in the range of about 100 ms. Microstates also are observed in event-
related potential (ERP) data (Lehmann and Skrandies 1980; Michel and
Lehmann 1993; Koenig et al. 1998; Gianotti et al. 2007). Figure 6.3
shows a series of maps of momentary potential distributions where the
field was mapped at successive time points of maximal field strength, and
where the terminations of identified microstates are marked.
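
For readers who want to retrace the first computational step, here is a minimal sketch of the computation of Global Field Power and of the selection of maps at successive GFP maxima, as used for Fig. 6.3. The GFP definition, the spatial standard deviation of the momentary map, follows Lehmann and Skrandies (1980); the array shapes and the random test data are assumptions for the example only.

import numpy as np

def global_field_power(maps):
    # GFP: spatial standard deviation of each momentary map.
    # `maps` is an assumed (n_times, n_electrodes) array of potentials.
    avg_ref = maps - maps.mean(axis=1, keepdims=True)  # average reference
    return np.sqrt((avg_ref ** 2).mean(axis=1))

def gfp_peaks(gfp):
    # Time points where GFP exceeds both neighbors, i.e., moments when
    # the "hilliness" of the map is maximal.
    return np.where((gfp[1:-1] > gfp[:-2]) & (gfp[1:-1] > gfp[2:]))[0] + 1

rng = np.random.default_rng(0)
maps = rng.standard_normal((1000, 19))   # e.g., 1000 samples, 19 electrodes
peak_maps = maps[gfp_peaks(global_field_power(maps))]

Because GFP is computed against the mean of all electrodes, the result does not depend on the recording reference, in line with the reference-independence of the map landscape noted above.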

+ _ _ + _

+ +
+ _ +
_ + _ _ _ +

367.2ms 414.1ms 460.9ms 507.8ms 570.3ms 640.6ms 671.9ms 710.9ms


+ _ + _ + _ +
_

_ + _ + _ + + _

757.8ms 804.7ms 851.6ms 898.4ms 945.3ms 992.2ms 1039.1ms 1156.3ms


+ _ + _

+ _
_
_ + + _ + _ + + _

1265.6ms 1304.7ms 1351.6ms 1398.4ms 1468.8ms 1507.8ms 1539.1ms 1570.3ms

Fig. 6.3 Maps of momentary potential distribution on the head surface during no-task resting. The displayed maps were selected at successive times of maximum Global Field Power values (GFP: Lehmann and Skrandies 1980), that is, at times when the hilliness of a map showed a higher value than that of the preceding and following map. Head seen from above, nose up, left ear left; iso-potential lines in steps of 10 μV; a straight line connects the location of highest potential (+ mark) with the location of lowest potential (− mark). Stars indicate the termination of a microstate, that is, a change of the map landscape that exceeded the set tolerance level. Note that for spontaneous brain activity, the continually reversing polarity of the brain electric field is irrelevant; only the spatial configuration is used for the microstate assessment.

The microstates are classified into different classes on the basis of the
spatial configuration of their electric landscapes. Four microstate classes
with specific spatial configurations of their maps of electric potential
distribution (Fig. 6.4) were observed during no-task resting conditions
(Wackermann et al. 1993; Koenig et al. 2002; Britz et al. 2010).
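
The assignment of momentary maps to such classes can be sketched as follows. The standard microstate clustering procedures (e.g., Pascual-Marqui et al. 1995) ignore map polarity; the sketch below shows only the labeling step, assigning each map to the template with which it has the highest absolute spatial correlation. The template maps and test data here are placeholders, not the normative class maps of Koenig et al. (2002).

import numpy as np

def label_microstates(maps, templates):
    # Re-reference to the average reference, then scale each map to unit
    # length so that a dot product equals the spatial correlation.
    m = maps - maps.mean(axis=1, keepdims=True)
    t = templates - templates.mean(axis=1, keepdims=True)
    m /= np.linalg.norm(m, axis=1, keepdims=True)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    # The absolute value implements the polarity invariance noted in the
    # captions of Figs. 6.3 and 6.4.
    return np.abs(m @ t.T).argmax(axis=1)   # e.g., 0..3 for classes A..D

rng = np.random.default_rng(1)
templates = rng.standard_normal((4, 19))   # placeholder class maps A-D
maps = rng.standard_normal((1000, 19))
labels = label_microstates(maps, templates)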
Physics tells us that different potential landscapes on the surface of
the head must have been generated by activity of different geometries
of the generating sources, that is, of different populations of neurons in
the brain. It is reasonable to assume that different neuronal activity will
execute different functions in information processing. Thus, one could
imagine developing a microstate dictionary that describes the function
executed by each type of microstate. Indeed, we found that different
classes of microstates are associated with different subjective experiences
or different types of information processing. We observed the results
during spontaneously occurring thinking as well as during reading of
words displayed on a computer screen. The studies are briefly reviewed
next.
[Four map panels: microstate classes A, B, C, D]
Fig. 6.4 Maps of the potential distribution on the head surface of the
four standard microstate classes during no-task resting, obtained from
496 healthy 6 to 80-year-old subjects (data of Koenig et al. 2002).
Head seen from above, nose up, left ear left; iso-potential lines in equal
microvolt steps; maps are normalized for unity Global Field Power.
Black and white areas indicate opposite polarities, but note that for
spontaneous brain activity, the continually reversing polarity of the brain
electric field is irrelevant; only the spatial configuration is used for the
microstate assessment.

6.2.4 Microstates of the spontaneous stream of consciousness


In a "day-dream" or mind-wandering study, EEG was recorded during task-free resting from 13 healthy participants. They were instructed to report briefly "what just went through your mind" whenever a gentle tone was sounded as prompt signal (Lehmann et al. 1998). This prompt tone was presented at random intervals during the continuous EEG recording; about 30 prompts were given to each subject. Independent raters classified the reports as visual-concrete thoughts (e.g., "I saw our lunch at the beach") or abstract thoughts (e.g., "I thought about the meaning of the word theory"). The last microstate before the prompt signal that
elicited the report showed significantly different landscapes of electric
potential distribution for visual-concrete and abstract reports. The last
but one microstate before the prompt signal showed no such difference.
(By definition, the last but one microstate before the prompt must have
had a potential landscape that differed from the landscape of the last
microstate.)
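
The topographic comparisons in this and the following studies rest on a measure of the difference between two map landscapes. A minimal sketch of one standard measure of this literature, Global Map Dissimilarity, is given below; the randomization statistics used in the cited studies are not reproduced here, and the function name is ours.

import numpy as np

def global_map_dissimilarity(map1, map2):
    # Compares two potential landscapes independently of their overall
    # strength: each map is average-referenced and scaled to unit Global
    # Field Power, and the GFP of their difference is returned.
    # 0 means identical landscapes; 2 means polarity-reversed landscapes.
    def normalize(v):
        v = v - v.mean()
        return v / np.sqrt((v ** 2).mean())
    d = normalize(map1) - normalize(map2)
    return np.sqrt((d ** 2).mean())

Since only the spatial configuration enters, two maps that differ merely in overall field strength yield a dissimilarity of zero.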

6.2.5 Microstates of input-driven mentation


In another study, input-driven mental states were examined where 25 par-
ticipants silently read nouns from a computer screen while their EEG was
continuously recorded (Koenig et al. 1998). The nouns either were of the
visual-concrete type, naming easily imageable items such as "apple" or "house", or of the abstract type that hardly elicits images, such as "belief"
or "theory". From time to time, a question mark was shown instead of a word, whereupon the subject had to speak the last word. Accordingly,
the subjects believed themselves to be in a memory experiment. The
EEG epochs during word display were separately averaged for visual-
concrete and abstract words (event-related potential ERP computation).
Microstate analysis of the ERP map series identified seven microstates
during the 450 ms of word display. Microstate #5 at 286–354 ms after
onset of word display showed, across participants, a significantly different
potential landscape after visual-concrete compared to abstract words.

6.2.6 External and internal input of same content is processed in the same
cortical area: Visual-concrete versus abstract thoughts
In common for both studies reviewed previously, when applying a con-
junction analysis we found (Lehmann et al. 2010) that the results for
visual-concrete and abstract thoughts were similar for information gen-
erated interiorly (spontaneous thoughts) and for information presented
from exterior sources (words read on the computer display): (1) The
microstate potential maps of visual-concrete thought content had orien-
tations of the brain electric field axis that were rotated counterclockwise
relative to the microstate potential maps of abstract thought content.
(2) The brain electric gravity center of the microstate maps was more
posterior and more right-hemispheric for visual-concrete thought content
compared to abstract thought content. (3) Subsequent LORETA func-
tional tomography conjunction analyses of the microstate data demon-
strated activation significant in common in the two studies right posterior
for visual-concrete thought content (Brodmann areas 20, 36, 37) and
left anterior for abstract thought content (Brodmann areas 47, 38, 13)
as illustrated in Fig. 6.5.

6.2.7 Microstates of emotions


In a study on processing of emotional information (Gianotti et al. 2007),
21 subjects silently read words displayed one by one on a computer
screen; similar to the earlier study, if a question mark appeared instead
of a word, the subject was to repeat the last word. The words were emo-
tionally positive or negative, for example, "joy" or "death", but were
equalized on the active-passive scale. The grand mean ERP map series
during word presentation (450 ms) was segmented into microstates.
Fourteen microstates were identified. Significant topographical differ-
ences between ERP microstates elicited by positive versus negative words
were seen in three microstates, #4, #7, and #9, at 96–122, 184–202, and
Fig. 6.5 Glass brain views of the brain sources that were active dur-
ing microstates associated with spontaneous or induced visual-concrete
imagery and during microstates associated with spontaneous or induced
abstract thought. The displayed localizations reached Fisher's p < 0.05 in
a conjunction analysis that combined the results of two studies that
investigated spontaneous unrestrained thinking and reading of visual-
concrete or abstract words (after Lehmann et al. 2010).

248–274 ms after word onset, respectively. LORETA functional brain


electric tomography in all three microstates showed a more anterior local-
ization of activity for emotionally positive than negative words. On the other
hand, in microstates #4 and #9 activity for positive as well as negative
words was clearly dominant in the left hemisphere, but in microstate #7
in the right hemisphere. Thus, the distinction between positive and nega-
tive emotional meaning occurred in the brain several times during the
short word presentation, and was incorporated in microstate time pack-
ages of different spatial organization of brain electric activity.

6.2.8 Information content versus task effects


The participants in the three studies reviewed in the preceding para-
graphs did not know that the goal of the experiments was to distin-
guish brain electric signatures of visual-concrete thoughts from those of
abstract thoughts, or to distinguish signatures of pleasant from those
of unpleasant emotions. Rather, the subjects believed they were participating in
relatively simple memory experiments. The observed differences between


spatial distributions of brain electric potential during the critical tem-
poral microstates thus were driven by the information content, and not
by tasks to imagine or to formulate a thought content or to evaluate or
experience an emotion.

6.2.9 The momentary microstate determines the fate of


incoming information
The fact that all brain functions are state-dependent also holds in the
split-second time range. Brain activity in response to incoming informa-
tion differed depending on the microstate class that accidentally existed
at the moment of the arrival of the information (Kondakor et al. 1997;
Britz and Michel 2011).

6.2.10 Split-second units of brain functioning


Based on experimental observations and theoretical considerations, it
was postulated early on that information processing in the brain occurs in
chunks in the time range of subseconds (e.g., Stroud 1955; Allport 1968;
Efron 1970; Blumenthal 1977; DiLollo 1980; Libet et al. 1983; Newell
1992; Baars 2002; Breakspear et al. 2004; Trehub 2007; VanRullen et al.
2007; Baars and Gage 2010; for philosophical conceptualizations, see,
e.g., Whitehead's (1929) "actual occasions").
Stepwise changes of brain electric activity in the sub-second range were
observed also using other methods than the presently reviewed func-
tional microstate analysis (e.g., Harter 1967; Pockett 2002b; Freeman
2003; Kenet et al. 2003; Fingelkurts and Fingelkurts 2008; Pockett et al.
2009).

6.2.11 Microstate syntax: From atoms to molecules of thought


Having described basic building blocks of conscious brain functional
states, their temporal structure comes into focus (a recent topic, see
Janoos et al. 2011, but with a long history, see Trehub 1969). If a given
microstate class incorporates a type of thought, then the sequence of
microstates of different classes (Wackermann et al. 1993) can be hypoth-
esized to incorporate strategies of cogitation. This "microstate syntax"
indeed was different in acute schizophrenics (before medication) com-
pared to healthy controls (Lehmann et al. 2005), and in believers in the
paranormal compared to skeptics (Schlegel et al. 2012).
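
One simple way to quantify such a syntax is the matrix of transition probabilities between the four classes. The sketch below assumes that the microstate sequence has already been reduced to one class label per segment; the toy label sequence is invented for the example.

import numpy as np

def transition_matrix(labels, n_classes=4):
    # Row-normalized counts of transitions between microstate classes.
    counts = np.zeros((n_classes, n_classes))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts),
                     where=rows > 0)

labels = [0, 2, 1, 3, 0, 1, 2, 3, 1, 0]   # toy sequence of classes A..D
print(np.round(transition_matrix(labels), 2))

Group differences, for example between patients and controls, would then appear as systematic differences between such matrices.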
6.2.12 Microstates in healthy and pathological states of consciousness


In several studies, microstate parameters were reported characterizing
and differentiating various healthy and pathological conditions. They
vary with the level of vigilance (wake, drowsy, rapid-eye movement sleep;
Cantero et al. 1999), in light and deep hypnosis (Katayama et al. 2007),
under centrally active medication (Lehmann et al. 1993; Michel and
Lehmann 1993; Kinoshita et al. 1995; Kikuchi et al. 2007), and in mental disorders such as Alzheimer's (Strik et al. 1997; Dierks et al. 1997),
depression (Strik et al. 1995), panic disorder (Kikuchi et al. 2011), and
schizophrenia (Koukkou et al. 1994; Koenig et al. 1999; Strelets et al.
2003; Irisawa et al. 2006; Kikuchi et al. 2007), where decreased presence
of microstates of an attention-related microstate class was associated with
auditory verbal hallucinations (Kindler et al. 2011). The spatial map of
that latter microstate class (class D of Koenig et al. 2002) correlated with
the dorsal attention-reorientation network, including the superior and
middle frontal gyrus as well as the superior and inferior parietal lobules.

6.2.13 Microstates and fMRI resting state networks and default states
The four typical microstate classes were also observed in studies ana-
lyzing simultaneous recordings of multichannel EEG and of functional
magnetic resonance imaging (fMRI) (Britz et al. 2010; Musso et al. 2010)
that demonstrated microstate-associated networks which corresponded
to fMRI-described resting state networks (see also Yuan et al. 2012).

6.2.14 Consciousness and its building blocks


The microstate studies had shown that brain work as a whole is not a
continuous process over time, not a homogeneous stream, but that it consists of distinct temporal packages. This does not directly agree with the naïve subjective impression of a continuously persisting self.
The microstate results support the concept that completed brain func-
tions of higher order such as thoughts are incorporated in temporal units
of brain work, in the microstates (Lehmann et al. 1987, 1998, 2009).
The assumption is that a microstate incorporates a consciously experi-
enced momentary thought that includes its automated sub-processing
steps. It appears that the time window during which spatially distant
brain processes are accepted as part of a momentary microstate lasts
between about 75 and 150 ms. This would constitute the temporal unit
of a conscious self-experience. The time range corresponds to that of
earlier observations and theories which postulated packeted, step-wise
information processing in the brain as briefly reviewed earlier. The sub-processes would correspond to the momentary contents in Baars' (1997)
global workspace. Microstates as temporal building blocks of brain
work incorporate identifiable steps and modes of brain information pro-
cessing. We propose that microstates of the brain electric field are the
valid candidates for atoms of thoughts and emotions.
One can speculate that at each moment in time, the spatial configuration of the brain electric field incorporates the co-existence of the simultaneous neural sub-processes into an indivisible whole, indivisible like consciousness, which does not have parts: a homogeneous element needs to be the outer aspect of what is internally experienced as a continuous stream of conscious experience or as a smooth-surface percept.
The brain electric field is an obvious candidate for this outer aspect
(Lehmann 1990). Related proposals were advanced by Pockett (1999,
2002a) and McFadden (2002). Indeed, electric or magnetic fields of
normal environmental strength can influence behavior, cognition, and
memory as reviewed in Section 6.1 of this chapter.
In summary, the claim is that the brain electric field with its dynamic development over time is the measurable, exterior aspect of a subject's conscious mental life. Inner aspects can be subjectively experienced as conscious, complete thoughts with emotional valence during the split-second, EEG-definable, functional microstates of the brain.

6.2.15 Discontinuous brain activity and the stream of consciousness


Subjective experience based on attentive, trained introspection agrees
with the observed brain electrical activity as a sequence of distinct
microstates, speaking against the concept of an unstructured stream of
consciousness (see also Kounios and Smith 1995): Examples are sud-
den unrelated intruding thoughts (Klinger 1978; Uleman and Bargh
1989), new ideas that occur out of the blue, or torrents of very dif-
ferent troublesome worries on bad days. Also early on, apparently based
on subjective insight during well-trained meditation, the Buddhist liter-
ature spoke about the impermanence of everything, including the con-
scious mind that continually re-arises in unconnected, distinct events
(in the Assutavā Sutta (12.61), p. 595 in Buddha 2000). A review of
Buddhist teaching (Barendregt 2006) refers to subsecond durations, i.e.,
the time range of microstates. Meditation might produce a higher sensitivity for experiences because it generally lowers the functional connectivity between brain regions; therefore experiences are handled more independently and influence each other less (Lehmann et al. 2012).

The unawareness of the rapid succession of thoughts in introspection-naïve persons is not comparable to the impossibility of perceiving individual
movie frames, or light flashes above flicker fusion frequency, because no
training can overcome those limits. I suppose that one learns early on in life that the everlasting sequence of new percepts and thoughts that briefly live in working memory belongs to one and the same system, conceivably backed by the continually updated sensory percept of one's own body scheme, which over time shows only very slow changes. In general, discontinuities in event sequences are not readily perceived, as, for example, in change blindness (e.g., Simons and Chabris 1999). The sequence of
discontinuous visual percepts of a stable surround when one moves the
eyes is a classical case. Or, when viewing a movie, an entire lifetime can
be shown in two hours without anyone feeling that there is something
missing unless the author deliberately aims to interrupt events.

REFERENCES
Allport D. A. (1968). Phenomenal simultaneity and the perceptual moment hypothesis. Br J Psychol 59(4):395–406.
Ashby W. R. (1960). Design for a Brain: The Origin of Adaptive Behavior, 2nd Edn. New York: John Wiley & Sons, Inc.
Baars B. J. (1997). In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press.
Baars B. J. (2002). Behaviorism redux? Trends Cogn Sci 6(6):268–269.
Baars B. J. and Gage N. M. (2010). Cognition, Brain and Consciousness: An Introduction to Cognitive Neuroscience, 2nd Edn. London: Academic Press.
Barendregt H. (2006). The Abhidhamma model AM0 of consciousness and some of its consequences. In Kwee M. G. T., Gergen K. J., and Koshikawa F. (eds.) Buddhist Psychology: Practice, Research and Theory. Taos, NM: Taos Institute Publishing, pp. 331–349.
Buddha (2000). The Connected Discourses of the Buddha: A New Translation of the Samyutta Nikaya. Trans. Bodhi B. Boston, MA: Wisdom Publications.
Bleuler E. (1911). Dementia Praecox oder Gruppe der Schizophrenien. Leipzig: Deuticke.
Blumenthal A. L. (1977). The Process of Cognition. Englewood Cliffs, NJ: Prentice-Hall.
Boas F. (1920). The methods of ethnology. Am Anthropol, New Series 22(4):311–321.
Bodenstein G., Schneider W., and Malsburg C. V. (1985). Computerized EEG pattern classification by adaptive segmentation and probability-density-function classification. Description of the method. Comput Biol Med 15(5):297–313.
Breakspear M., Williams L. M., and Stam C. J. (2004). Topographic analysis of phase dynamics in neural systems reveals formation and dissolution of dynamic cell assemblies. J Comput Neurosci 16:49–68.
Britz J. and Michel C. M. (2011). State-dependent visual processing. Front Psychol 2: article 370. URL: www.frontiersin.org/Perception Science/10.3389/fpsyg.2011.00370/full (accessed February 28, 2013).
Britz J., van de Ville D., and Michel C. M. (2010). BOLD correlates of EEG topography reveal rapid resting-state network dynamics. NeuroImage 52(4):1162–1170.
Cantero J. L., Atienza M., Salas R. M., and Gomez C. M. (1999). Brain spatial microstates of human spontaneous alpha activity in relaxed wakefulness, drowsiness period, and REM sleep. Brain Topogr 11(4):257–263.
Cardeña E., Lehmann D., Faber P. L., Jonsson P., Milz P., et al. (2012). EEG sLORETA functional imaging during hypnotic arm levitation and voluntary arm lifting. Int J Clin Exp Hypn 61(1):31–53.
Chervin R. D., Burns J. W., Ruzicka D. L., and Michael S. (2005). Electroencephalographic changes during respiratory cycles predict sleepiness in sleep apnea. Am J Respir Crit Care Med 171(6):652–658.
Cruse D., Chennu S., Chatelle C., Bekinschtein T. A., Fernandez-Espejo D., Pickard J. D., et al. (2011). Bedside detection of awareness in the vegetative state: A cohort study. Lancet 378(9809):2088–2094.
Damasio A. R. (2000). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. London: Vintage Random House.
Dehaene S. and Changeux J.-P. (2011). Experimental and theoretical approaches to conscious processing. Neuron 70(2):200–227.
Dierks T., Jelic V., Julin P., Maurer K., Wahlund L. O., Almkvist O., et al. (1997). EEG-microstates in mild memory impairment and Alzheimer's disease: Possible association with disturbed information processing. J Neural Transm 104(4–5):483–495.
DiLollo V. (1980). Temporal integration in visual memory. J Exp Psychol Gener 109:75–97.
Efron R. (1970). The minimum duration of a perception. Neuropsychologia 8(1):57–63.
Fellinger R., Klimesch W., Schnakers C., Perrin F., Freunberger R., Gruber W., et al. (2011). Cognitive processes in disorders of consciousness as revealed by EEG time-frequency analyses. Clin Neurophysiol 122(11):2177–2184.
Fingelkurts A. A. and Fingelkurts A. A. (2008). Brain-Mind Operational Architectonics Imaging: Technical and Methodological Aspects. Open Neuroimag J 2:73–93.
Freeman W. J. (2003). Evidence from human scalp electroencephalograms of global chaotic itinerancy. Chaos 13:1067–1077.
Gallup G. G., Jr. (1970). Chimpanzees: Self-recognition. Science 167(3914):86–87.
Gath I., Michaeli A., and Feuerstein C. (1991). A model for dual channel segmentation of the EEG signal. Biol Cybern 64(3):225–230.
Gavalas R. J., Walter D. O., Hamer J., and Adey W. R. (1970). Effect of low-level, low-frequency electric fields on EEG and behavior in Macaca nemestrina. Brain Res 18(3):491–501.
Gazzaniga M. S. (2000). Cerebral specialization and interhemispheric communication: Does the corpus callosum enable the human condition? Brain 123(7):1293–1326.
Gianotti L. R. R., Faber P. L., Pascual-Marqui R. D., Kochi K., and Lehmann D. (2007). Processing of positive versus negative emotional words is incorporated in anterior versus posterior brain areas: An ERP microstate (LORETA) study. Chaos and Complexity Letters 2(2–3):189–211.
Goldfine A. M., Victor J. D., Conte M. M., Bardin J. C., and Schiff N. D. (2011). Determination of awareness in patients with severe brain injury using EEG power spectral analysis. Clin Neurophysiol 122(11):2157–2168.
Harter M. R. (1967). Excitability cycles and cortical scanning: A review of two hypotheses of central intermittency in perception. Psychol Bull 68(1):47–58.
Haynes J. D. (2009). Decoding visual consciousness from human brain signals. Trends Cogn Sci 13(5):194–202.
Haynes J. D. (2011). Decoding and predicting intentions. Ann NY Acad Sci 1224:9–21.
Hebert R. and Lehmann D. (1977). Theta bursts: An EEG pattern in normal subjects practicing the transcendental meditation technique. Electroenceph Clin Neurophysiol 42(3):397–405.
Irisawa S., Isotani T., Yagyu T., Morita S., Nishida K., Yamada K., et al. (2006). Increased omega complexity and decreased microstate duration in nonmedicated schizophrenic patients. Neuropsychobiol 54(2):134–139.
Jacobson A., Kales A., Lehmann D., and Zweizig J. R. (1965). Somnambulism: All-night EEG studies. Science 148:975–977.
James W. (1890). The Principles of Psychology, Vol. 1. Reprinted. New York: Dover Publications, 1950.
Janoos F., Machiraju R., Singh S., and Morocz I. A. (2011). Spatio-temporal models of mental processes from fMRI. Neuroimage 57(2):362–377.
Jung C. (1973). Collected Works of C. G. Jung, Vol. 2: Experimental Researches. Princeton University Press, pp. 3–479.
Katayama H., Gianotti L. R. R., Isotani T., Faber P. L., Sasada K., et al. (2007). Classes of multichannel EEG microstates in light and deep hypnotic conditions. Brain Topography 20(1):7–14.
Keller H. (1905). The Story of My Life. New York: Doubleday Page.
Kenet T., Bibitchkov D., Tsodyks M., Grinvald A., and Arieli A. (2003). Spontaneously emerging cortical representations of visual attributes. Nature 425(6961):954–956.
Kikuchi M., Koenig T., Munesue T., Hanaoka A., Strik W., Dierks T., et al. (2011). EEG microstate analysis in drug-naive patients with panic disorder. PLoS One 6(7):e22912.
Kikuchi M., Koenig T., Wada Y., Higashima M., Koshino Y., et al. (2007). Native EEG and treatment effects in neuroleptic-naïve schizophrenic patients: Time and frequency domain approaches. Schizophr Res 97(1–3):163–172.
Kindler J., Hubl D., Strik W. K., Dierks T., and Koenig T. (2011). Resting state EEG in schizophrenia: Auditory verbal hallucinations are related to shortening of specific microstates. Clin Neurophysiol 122(6):1179–1182.
Kinoshita T., Strik W. K., Michel C. M., Yagyu T., Saito M., and Lehmann D. (1995). Microstate segmentation of spontaneous multichannel EEG map series under Diazepam and Sulpiride. Pharmacopsychiatry 28:51–55.
Klinger E. (1978). Modes of normal conscious flow. In Pope K. S. and Singer J. L. (eds.) The Stream of Consciousness. London: Plenum Press, pp. 91–116.
Koenig T., Kochi K., and Lehmann D. (1998). Event-related electric microstates of the brain differ between words with visual and abstract meaning. Electroenceph Clin Neurophysiol 106:535–546.
Koenig T., Lehmann D., Merlo M. C. G., Kochi K., Hell D., and Koukkou M. (1999). A deviant EEG brain microstate in acute, neuroleptic-naive schizophrenics at rest. Europ Arch Psychiat Clin Neurosci 249:205–211.
Koenig T., Prichep L. S., Lehmann D., Valdes-Sosa P., Braeker E., Kleinlogel H., et al. (2002). Millisecond by millisecond, year by year: Normative EEG microstates and developmental stages. NeuroImage 16:41–48.
Kondakor I., Lehmann D., Michel C. M., Brandeis D., Kochi K., and Koenig T. (1997). Prestimulus EEG microstates influence visual event-related potential microstates in field maps with 47 channels. J Neural Transm (Gen Sect) 104(2–3):161–173.
Koukkou M. and Lehmann D. (1976). Human EEG spectra before and during cannabis hallucinations. Biol Psychiat 11(6):663–677.
Koukkou M. and Lehmann D. (1983). Dreaming: The functional state shift hypothesis, a neuropsychophysiological model. Brit J Psychiat 142(3):221–231.
Koukkou M., Lehmann D., and Angst J. (eds.) (1980). Functional States of the Brain: Their Determinants. Amsterdam: Elsevier.
Koukkou M., Lehmann D., Strik W. K., and Merlo M. C. (1994). Maps of microstates of spontaneous EEG in never-treated acute schizophrenia. Brain Topography 6(3):251–252.
Kounios J. and Smith R. W. (1995). Speed-accuracy decomposition yields a sudden insight into all-or-none information processing. Acta Psychol 90(1–3):229–241.
Kuhlo W., Heintel H., and Vogel F. (1969). The 4–5 c/sec rhythm. Electroenceph Clin Neurophysiol 26(6):613–618.
LeDoux J. E., Wilson D. H., and Gazzaniga M. S. (1977). A divided mind: Observations on the conscious properties of the separated hemispheres. Ann Neurol 2(5):417–421.
Lehmann D. (1971). Multichannel topography of human alpha EEG fields. Electroenceph Clin Neurophysiol 31(5):439–449.
Lehmann D. (1977). Cortical activity and phases of the respiratory cycle. Proc. 18th Int. Congress, International Society for Neurovegetative Research, Tokyo, Japan, pp. 87–89. URL: http://dx.doi.org/10.5167/uzh-77939 (accessed February 28, 2013).
Lehmann D. (1990). Brain electric microstates and cognition: The atoms of thought. In John E. R. (ed.) Machinery of the Mind. Boston, MA: Birkhäuser, pp. 209–224.
Lehmann D., Faber P. L., Achermann P., Jeanmonod D., Gianotti L. R. R., and Pizzagalli D. (2001). Brain sources of EEG gamma frequency during volitionally meditation-induced, altered states of consciousness, and experience of the self. Psychiatry Res: Neuroimaging 108(2):111–121.
Lehmann D., Faber P. L., Galderisi S., Herrmann W. M., Kinoshita T., Koukkou M., et al. (2005). EEG microstate duration and syntax in acute, medication-naïve, first-episode schizophrenia: A multi-center study. Psychiatry Res Neuroimaging 138(2):141–156.
Lehmann D., Faber P. L., Tei S., Pascual-Marqui R. D., Milz P., and Kochi K. (2012). Reduced functional connectivity between cortical sources in five meditation traditions detected with lagged coherence using EEG tomography. Neuroimage 60(2):1574–1586.
Lehmann D., Grass P., and Meier B. (1995). Spontaneous conscious covert cognition states and brain electric spectral states in canonical correlations. Int J Psychophysiol 19(1):41–52.
Lehmann D., Ozaki H., and Pal I. (1987). EEG alpha map series: Brain micro-states by space-oriented adaptive segmentation. Electroenceph Clin Neurophysiol 67(3):271–288.
Lehmann D., Pascual-Marqui R. D., and Michel C. (2009). EEG microstates. Scholarpedia 4(3):7632. URL: http://goo.gl/uks7i (accessed February 28, 2013).
Lehmann D., Pascual-Marqui R. D., Strik W. K., and Koenig T. (2010). Core networks for visual-concrete and abstract thought content: A brain electric microstate analysis. NeuroImage 49(1):1073–1079.
Lehmann D. and Skrandies W. (1980). Reference-free identification of components of checkerboard-evoked multichannel potential fields. Electroenceph Clin Neurophysiol 48(6):609–621.
Lehmann D., Strik W. K., Henggeler B., Koenig T., and Koukkou M. (1998). Brain electric microstates and momentary conscious mind states as building blocks of spontaneous thinking: I. Visual imagery and abstract thoughts. Int J Psychophysiol 29(1):1–11.
Lehmann D., Wackermann J., Michel C. M., and Koenig T. (1993). Space-oriented EEG segmentation reveals changes in brain electric field maps under the influence of a nootropic drug. Psychiatry Res Neuroimaging 50(4):275–282.
Lévi-Strauss C. (1955). Tristes Tropiques. Paris: Plon.
Libet B., Gleason C. A., Wright E. W., and Pearl D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential). The unconscious initiation of a freely voluntary act. Brain 106:623–642.
Mark V. (1996). Conflicting communicative behavior in a split-brain patient: Support for dual consciousness. Chapter 12 in Hameroff S. R., Kaszniak A. W., and Scott A. C. (eds.) Toward a Science of Consciousness: The First Tucson Discussions and Debates. Cambridge, MA: MIT Press, pp. 189–196.
McFadden J. (2002). Synchronous firing and its influence on the brain's electromagnetic field: Evidence for an electromagnetic field theory of consciousness. J Consciousness Stud 9(4):23–50.
McGinn C. (1993). Problems in Philosophy: The Limits of Inquiry. Oxford: Blackwell.
Michel C. M. and Lehmann D. (1993). Single doses of piracetam affect 42-channel event-related potential microstate maps in a cognitive paradigm. Neuropsychobiol 28(4):212–221.
Musso F., Brinkmeyer J., Mobascher A., Warbrick T., and Winterer G. (2010). Spontaneous brain activity and EEG microstates. A novel EEG/fMRI analysis approach to explore resting-state networks. Neuroimage 52(4):1149–1161.
Newell A. (1992). Précis of Unified Theories of Cognition. Behav Brain Sci 15:425–492.
Nunn C. (2005). De la Mettrie's Ghost: The Story of Decisions. New York: Macmillan.
Pascual-Marqui R. D., Michel C. M., and Lehmann D. (1995). Segmentation of brain electrical activity into microstates: Model estimation and validation. IEEE T Bio-Med Eng 42:658–665.
Pinker S. (1997). How the Mind Works. London: Penguin.
Plum F. and Posner J. B. (1980). The Diagnosis of Stupor and Coma, 3rd Edn. Philadelphia, PA: FA Davis.
Pockett S. (1999). Anesthesia and the electrophysiology of auditory consciousness. Conscious Cogn 8:45–61.
Pockett S. (2002a). Difficulties with the electromagnetic field theory of consciousness. J Consciousness Stud 9(4):51–56.
Pockett S. (2002b). On subjective back-referral and how long it takes to become conscious of a stimulus: A reinterpretation of Libet's data. Conscious Cogn 11(2):144–161.
Pockett S., Bold G. E., and Freeman W. J. (2009). EEG synchrony during a perceptual-cognitive task: Widespread phase synchrony at all frequencies. Clin Neurophysiol 120(4):695–708.
Pöppel E. (2009). Pre-semantically defined temporal windows for cognitive processing. Philos Trans R Soc Lond B Biol Sci 364:1887–1896.
Preece A. W., Wesnes K. A., and Iwi G. R. (1998). The effect of a 50 Hz magnetic field on cognitive function in humans. Int J Radiat Biol 74(4):463–470.
Prior H., Schwarz A., and Güntürkün O. (2008). Mirror-induced behaviour in the magpie (Pica pica): Evidence of self-recognition. PLoS Biology 6(8):e202.
Reemtsma J. P. (1997). Im Keller. Hamburg: Hamburger Edition (Hamburger Institut für Sozialforschung), (ISBN 3-930908-29-8).
Schlegel F., Lehmann D., Faber P. L., Milz P., and Gianotti L. R. (2012). EEG microstates during resting represent personality differences. Brain Topogr 25(1):20–26.
Simons D. J. and Chabris C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception 28:1059–1074.
Strelets V., Faber P. L., Golikova J., Novototsky-Vlasov V., Koenig T., Gianotti L. R. R., et al. (2003). Chronic schizophrenics with positive symptomatology have shortened EEG microstate durations. Clin Neurophysiol 114(11):2043–2051.
Strik W. K., Chiaramonti R., Muscas G. C., Paganini M., Mueller T. J., Fallgatter A. J., et al. (1997). Decreased EEG microstate duration and anteriorisation of the brain electrical fields in mild and moderate dementia of the Alzheimer type. Psychiatry Res 75(3):183–191.
Strik W. K., Dierks T., Becker T., and Lehmann D. (1995). Larger topographical variance and decreased duration of brain electric microstates in depression. J Neural Transm (Gen Sect) 99:213–222.
Stroud J. M. (1955). The fine structure of psychological time. In Quastler H. (ed.) Information Theory in Psychology. Glencoe, IL: Free Press.
Tei S., Faber P. L., Lehmann D., Tsujiuchi T., Kumano H., Pascual-Marqui R. D., et al. (2009). Meditators and non-meditators: EEG source imaging during resting. Brain Topography 22(3):158–165.
Trehub A. (1969). A Markov model for modulation periods in brain output. Biophys J 9(7):965–969.
Trehub A. (2007). Space, self, and the theater of consciousness. Conscious Cogn 16(2):310–330.
Trimmel M. and Schweiger E. (1998). Effects of an ELF (50 Hz, 1 mT) electromagnetic field (EMF) on concentration in visual attention, perception and memory including effects of EMF sensitivity. Toxicol Lett 96–97:377–382.
Uleman J. S. and Bargh J. A. (eds.) (1989). Unintended Thought. New York: Guilford Press.
Van Rullen R., Carlson T., and Cavanagh P. (2007). The blinking spotlight of attention. Proc Natl Acad Sci USA 104(49):19204–19209.
Wackermann J., Lehmann D., Michel C. M., and Strik W. K. (1993). Adaptive segmentation of spontaneous EEG map series into spatially defined microstates. Int J Psychophysiol 14(3):269–283.
Wackermann J., Pütz P., Büchi S., Strauch I., and Lehmann D. (2002). Brain electrical activity and subjective experience during altered states of consciousness: Ganzfeld and hypnagogic states. Int J Psychophysiol 46:123–146.
Wever R. (1977). Effects of low-level, low-frequency fields on human circadian rhythms. Neurosci Res Program Bull 15(1):39–45.
Whitehead A. N. (1929). Process and Reality: An Essay in Cosmology. New York: Macmillan.
Wolff C. (1720). Vernünfftige Gedancken von Gott, der Welt und der Seele des Menschen, auch allen Dingen überhaupt. Halle, Germany: Rengerische Buchhandlung.
Woodworth R. S. and Schlosberg H. (1954). Experimental Psychology. New York: Holt.
Yuan H., Zotev V., Phillips R., Drevets W. C., and Bodurka J. (2012). Spatiotemporal dynamics of the brain at rest: Exploring EEG microstates as electrophysiological signatures of BOLD resting state networks. NeuroImage 60(4):2062–2072.
7 A foundation for the scientific study
of consciousness

Arnold Trehub

7.1 Dual-aspect monism 219


7.1.1 Bridging principle 220
7.1.2 A working definition of consciousness 222
7.2 The retinoid system 222
7.3 Empirical evidence 225
7.3.1 Perceiving 3D in 2D depictions 225
7.3.2 Distortion of perceived shape 229
7.4 Conclusion 229

7.1 Dual-aspect monism


Each of us holds an inviolable secret: the secret of our inner world.
It is inviolable not because we vouch never to reveal it, but because,
try as we may, we are unable to express it in full measure. The inner
world, of course, is our own conscious experience. How can science
explain something that must always remain hidden? Is it possible to
explain consciousness as a natural biological phenomenon? Although the
claim is often made that such an explanation is beyond the grasp of
science, many investigators believe, as I do, that we can provide such an
explanation within the norms of science.
However, there is a peculiar difficulty in dealing with phenomenal con-
sciousness as an object of scientific study because it requires us to sys-
tematically relate third-person descriptions or measures of brain events to
first-person descriptions or measures of phenomenal content. We gener-
ally think of the former as objective descriptions and the latter as subjec-
tive descriptions. Because phenomenal descriptors and physical descrip-
tors occupy separate descriptive domains, one cannot assert a formal
identity when describing any instance of a subjective phenomenal aspect
in terms of an instance of an objective physical aspect, in the language
of science. We are forced into accepting some descriptive slack. On the
assumption that the physical world is all that exists, and if we cannot assert
an identity relationship between a first-person event and a corresponding
third-person event, how can we usefully explain phenomenal experience
in terms of biophysical processes? I suggest that we proceed on the basis of the following points:
1. Some descriptions are made public; that is, in the third-person domain
(3 pp).
2. Some descriptions remain private; that is, in the first-person domain
(1 pp).
3. All scientific descriptions are public (3 pp).
4. Phenomenal experience (consciousness) is constituted by brain activity
that, as an object of scientific study, is in the 3 pp domain.
5. All descriptions are selectively mapped to egocentric patterns of brain
activity in the producer of a description and in the consumer of a
description (Trehub 1991, 2007, 2011).
6. The egocentric pattern of brain activity, the phenomenal experience to which a word or image in any description is mapped, is the referent of that word or image.
7. But a description of phenomenal experience (1 pp) cannot be reduced
to a description of the egocentric brain activity by which it is con-
stituted (there can be no identity established between descriptions)
because private events and public events occupy separate descriptive
domains.
It seems to me that this state of affairs is properly captured by the
metaphysical stance of dual-aspect monism (see Fig. 7.1) where private
descriptions and public descriptions are separate accounts of a common
underlying physical reality (Velmans 2009; Pereira et al. 2010). If this
is the case, then to properly conduct a scientific exploration of con-
sciousness we need a bridging principle to systematically relate public
phenomenal descriptions to private phenomenal descriptions.

7.1.1 Bridging principle


Science is a pragmatic enterprise; I think the bar is set too high if we
demand a logical identity relationship between brain processes and the
content of consciousness. The problem we face in arriving at a physi-
cal explanation of consciousness resides in the relationship between the
objective third-person experience and the subjective first-person expe-
rience. It is here that I suggest that simple correlation will not suffice.
I have argued that a bridging principle for the empirical investigation
of consciousness should systematically relate salient analogs of conscious
content to biophysical processes in the brain, and that our scientific objec-
tive should be to develop theoretical models that can be demonstrated
to generate biophysical analogs of subjective experience (conscious

[Figure 7.1: schematic of persons 1 through n, each spanning a PUBLIC ASPECT (3rd-person events/descriptions) above and a PRIVATE ASPECT (1st-person content/descriptions) below, with W, w, and I! as labelled in the caption.]

Fig. 7.1 Dual-aspect monism. Circles 1 through n are individual persons. Within the brain of each person is a transparent representation
(w) of the physical world (W) from an egocentric perspective (I!).
W includes all persons. Each person has a phenomenal experience
of the world (W) from his/her privileged egocentric perspective (pri-
vate aspect). At the same time, any person might be included in the
public aspect. Interpersonal communications, including all scientific
contentions, take place in W.

content). The bridging principle that I have proposed (Pereira et al. 2010) is succinctly stated as follows:

For any instance of conscious content, there is a corresponding analog in the biophysical
state of the brain.

In considering the biological foundation of consciousness, I stress corresponding analogs in the brain rather than corresponding propositions
because propositions, as such, are symbolic structures without salient
features that are similar to the imagistic features of the contents of con-
sciousness. Conscious contents have qualities, inherent features that can
be shared by analogical events in the brain but not by propositional events
in the brain (Trehub 2007). Notice, however, that inner speech, evoked
by sentential propositions, has analogical properties; that is, sub-vocal auditory images (Trehub 1991).

7.1.2 A working definition of consciousness


What is consciousness? One of the problems in the scientific investiga-
tion of consciousness is the multiplicity of notions about how to define
consciousness (Pereira and Ricke 2009). I have taken the following as my
working definition of consciousness (Trehub 2011):
Consciousness is a transparent brain representation of the world from a privileged ego-
centric perspective.

Brain representations are transparent because they are about one's world and are not experienced as the activity of one's brain. A conscious brain
representation is privileged because no one other than the owner of the
egocentric space can experience its contents from the same perspective.
These conditions are definitive of subjectivity.

7.2 The retinoid system


In the effort to understand the underlying brain mechanisms of con-
sciousness, the retinoid model seems to provide a particularly fruit-
ful approach. The structural and dynamic properties of this theoreti-
cal model have successfully explained previously inexplicable conscious
experiences and have predicted novel kinds of conscious events (Trehub
1991, 2007, 2013).
We see the world as a stable, coherent arrangement of objects and envi-
ronmental features in a spatially extended layout. But on any given visual
fixation, our window of sharp foveal vision clearly registers a region of
only 2–5° of the scene in front of us. Saccadic eye movements present us
with a sequence of scattered glimpses of our spatially extended visual
environment where all sharply defined visual stimuli are superposed
on the fovea. How can the visual system disentangle its fovea-centered
images and construct an integrated brain representation of its surround-
ing environment, not in a fovea-centered frame, but in an egocentric
spatial frame? As an answer to this question I proposed the existence
of a dynamic representational system of brain mechanisms that I desig-
nated as the retinoid system (Trehub 1977). Its putative structural and
dynamic properties enable it to register and appropriately integrate dis-
parate foveal stimuli into an egocentric representation of an extended
3D frontal scene, as well as perform many other useful perceptual and
higher cognitive functions. Neuronal details of the retinoid system have
been modeled and tested in computer simulations and psychophysical experiments (Trehub 1977, 1978, 1991, 2007).
For processes in the visual modality, the retinoid system registers infor-
mation in visual space and projects afferents to higher visual centers. It
organizes successive retinocentric visual inputs into coherent representa-
tions of object layout in 3D space. It also receives input from higher visual
centers and can serve as a visual scratch pad with spatially organized
patterns of excitation stored as short-term memory. The mechanism of
temporary storage is assumed to be in the form of retinotopically and
spatiotopically organized arrays of excitatory autaptic neurons. These
neurons have their own axon collaterals in feedback synapse with their
own dendrites or cell body (van der Loos and Glaser 1972; Lübke et al. 1996; Tamás et al. 1997). An autaptic cell that receives a transitory
suprathreshold stimulus will continue to fire for some period of time if
it is properly biased by another source of subthreshold excitatory input.
Thus a sheet of autaptic neurons can represent by its sustained discharge
pattern any momentary input pattern for as long as diffuse priming exci-
tation (excitatory bias) is sustained (up to the limit of cell fatigue). If
the priming background input is terminated or sufficiently reduced, the
discharge pattern that represents the stimulus on the retinoid will rapidly
decay (see Trehub 1991, Figure 2.5). The problem of registering and
combining separate foveal stimuli into a proper unified representation
of a larger real-world scene can be solved by a layered system of inter-
connected retinoids acting as a dynamic postretinal 3D buffer (Trehub
1991).
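
The storage property attributed to autaptic cells can be illustrated with a minimal discrete-time simulation (a sketch under assumed parameters; the threshold, feedback weight, and bias values below are illustrative choices of mine, not taken from the retinoid model's specification): a cell that has fired keeps re-exciting itself through its autapse for as long as a subthreshold priming bias is present, and falls silent once the bias is withdrawn.

    # Minimal autaptic-cell sketch: self-feedback sustains firing only while
    # a diffuse priming bias is present. All parameters are illustrative.
    THRESHOLD = 1.0   # firing threshold
    AUTAPSE = 0.6     # self-feedback weight (subthreshold on its own)
    BIAS_ON = 0.5     # diffuse priming excitation (subthreshold on its own)

    def step(fired_last, bias, external=0.0):
        """The cell fires if autaptic feedback plus bias plus input reaches threshold."""
        drive = (AUTAPSE if fired_last else 0.0) + bias + external
        return drive >= THRESHOLD

    fired, trace = False, []
    for t in range(12):
        bias = BIAS_ON if t < 8 else 0.0      # priming bias removed at t = 8
        external = 1.0 if t == 2 else 0.0     # one transitory suprathreshold input
        fired = step(fired, bias, external)
        trace.append(int(fired))
    print(trace)  # [0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]

A sheet of such cells would hold a momentary input pattern as a sustained discharge pattern, and lose it when the priming excitation is reduced, as described above.
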
The retinoid theory of consciousness proposes that our entire phenom-
enal world is represented by the patterns of excitation in a spatiotopic 3D
organization of autaptic neurons which are part of the retinoid system.
A key feature of this retinoid space is that it is organized around a fixed
cluster of autaptic cells which constitute the neuronal origin, the 0,0,0 (X, Y, Z) coordinate, of its volumetric neuronal structure. All phenome-
nal representations are constituted by patterns of autaptic-cell excitation
on the Z-planes of retinoid space (see Fig. 7.2). I have proposed that
the fixed spatial coordinate of origin in the Z-plane structure can be
thought of as one's self-locus in one's phenomenal world, and I designate
this central cluster of neurons as the core self (I!) (Trehub 1991, 2007).
In the retinoid model, retinoid space is the space of all of our conscious
experience; therefore, vision should be understood as only one of the
sensory modalities that project content into our egocentrically organized
phenomenal world. All of our exteroceptive and interoceptive sensory
modalities can contribute to our phenomenal experience, as shown in
Fig. 7.2.

[Figure 7.2: the 3-D RETINOID (egocentric space), with Z-planes running from nearest to farthest, the Self Locus at the origin, and the I-token (I!) reciprocally linked to Sensory/Perceptual Experience, Needs & Motives, Plans & Actions, Beliefs, and Recollections.]

Fig. 7.2 The retinoid system. The self-locus anchors the I-token (I!) to
the retinoid origin of egocentric space. I! has reciprocal synaptic links
to all sensory/cognitive processes. The retinoid system is the brain's
substrate for the first-person perspective; that is, subjectivity (Trehub
2007).

In psychology, it is common to refer to a spotlight of attention as a selective attention function that plays a critical role in our cognitive
activity. But how can the brain actually perform selective attention? I have
proposed the minimal structure and dynamics of a neuronal shift-control
mechanism that can utilize the fixed tonic excitation of the retinoid's
self-locus neurons (I!) to move a spotlight of excitation to any targeted
region of retinoid space (e.g., see Trehub 1991, Figures 4.2 and 4.3).
In Figure 7.2, the arrows projecting from the self-locus to different regions
of retinoid space represent directed excursions of selective attention. This
projection of self-locus excitation is designated as the heuristic self-locus
(I!′) because it should be understood as an exploratory brain event that aids in learning, discovery, or problem solving (Trehub 1991, 2007).
The source excitation of the core self (I!) at the 0,0,0 (X,Y,Z) retinoid origin is sustained during all excursions of the heuristic self-locus (I!′). We can think of I!′ as a mobile agent of the core self that can scout any
region of our current phenomenal world for its affordances in preparation
for adaptive action. The heuristic self-locus has an additional important
function; its movement through retinoid space can trace contours of
neuronal excitation that are analogous to the traces of a marker on a
display board. Such traces are imaginative productions that can serve as
internal models to be overtly expressed as useful artifacts.
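
As a toy illustration of such a shift-control excursion (entirely my own abstraction; the grid size and the traced path are arbitrary assumptions, not the neuronal circuitry specified in Trehub 1991), the following sketch moves a locus of excitation step by step across a small retinoid-like array, leaving a trace of visited cells much as the heuristic self-locus traces a contour:

    import numpy as np

    # Toy heuristic-self-locus excursion: shift a locus of excitation across
    # a small retinoid-like grid, leaving a trace of the visited cells.
    # Grid size and path are arbitrary illustrative choices.
    grid = np.zeros((5, 5), dtype=int)
    path = [(2, 0), (2, 1), (2, 2), (1, 3), (0, 4)]  # an assumed contour

    for step, (row, col) in enumerate(path, start=1):
        grid[row, col] = step  # higher numbers mark more recent excitation
    print(grid)
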
The schematic drawing in Fig. 7.3 shows the essential difference
between conscious and non-conscious creatures and depicts the nested
relationships that exist between the physical world, the body, brain, and
retinoid space.

7.3 Empirical evidence


One of the interesting properties of the retinoid system is that it can
explain how linear perspective depictions can evoke a phenomenal expe-
rience of depth in a 3D scene on the basis of a 2D drawing.
Perspective drawing as a way of inducing a sense of depth in a display
on a 2D surface is a relatively recent achievement in the history of human
artistic endeavor. It wasn't until the fifteenth century that the geometrical
method of perspective was widely used in drawing and painting. How is
it that a 2D drawing can have a virtual third dimension that appears to
extend into the space in front of the observer?

7.3.1 Perceiving 3D in 2D depictions


The structural and dynamic properties of the putative retinoid system
enable the brain to transform 2D perspective drawings into 3D represen-
tations in retinoid space by Z-plane excursions of the heuristic self-locus
(HSL). As the HSL moves up and through the initial plane of the 2D
representation, it primes successive Z-planes in depth from near to far,
and as each depth plane is excited, objects at corresponding Y-axes are
translated in depth to the primed Z-plane. Two-dimensional scenes that
have visual cues for separating foreground and background can be rep-
resented on separate Z-planes in retinoid space.
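
As a rough illustration of this near-to-far translation (a simplification of my own, not the model's neuronal implementation; the plane count and picture height are assumed values), one can map the height of an object in the picture plane onto successively deeper Z-planes:

    # Sketch: assign objects in a 2D perspective drawing to retinoid Z-planes.
    # In perspective drawings, higher picture-plane positions usually depict
    # more distant objects. Plane count and picture height are assumptions.
    N_Z_PLANES = 10
    PICTURE_HEIGHT = 100.0

    def z_plane_for(y):
        """Map picture-plane height y to a Z-plane index (0 = nearest)."""
        depth_fraction = min(max(y / PICTURE_HEIGHT, 0.0), 1.0)
        return int(depth_fraction * (N_Z_PLANES - 1))

    # A foreground figure (low y) and a background pattern (high y) land on
    # different Z-planes, as in the explanation of Fig. 7.4 below.
    print(z_plane_for(15.0), z_plane_for(85.0))  # e.g., planes 1 and 7
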
Look at Fig. 7.4. If you move your head, the central figure seems to
slide erratically over the background pattern. The 3D retinoid model
explains/predicts this phenomenon on the assumption that our brain

[Figure 7.3: schematic panels for non-conscious and conscious creatures, nesting WORLD > BODY > BRAIN (and, for the conscious creature, RETINOID SPACE containing the PHENOMENAL WORLD with E1′, E2′, I!′, the core self I!, and a self image); sensory transducers R1 and R2 register events E1 and E2, and unconscious processors drive ACTION.]

Fig. 7.3 Non-conscious creatures and conscious creatures. Non-conscious creatures: E1 and E2 are discrete events in the physical world.
R1 and R2 are sensory transducers in the body that selectively respond
to E1 and E2. R1 and R2 signal their response to unconscious processing
mechanisms within the brain. These mechanisms then trigger adaptive
actions. Conscious creatures: In addition to the mechanisms described in
A, the brain of a conscious creature has a retinoid system that provides
a holistic volumetric representation of a spatial surround from the privileged egocentric perspective of the self-locus, the core self (I!). For example, in this case, there is a perspectival representation of E1 and E2 (shown as E1′ and E2′) within the creature's phenomenal world. Neu-
ronal activity in retinoid space is the entire current content of conscious
experience (Trehub 2011).

Fig. 7.4 Illusory experience of a central surface sliding over the back-
ground (after Pinna and Spillmann 2005).

represents the foreground and the background on two different Z-planes in its egocentric space (see Fig. 7.2). The sliding inner figure is on
a neuronal Z-plane that is closer to the self-locus (in depth) than the
background pattern. Micro-saccades shift the locus of the central (closer)
figure in small erratic steps with respect to the background. This gives
the illusory experience of the central surface sliding over the background
surface, even though both surfaces are presented on the same plane in
the 2D pictorial image.
Figure 7.5 shows an illusory enlargement of size in a 2D perspective
drawing. Two objects that project the same visual angle on the retina can
appear to occupy very different proportions of the visual field if they are
perceived to be at different distances. What happens to the retinotopic
map in primary visual cortex (V1) during the perception of these size
illusions? Using functional magnetic resonance imaging (fMRI), Mur-
ray et al. (2006) show that the brain's retinotopic representation of an

[Figure 7.5: 2D perspective display with a far disc and an adjustable near disc (labelled "Adjust").]

Fig. 7.5 Perspective illusion of size reflected in fMRI (Murray et al. 2006).

object's size changes in accordance with its perceived (phenomenal) size. A distant object that appears to occupy a larger portion of the visual field
actually activates a larger area in V1 than an object of equal angular size
that is perceived to be closer and smaller. The results demonstrate that
the retinal size of an object and the depth information in a scene are com-
bined early in the human visual system to enlarge the brain representation
of the object.
Viewing a 2D perspective display similar to Fig. 7.5, in a psychophys-
ical test, subjects were instructed to adjust the near disc to match
the size of the far disc. In making the perceptual match with the far
disc, subjects were found to increase the size of the adjustable near disc
approximately 17 percent. When the subjects viewed the same perspec-
tive display while undergoing fMRI imaging, it was found that the area of
V1 activated by the far disc was proportional to the enlargement perceived
in the psychophysical phase of the study. This experiment demonstrates
that the human brain has biological machinery that can transform a 2D
layout of objects in the physical world into a 3D layout in the persons
phenomenal world. In this transformation, an illusory enlargement of
the more distant object in a perspective drawing is reflected in a cor-
responding biophysical enlargement of the brains representation of the
object. This finding is explained/predicted by the retinoid model because
it has the neuronal mechanisms that accomplish this task. When the per-
spective drawing is viewed, the heuristic self-locus traces the converging
perspective lines through the depth of the retinoids Z-planes. As this hap-
pens, the excitation patterns on the depth planes are successively primed
and objects are represented in the retinoids Z-plane space from near
to far. Because of the retinoids size-constancy mechanism, the brains
representation of the far disc in the 2D display is enlarged relative to
the near disc, and this is reflected in the relative size of fMRI activa-
tion in V1. Thus what has been a puzzling illusion is explained in the
retinoid model as the result of the natural operation of a particular kind
of neuronal brain mechanism.
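
The approximately 17 percent enlargement can be illustrated with the classical size-constancy relation, perceived size proportional to retinal size times assumed distance (an Emmert-like scaling; the numbers below are illustrative assumptions chosen to reproduce a 17 percent match, not values reported by Murray et al. 2006):

    # Size-constancy sketch: two discs subtend the same retinal angle, but
    # the disc taken to be farther away is perceived as larger.
    retinal_angle = 1.0    # equal for both discs (arbitrary units)
    near_distance = 1.0    # assumed relative viewing distances (illustrative)
    far_distance = 1.17

    perceived_near = retinal_angle * near_distance
    perceived_far = retinal_angle * far_distance
    enlargement = (perceived_far - perceived_near) / perceived_near
    print(f"{enlargement:.0%}")  # -> 17%, the order of the reported match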

7.3.2 Distortion of perceived shape


The rotated table illusion (Fig. 7.6) illustrates how the projection of a 2D
depiction into the depth planes of retinoid space and the expansion of the
vertical (more distant) dimension of the depiction by the size constancy
mechanism of the retinoid system can distort the perceived shape of
an object. The two tables shown in Fig. 7.6 have the same length and
width, yet the table on the right is perceived as being square relative
to the long rectangular shape of the table on the left. In fact the right-
hand table is simply a drawing of the left-hand table that is rotated 90°.
This perceptual distortion happens because the more distant vertical
dimension of each depiction is elongated in our phenomenal experience
due to the successive priming of Z-planes, and the operationally linked
size-constancy expansion as the heuristic self-locus (selective attention)
traces the vertical contours of each table through the depth of retinoid
space.
Taken together, the ability of the retinoid model to explain the evoca-
tion of depth in perspective drawings and the apparent change in shape
in the rotated table illusion, as well as other psychophysical evidence such
as the seeing-more-than-is-there (SMTT) phenomenon and the moon
illusion (see Trehub 2007), provide strong empirical evidence in support
of the retinoid model of consciousness.

7.4 Conclusion
I have argued that a solid foundation for the scientific study of conscious-
ness can be built on three general principles. First is the metaphysical
assumption of dual-aspect monism in which private descriptions and
public descriptions are separate accounts of a common underlying real-
ity. Second is the adoption of the bridging principle of corresponding

[Figure 7.6: two perspective drawings of tables of identical length (L) and width (W); the rotated version is labelled Lr, Wr, Tr.]
Fig. 7.6 Rotated table illusion.

analogs between phenomenal events and biophysical brain events. And third is the adoption, as a working definition, that consciousness is a
transparent brain representation of the world from a privileged egocen-
tric perspective, that is, subjectivity.
On the basis of the bridging principle and the definition of con-
sciousness stated earlier, we can ask what kind of brain mechanism can
generate neuronal activation patterns that are proper analogs of salient
phenomenal events. My own theoretical proposal has been described in detail in the retinoid model of consciousness (Trehub 1991, 2007).
According to this model, autaptic-cell activation on the Z-planes of
retinoid space (see Fig. 7.2) is necessary and sufficient for conscious-
ness to occur. Moreover, I claim that spatio-temporal patterns of neu-
ronal activation in retinoid space are the proper biophysical analogs of
conscious content. Justification for this theoretical claim is based on the
fact that many previously inexplicable conscious experiences can now be
explained by the neuronal structure and dynamics of the retinoid model,
and that novel subjective phenomena have been successfully predicted
by the model.
The examples presented in this chapter are but a small sample of empir-
ical findings from natural observations, psychophysical experiments, and
clinical examinations (see, for example, Trehub 1991, 2007). As part
of a large body of supporting evidence, they add strong credence to the
validity of the retinoid model of consciousness.
According to the retinoid model, subjectivity is the hallmark of con-
sciousness. In this view, all competing candidate models of consciousness
must account for the existence of subjectivity. Our pursuit of a standard
theoretical model of consciousness would profit if other proposed theo-
ries were formulated in sufficient detail to explain how subjectivity is a
natural consequence of their theoretical features. We should also expect
a candidate model of consciousness to be described in a way that enables
us to propose empirical tests of its theoretical implications.

REFERENCES
Lübke J., Markram H., Frotscher M., and Sakmann B. (1996). Frequency and dendritic distribution of autapses established by layer 5 pyramidal neurons in the developing rat neocortex: Comparison with synaptic innervation of adjacent neurons of the same class. J Neurosci 16:3209–3218.
Murray S. O., Boyaci H., and Kersten D. (2006). The representation of perceived angular size in human primary visual cortex. Nat Neurosci 9:429–434.
Pereira Jr. A. and Ricke H. (2009). What is consciousness? Towards a preliminary definition. J Consciousness Stud 16:28–45.
Pereira A. Jr., Edwards J. C. W., Lehmann D., Nunn C., Trehub A., and Velmans M. (2010). Understanding consciousness: A collaborative attempt to elucidate contemporary theories. J Consciousness Stud 17:213–219.
Pinna B. and Spillmann L. (2005). New illusions of sliding motion in depth. Perception 34:1441–1458.
Tamás G., Buhl E. H., and Somogyi P. (1997). Massive autaptic self-innervation of GABAergic neurons in cat visual cortex. J Neurosci 17:6352–6364.
Trehub A. (1977). Neuronal models for cognitive processes: Networks for learning, perception and imagination. J Theor Biol 65:141–169.
Trehub A. (1978). Neuronal model for stereoscopic vision. J Theor Biol 71:479–486.
Trehub A. (1991). The Cognitive Brain. Cambridge, MA: MIT Press.
Trehub A. (2007). Space, self, and the theater of consciousness. Conscious Cogn 16:310–330.
Trehub A. (2011). Evolution's gift: Subjectivity and the phenomenal world. Journal of Cosmology 14:4839–4847.
Trehub A. (2013). Where am I? Redux. J Consciousness Stud 20(1–2):207–225.
van der Loos H. and Glaser E. M. (1972). Autapses in neocortex cerebri: Synapses between a pyramidal cell's axon and its own dendrites. Brain Res 48:355–360.
Velmans M. (2009). Understanding Consciousness, 2nd Edn. New York: Routledge.
8 The proemial synapse:
Consciousness-generating glial-neuronal
units

Bernhard J. Mitterauer

8.1 Introduction 233


8.2 Remarks on the Ego-Thou ontology in Western philosophy 236
8.3 Model of a glutamatergic tripartite synapse 236
8.4 Intersubjective reflection in the synapses of the brain 239
8.4.1 Hypothesis 239
8.4.2 Formal conception of intersubjective reflection 239
8.4.3 Model of a glial-neuronal synaptic unit (GNU) 241
8.4.4 Proemial synapses 242
8.5 Outline of an astrocyte domain organization 244
8.5.1 Formal structure of an astrocyte domain organization 245
8.5.1.1 General considerations 245
8.5.1.2 Development of a tritostructure formalizing an
astrocyte domain organization 246
8.5.2 Rhythmic astrocyte oscillations may program the domain
organization 248
8.6 Generation of intentional programs within the astrocytic syncytium 249
8.6.1 Philosophical remarks 249
8.6.2 Outline of an astrocytic syncytium 250
8.6.3 The formalism of negative language 252
8.6.4 Glial gap junctions could embody negation operators 254
8.7 Astrocytic syncytium as a system of reflection 255
8.8 Intentional reflection by activation and non-activation of astrocytic
connexins 258
8.9 Holistic act of self-reference and metaphysics 259
8.10 Concluding remarks 260

8.1 Introduction
Present philosophical foundations of neuroscience (Bennett and
Hacker 2003) are exclusively based on the functions of the neuronal
system. But the first and elementary philosophical question should be:
Why has nature created our brain with a double cellular structure consist-
ing of both the neuronal and the glial systems? Therefore, a real natural
philosophy of the brain must refer to the structures and functions of both
cell types or systems. Neurophilosophy or philosophy of neuroscience is the interdisciplinary study of neuroscience and philosophy (Churchland 2007). It attempts to solve problems in the philosophy of mind
with empirical information from neuroscience or to clarify neuroscientific
results using the conceptual rigor and methods of philosophy of science
(Bennett and Hacker 2003). Unfortunately, most current neurophilo-
sophical approaches focus exclusively on the neuronal system of the
brain.
A fundamental philosophical distinction in brain theory is that of
ontic and epistemic description. This distinction emphasizes whether
we understand the state of a system (and its dynamics) as it is in
itself (ontic) or as it turns out to be due to observation (epistemic)
(Atmanspacher and Rotter 2008). If we scan the brain on the micro-
scopic level, we see a cellular structure composed of neurons and glia
with their pertinent networks. If one describes ontology as the theory of
what there is (Quine 1953), neurophilosophy should refer to the cellular
double structure of the brain, since from a cellular point of view the brain
embodies at least two distinct ontological realms.
As a consequence, a pure neurophilosophical approach to brain theory
is based on an ontological fault in exclusively referring to the neuronal
system in the sense of mono-ontology. However, the distinction of two
ontologies only makes sense if we have strong arguments for a special
role of glia in their interactions with the neuronal component. My core
argument is this: the glial system is essentially responsible for the work-
ing of the brain as a subjective system generating intentional programs,
structuring information, and determining a polyontological architecture
(Mitterauer 1998, 2007, 2010).
One would expect that a theory of consciousness begins with a defini-
tion of consciousness. That is the major hurdle in every study involving
the question of consciousness. However, Vimal (2009) presented a com-
prehensive overview of the current meanings attributed to the term con-
sciousness, providing a valuable basis for interdisciplinary approaches
to a theory of consciousness. Given the fact that subjective systems are
characterized by the ability to generate consciousness, let me start out
with a definition of subjectivity. Subjectivity is a phenomenon that is dis-
tributed over the dialectic antithesis of the Ego as the subjective subject
and the Thou as the objective subject, both of them having a common
mediating environment (Guenther 1976).
I hypothesize that a glial-neuronal synaptic unit (GNU) may embody
a candidate model of subjectivity, capable of generating consciousness in
the sense of a mechanism of Ego-Thou-reflection. In other words, GNUs
may embody ontological loci of inter-subjective reflection. Since each
astrocyte contacting thousands of synapses forms a distinct domain of
glial-neuronal interactions, GNUs are organized into ontological realms that can also be characterized as Hubs (Pereira and Furlan 2010). On
a higher level of complexity, the astrocytic syncytium may be capable of
integrating these domains into a polyontological pattern in the sense of
a Master Hub (Pereira and Furlan 2010). Importantly, given the fact
that living systems like human beings do not only generate consciousness
but also intentional programs, a brain-oriented theory of consciousness
should basically attempt to show where and how intentional programs
could be generated in the brain (Mitterauer 2007). Here, I propose that
the component of a GNU embodying subjective subjectivity (Ego) may
be intentionally programmed in the astrocytic syncytium.
Glial-neuronal interactions in synaptic units are experimentally well
established. Based on a concept of subjectivity derived from the philoso-
phy of Gotthard Guenther, I interpret the glial component of a synapse
as the embodiment of a subjective subject (Ego) and the neuronal com-
ponent as the embodiment of an objective subject (Thou). In this brain
model, the astrocytic syncytium is responsible for intentional program-
ming, exerting modulatory functions on the neuronal system. Equally
important, the neuronal system is capable of testing glial intentional
programs in the environment. Depending on this perceptional-cognitive
procedure, the glial system revises its original intentional programs, and
so on. Formally speaking, glial-neuronal synaptic interactions may be
based on a special kind of relation, called proemial. In the phase during
which the glial system dominates the neuronal system with its intentions,
the glial system functions as a relator, the neuronal system as a relatum.
If the neuronal system elaborates on not-intended data in the environ-
ment, the relationship is switched such that in this phase the neuronal
system operates as a relator in regard to the glial system, which is now
the relatum. Since an optimization of the glial intentional programming
is necessary, the switched relationship is of a higher order of cognition,
as compared to the original relationship. Here, we may deal with ele-
mentary ontological loci of reflection in the brain. These ontological
loci of synaptic reflection may prelude Ego-Thou interactions within the
brain. Such a polyontological brain model is faced with the problem of
self-consciousness.
Together, this is the aim of the present contribution: instead of attack-
ing the difficult problem of consciousness or self-consciousness directly,
I attempt to formally describe an elementary reflection mechanism of
Ego-Thou intersubjectivity in GNUs and its possible role in the astro-
cytic syncytium. Assuming that the integrating function of self-reference
may never be experimentally detected in the brain, the problem of self-
consciousness is basically a philosophical one.

8.2 Remarks on the Ego-Thou ontology in Western philosophy
There is a Platonic tradition in Western philosophy to write a treatise
as a dialog. However, an explicit, formally grounded ontological distinction between subjective subjectivity (Ego) and objective subjectivity (Thou) was never established.
ophy dates back to the late Husserl (1960). However, this Thou is the
external other, not the internal other which he had already recognized
earlier. Since Descartes, the topic of subjectivity is mostly treated as a
general conception based on the dualistic Subject-Object ontology. An
ontological differentiation between the many individual subjects within
this conception of subjectivity has not been elaborated. Let me give some
examples: according to Kant (1976), the Ego is endowed with a general
consciousness (Ich an sich). This conception culminated in the German
idealistic philosophy, especially in Hegel's absolute spirit (Hegel 1965).
However, Hegel's most important contribution to a theory of conscious-
ness may be represented by his conception of the objective spirit. It can
explain why human subjects are able to act technically, generating a sec-
ond nature (Guenther 1963). Interestingly, the monads of Leibniz (1965)
represent individual subjective systems, but without windows. There
is no Thou!
Recently, Stawarska (2009) proposed that I-You connectedness can be
fruitfully fleshed out by means of the principle of primordial duality, a
grammatical and philosophical notion irreducible to either fusional one-
ness or impartial multiplicity. Based on the phenomenology of Husserl
(2005), and especially referring to Buber (1970), Stawarska characterizes
her philosophy as Dialectic Phenomenology. Although Buber's philos-
ophy is mainly metaphysical-religious, his succinct statement that the
I-You relationship is at the beginning may challenge brain-theoretical
models or philosophy. However, neither Buber nor Stawarska presents a formal ontological conception of I-You relationships.

8.3 Model of a glutamatergic tripartite synapse


The close morphological relations between astrocytes and synapses as
well as the functional expression of relevant receptors in the astroglial
cells prompted the appearance of a new concept known as the tripartite synapse, which I call the glial-neuronal synaptic unit (GNU). Araque
et al. (1999) showed that glia respond to neuronal activity with an ele-
vation of their internal Ca2+ concentration which triggers the release
of chemical transmitters from glia themselves, and, in turn, causes feed-
back regulation of neuronal activity and synaptic strength. Although a
true understanding of how the astrocyte interacts with neurons is still
missing, several models have been published (Halassa et al. 2009). Here, I
focus on a modified model proposed by Newman (2005). Figure 8.1 rep-
resents the interaction of the main components of synaptic information
processing as follows: sensori-motor networks compute environmental information activating the presynapse (1). The activated presynapse releases glutamate (GLU) from vesicles (v) that occupy both postsynaptic receptors (poR) and receptors on the astrocyte (acR) (2). (For the sake of clarity, only one receptor is shown.) Moreover, gluta-
mate may also activate gap junctions in the astrocytic syncytium lead-
ing to an enhanced spreading of Ca2+ waves (3). In parallel, the occu-
pancy of the astrocytic receptors by glutamate also activates Ca2+ within
the astrocyte (4). This mechanism triggers the production of glutamate
(5) and adenosine triphosphate (ATP) (6) within the astrocyte, now
functioning as gliotransmitters. Whereas the occupancy of extrasynap-
tic pre- and postsynaptic receptors by glutamate is excitatory (7), the
occupancy of these receptors by ATP is inhibitory (8). In addition, neu-
rotransmission is also inactivated by the reuptake of glutamate in the
membrane of the presynapse mediated by transporter molecules (t) (9).
Most important, ATP inhibits the presynaptic terminal via occupancy
of cognate receptors (Haydon and Carmignoto 2006) temporarily turn-
ing off synaptic neurotransmission in the sense of a negative feedback
(10). Finally, synaptic information processing is transmitted to neuronal
networks that can activate the synapse again (11).
In spite of evidence that astrocytes release glutamate by a
Ca2+ -dependent vesicle mechanism that resembles release from neurons,
important differences between glial and neuronal release exist. Glutamate
release from astrocytes occurs at a much slower rate than does release
from neurons, and is probably triggered by smaller increases of cyto-
plasmic Ca2+ . Importantly, we apparently deal with different timescales
of presynaptic and astrocytic glutamate release. Hence, the astrocytic
modulatory function of synaptic neurotransmission may occur within
seconds or minutes (Stellwagen and Malenka 2006). Here, I hypothesize
that the duration from presynaptic activation to the inhibition of synaptic
neurotransmission may also be dependent on the number of astrocytic
receptors that must be occupied by glutamate. This mechanism may
be based on the occupancy probability of astrocytic receptors by gluta-
mate releases from the presynaptic terminal. In addition, the release of
ATP from astrocytes may also be dependent on a comparable mecha-
nism. Accordingly, in GNUs glia may have a temporal boundary-setting

Fig. 8.1 Schematic diagram of possible glial-neuronal interactions at the glutamatergic tripartite synapse (modified after Newman 2005). Sensori-motor networks compute environmental information activating the presynapse (1). The activated presynapse releases glutamate (GLU) from vesicles (v) occupying both postsynaptic receptors (poR) and receptors on the astrocyte (acR) (2). GLU also activates gap junctions (g.j.) in the astrocytic syncytium, enhancing the spreading of Ca2+ waves (3). In parallel, the occupancy of acR by GLU also activates Ca2+ within the astrocyte (4). This mechanism triggers the production of GLU (5) and adenosine triphosphate (ATP) (6) within the astrocyte, now functioning as gliotransmitters. Whereas the occupancy of the extrasynaptic pre- and postsynaptic receptors by GLU is excitatory (7), the occupancy of these receptors by ATP is inhibitory (8). In addition, neurotransmission is also inactivated by the reuptake of GLU in the membrane of the presynapse mediated by transporter molecules (t) (9). ATP inhibits the presynaptic terminal via occupancy of cognate receptors (prR), temporarily turning off synaptic neurotransmission in the sense of a negative feedback (10). Synaptic information processing is transmitted to neuronal networks activating the synapse again (11).
function in temporarily turning off synaptic information transmission (Mitterauer 1998; Auld and Robitaille 2003).

8.4 Intersubjective reflection in the synapses of the brain

8.4.1 Hypothesis
According to Guenther (1976), there are two basic ways in which brain research can proceed. One can treat the brain as a mere physical piece of matter. Or, one can investigate how nature has constructed all its components, and according to which laws or principles it produces behavior. The second approach to brain theory is faced with both theoretical and technical obstacles, since it is incapable of unraveling how the brain contributes to the solution of the riddle of subjectivity. Instead of going uphill from
the cellular or molecular level, we may proceed by posing the following
questions: What is the highest achievement of the human brain? Which
role does subjectivity play? How and where does consciousness arise? Here, I start out with the following question: where and how in the brain
could the basic interplay between the subjective and objective parts of
subjectivity be generated, based on the dialectics of volition and cogni-
tion?
My hypothesis is that the interplay of the subjective subjectivity (Ego)
and the objective subjectivity (Thou), or in other words, the dialectics of
volition and cognition, occurs already on the synaptic level of the brain.
Applying the model of a GNU, a component embodying the subjective
(volitional) subjectivity and a second component embodying the objec-
tive (cognitive) subjectivity can be described. The subjective volitional
functions are formalized as ordered relations (→), the objective cognitive functions as exchange relations (↔). Both synaptic components and
their special types of relations interact in a dialectic manner generating a
cyclic proemial relationship (Guenther 1976). This novel type of rela-
tionship may underlie all consciousness-generating processes in the brain
based on intersubjective reflection.

8.4.2 Formal conception of intersubjective reflection


Before presenting my synaptic model of subjectivity, it is necessary to
outline the formal conception of subjectivity according to Guenther.
Generally speaking, subjectivity is a phenomenon distributed over the
dialectic antithesis of the Ego as the subjective subject and the Thou as
the objective subject, both of them having a common mediating environ-
ment (Guenther 1976). At least from an ontological point of view, classic
logic does not refer to this ontological differentiation of the concept of
subjectivity in treating subjectivity as a general conception. In addition,
the concept of volition presupposes a logical frame that describes the dis-
tinct domains of relations between subjective subjectivity (Ss ), objective
subjectivity (So ), and objectivity (O). Guenther (1966) provides such a
tool. His propositions are as follows:
If Ss designates a thinking subject and O its object in general (i.e., the
universe), the relation between Ss and O is undoubtedly an ordered one,
because O must be considered the content of the reflective process of
Ss . On the other hand, seen from the viewpoint of Ss , any other subject
(the Thou) is an observed subject having its place in the Universe. But
if So is (part of) the content of the Universe, we again obtain an ordered
relation, now between O and So . This is obviously of a different type. So
is not only the passive (cognitive) object of the reflective process of Ss .
In turn it is in itself an active (volitive) subject that may view the first
subject (and everything else) from its own vantage point. Therefore, So
may assume the role of Ss , thus relegating the original subjective subject
(Ss ) to the position of an objective subject (So ). In other words, the
relation between Ss and So is not an ordered relation, but a completely
symmetrical exchange relation, similar to left and right.
Most important, this is a third type of relation, originally called "founding relation," now "proemial relationship" (Guenther 1976). This type
of relation holds between a member of a relation and the relation itself.
Guenther (1976) describes the general structure of the proemial relation
as follows:
If we let the relator assume the place of a relatum, the exchange is not mutual.
The relator may become a relatum, but not in the relation from which it formerly
established the relationship, but only in a relationship of higher order and vice
versa . . . If:

R_{i+1}(x_i, y_i)

is given and the relatum (x or y) becomes a relator, we obtain

R_i(x_{i-1}, y_{i-1})

where R_i = x_i or y_i. But if the relator becomes a relatum, we obtain

R_{i+2}(x_{i+1}, y_{i+1})

where R_{i+1} = x_{i+1} or y_{i+1}. The subscript i signifies higher or lower logical orders.
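The index bookkeeping of this definition can be made concrete in a few lines of code. The following is a minimal Python sketch — my own illustration, not part of Guenther's formalism — in which every term carries a logical order, a relator must stand exactly one order above its relata, and the two role changes just quoted are played through:

from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    name: str
    order: int  # the logical order, Guenther's subscript i

def relate(relator: Term, x: Term, y: Term) -> str:
    # A relation is well formed only if the relator stands one order
    # above its relata: R_{i+1}(x_i, y_i).
    assert relator.order == x.order + 1 == y.order + 1
    return f"R{relator.order}({x.name}{x.order}, {y.name}{y.order})"

G = Term("G", 2)  # e.g., glia as relator of order i + 1 = 2
N = Term("N", 1)  # neuronal component as relatum of order i = 1
E = Term("E", 1)  # environment as the second relatum
print(relate(G, N, E))  # R2(N1, E1)

# A relatum becomes a relator: the former relatum N (order i) now
# relates terms one order lower, giving R_i(x_{i-1}, y_{i-1}).
print(relate(N, Term("x", 0), Term("y", 0)))  # R1(x0, y0)

# A relator becomes a relatum: the former relator G can appear as a
# relatum only in a relation of higher order, R_{i+2}(x_{i+1}, y_{i+1}).
print(relate(Term("R", 3), G, Term("y", 2)))  # R3(G2, y2)

The exchange is thus not mutual: stepping down is always possible within the next lower relation, whereas stepping up requires a relation of higher order — which is what distinguishes the proemial relationship from a merely symmetrical exchange relation.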

Now, the interplay between a relator and a relatum can be interpreted as a dialectic process of volition and cognition, concerning the attitude
of a subject to its subjective and objective environment. If a subjective
subject dominates the environment, it is acting in a volitive manner, where
Fig. 8.2 Basic pathways of information processing in a glial-neuronal synapse (modified after Newman 2005). NT: neurotransmitters; GT: gliotransmitters; peR: presynaptic receptors; poR: postsynaptic receptors; gR: glial receptors; gj: gap junctions.

the environment represents its cognitive content. In the inverse situation,
the environment dominates the subjective subject, playing a volitive role
in regard to the cognitive content of the subjective subjectivity. I will
now attempt to outline these mutual relations between subjectivity as
cognition and subjectivity as volition via glial-neuronal synapses, the
elementary information-processing devices of the brain.

8.4.3 Model of a glial-neuronal synaptic unit (GNU)


The basic anatomical structure of a glial-neuronal synapse consists of
four components: the presynaptic neuron, the postsynaptic neuron, and
two glial components with a synaptic cleft in between. The glial-neuronal
interactions in such chemical synapses occur via neurotransmitters (NT),
gliotransmitters (GT), and other substances (ions, neuromodulators,
etc.). Although not all mechanisms of glial-neuronal interactions are yet identified, it is meanwhile clear that glia have a modulatory function with regard to the efficacy of neuronal information processing (Auld and Robitaille 2003). One can also say that glia exert a spatio-temporal
boundary setting function in synaptic information processing (Mitterauer
1998).
Figure 8.2 focuses on the elementary relations in a GNU. The infor-
mation processing between the four components of the synapse may
be basically this: neurotransmitters (NT) released from the presynap-
tic neuron occupy glial receptors (gR), embodying an ordered relation
(1). In parallel, NT released from the presynaptic neuron occupy postsynaptic receptors (poR) and are taken up again by the presynaptic neuron, designated as an exchange relation (2). Already activated by NT, glia
release gliotransmitters (GT) that occupy receptors on the presynaptic
neuron (peR), turning off neurotransmission temporarily, in the sense
of an ordered relation (3). In addition, glial intercellular signalling
through gap junctions (gj) mediated by GT represents an exchange rela-
tion between glial cells (4). (For biological details, see Newman 2005.)

8.4.4 Proemial synapses


Taking a closer look at the types of relations shown in Fig. 8.2, we can see two exchange relations and two ordered relations. The relational interplay of these four relations generates a proemial relationship, but of a special kind, called a "cyclic proemial relationship" (Kaehr 1978). This type of relation
may be an inevitable prerequisite for any theory of consciousness. Its
formal description is as follows:
Glia (G) dominate the neuronal components (N) by modifying them.
Therefore, G plays the role of a relator (1) and N is the relatum. If
this relationship changes inversely (2, 4), N becomes the relator and G
the relatum (3). Since the proemial relationship is cyclically organized,
GNUs are capable of changing their relational positions in the sense of
an iterative self-reflection mechanism. One can also say that glia exert a
volitive function and the neuronal component represents their cognitive
content and vice versa.
A proemial synapse allows the description of an elementary mechanism
that could explain where and how the subjective subject (Ss ) and the
objective subject (So ) interact. From a structural or topological point of view, I hypothesized that intentional programs are generated in the glial networks (syncytia) and must be tested in the neuronal networks as to whether they are feasible in the outer environment (Mitterauer 2007). Therefore, glia
can be interpreted as the active, intentional-volitional part of a glial-
neuronal synapse, embodying subjective subjectivity. In contrast, the
computations in the neuronal networks are dependent on both the glial
intentional programs and the environmental information. In other words,
from the perspective of the glial system, the neuronal system embodies
an environment interpreted as an objective subject (So ).
If the glial-neuronal synapse starts out with a glial ordered relation,
it exerts a volitive function. The neuronal component of the synapse is
trying to recognize appropriate objects in the environment. This is basi-
cally cognition. However, the glial volitional system is dependent on the
results of the neuronal cognitive function with regard to its testing of
the feasibility of the glial intentional programs in the environment. Now,
the neuronal part of the synapse activates glial receptors so that it plays an
active volitional role determined by the environmental information. This
change of the synaptic relationship temporarily turns glia into a cognitive system, reflecting the results of the original cognitive computations in
the neuronal system by accepting or rejecting the environmental infor-
mation, holding on to their intentional programs or changing them in
the sense of adaptation. Holding on to the intentional programs can be
interpreted as a kind of radical self-realization. One can also say that if
glia as a relator becomes a relatum, this change of relation establishes a
relationship of a higher logical order.
Maturana (1970) states that the nervous system only interacts with
relations. However, since the functioning of the nervous system is
anatomy bound, these interactions are necessarily mediated by physi-
cal interactions. At least in chemical synapses the types of transmitter
substances may determine the set of synapses that qualitatively cooperate
with or embody reflection domains. Maturana speaks of the domains of
interactions. Most interestingly, he describes in his seminal paper Biol-
ogy of Cognition (1970) an orienting behavior. It consists of an orienter
and an orientee, where the orienter orients the orientee in a common
cognitive domain of interactions and vice versa. This scientific approach
describes or interprets intersubjective communication seemingly compa-
rable to Guenther's theory of intersubjectivity. Unfortunately, it is based
on a classic interpretation of subjectivity, since in Maturana's conception the observer plays the role of a general subject, without reference to the ontological distinction between subjective subjectivity and objective subjectivity in the interaction of subjective systems with the environment. The same may hold true for current approaches to brain
research. Admittedly, the move between the Guenther/Buber dialogical
model and glial-neuronal interactions seems to be a bold connection, but
it may be useful to help us understand how the brain really works.
The basic brain-biological arguments are as follows: glia or astrocytes
do not directly process information from the environment. Astrocytes
modulate the information processing in the neuronal part of synapses.
They form networks and regulate the blood flow of the brain. These
astrocytic functions and structures are experimentally well established.
In contrast, the neuronal part of a glial-neuronal synaptic unit and the
neuronal networks are connected via sense organs with the environ-
ment. These basic differences between structure and function of glial and
neuronal cell systems may allow the interpretation that the former system
operates more subjectively and the latter system more objectively.
Pereira and Furlan (2010) argued that the astroglial network is the organism's "Master Hub" that integrates somatic signals with neuronal
processes to generate subjective feelings. Neuro-astroglial interactions in
the whole brain compose the domain where the knowing and feeling com-
ponents of consciousness get together (Pereira Jr, this volume, Chapter
10). Since orthodox neuroscience does not refer to the functions of the
glial system and its pertinent interactions with the neuronal system, a
distinction between a subjective and an objective component of the operations of the brain, as an organ that embodies and generates subjectivity, is not
possible. Moreover, our models open a new window to the study of the
brain basis of the pathophysiology of mental disorders and ethics.

8.5 Outline of an astrocyte domain organization


In all mammals, protoplasmic astrocytes are organized into spatially non-
overlapping domains that encompass both neurons and vasculature. An
astrocyte domain defines a contiguous cohort of synapses that interacts
exclusively with a single astrocyte. Synapses within a particular territory
are thereby linked via a shared astrocyte partner, independent of a neu-
ronal networking (Oberheim et al. 2006). Figure 8.3 shows an outline
of an astrocyte domain organization. An astrocyte (Acx ) contacts the
synapses (Sy) of four neurons (N1 . . . N4 ) via its processes (P1 . . . P4 ).
Each process is equipped with one to four receptor qualities (Rq). For
example, P1 contacts the synapses of N2 exclusively via its receptors of
quality a. P2 already has two receptor qualities available (a, b), P3 three
receptor qualities (a, b, c), and P4 is able to contact the synapses of N1
via four receptor qualities (a, b, c, d). Astrocyte (Acx ) is interconnected
with another astrocyte (Acy ) via gap junctions (g.j.) forming an astro-
cytic network (syncytium). The neurons per se are also interconnected
(neuronal network).
It is experimentally verified that astrocytes can express almost all recep-
tors for important transmitter systems (Kettenmann and Steinhauser
2005). In certain cases, individual astroglial cells express as many as five
different receptor systems linked to Ca2+ mobilization (McCarthy and
Salm 1991). Each astrocyte territory represents an island made up of
many thousands of synapses (about 140 000 in the hippocampal region
of the brain, for instance), whose activity is controlled by that astrocyte
(Santello and Volterra 2010). On average, human astrocytes extend
40 large processes radially and symmetrically in all directions from the
soma so that each astrocyte supports and modulates the function of
Fig. 8.3 Outline of an astrocyte domain organization. An astrocyte (Acx ) is interconnected via four processes (P1 . . . P4 ) with the synapses (Sy) of four neurons (N1 . . . N4 ). Each process is on its endfoot equipped with receptors for the occupancy with neurotransmitters according to a combinatorial rule (shown in Table 8.1). As an example, the process P1 contacting N2 embodies only one receptor quality (Rqa ). P2 contacts N3 with two different receptor qualities (Rqab ). P3 contacts N4 with Rqabc and P4 contacts N1 with Rqabcd . This simple diagram represents an astrocyte domain. Astrocyte (Acx ) is interconnected with Acy via gap junctions (g.j.).

roughly two million synapses in the cerebral cortex (Oberheim et al. 2006). Astrocytic receptors are mainly located on the endfeet of the pro-
cesses. Here, we apparently deal with a high combinatorial complexity of astrocyte-synaptic interactions. As mentioned in the introduction, the astrocyte domain organization is well characterized in terms of "Hubs" (Pereira and Furlan 2010).

8.5.1 Formal structure of an astrocyte domain organization


8.5.1.1 General considerations Guenther (1962, 1976) described living systems as individual units with a new universal theory of structure, called "morphogrammatics." Accordingly, a theory of struc-
ture should be universal and composed of empty places. Such places
can either be of equal or different quality. They can also stay empty or
be occupied by anything. Based on the principle of identity and differ-
ence, these places or their structure can be analyzed on three levels of
complexity:
1. Protostructure: How many different places are there? This corresponds to cardinality.
2. Deuterostructure: How are these places distributed? This corresponds to distribution.
3. Tritostructure: Where are the individual places located? This corresponds to position.
Since the tritostructure represents the highest complexity, it may
underlie the astrocytic domain organization. Here, the morphograms
are termed tritograms.
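Read this way, the three levels can be computed for any concrete sequence of places. Here is a minimal Python sketch — my own reading, offered purely as an illustration:

from collections import Counter

def analyze(places):
    proto = len(set(places))                    # protostructure: how many different places (cardinality)
    deutero = sorted(Counter(places).values())  # deuterostructure: how the places are distributed
    trito = tuple(places)                       # tritostructure: where each place is located (position)
    return proto, deutero, trito

print(analyze((1, 2, 1, 3, 3)))  # (3, [1, 2, 2], (1, 2, 1, 3, 3))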

8.5.1.2 Development of a tritostructure formalizing an astrocyte domain organization Figure 8.4 shows the development of tritograms with n places. The structure for tritograms with lengths 1–5 (5 levels) is represented by a tree. This is the generation rule: a tritogram x with
length n + 1 may be generated from a tritogram y with length n if x is
equal to y on the first n places, for example, 12133 may be generated from
1213 but not from 1212. The numerals are representations of domains
(properties, categories) that should be viewed as place-holders reserved
for domains, for example, 12133 should be read as five places for five
entities, such that the first and the third entity belong to domain one, the
second entity to domain two, and the fourth and fifth entity to domain
three.
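The generation rule lends itself to a direct implementation. The following minimal Python sketch is my own illustration; it adds the constraint, implicit in the tree of Fig. 8.4, that a newly appended place may use a domain numeral at most one higher than the highest numeral already present (which is why 12133 cannot be generated from 1212). With this constraint, the level sizes of the tree — 1, 2, 5, 15, 52 tritograms for n = 1 . . . 5 — are reproduced; mathematically, these are the Bell numbers, which count the partitions of a set.

def extend(tritogram):
    # All tritograms of length n + 1 that are equal to the given
    # tritogram on its first n places.
    return [tritogram + (v,) for v in range(1, max(tritogram) + 2)]

def tritograms(n):
    # Grow all tritograms with n places level by level from (1,).
    level = [(1,)]
    for _ in range(n - 1):
        level = [child for t in level for child in extend(t)]
    return level

for n in range(1, 6):
    print(n, len(tritograms(n)))  # 1, 2, 5, 15, 52 -- as in Fig. 8.4

print(tritograms(4)[0], tritograms(4)[-1])  # (1, 1, 1, 1) and (1, 2, 3, 4)

The fifteen tritograms for n = 4, in this generation order, are exactly the columns of Table 8.1 below.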
Now let us interpret the tritostructure (n = 4) as an astrocyte domain
organization. Table 8.1 shows 15 tritograms, each consisting of the same or different places symbolized by the numerals 1–4. Since the position of the
places is relevant, one can also speak of a qualitative counting of dif-
ferent domains (Thomas 1985). This tritostructure is interpreted as the
formal basis of an astrocyte with 15 processes, each embodying a recep-
tor sheet of identical or different qualitative domains for synaptic infor-
mation processing. These various receptor domains are located on the
endfeet of the astrocytic processes contacting cognate neuronal synapses
and modulating neurotransmission. Most important, it is experimentally
verified that astrocytes display elaborate process extension and retrac-
tion, and likely use the active cytoskeleton for motility (Hirrlinger et al.
2004; Haber et al. 2006). To integrate these experimental results into the
model proposed here, astrocytes may be searching for synapses that are
equipped with neurotransmitter types appropriate for the occupancy of
specific astrocytic receptors in various compositions (Table 8.1). More-
over, this implies that in the whole astrocyte domain not all processes or
receptors are active, leading to breaks in glial-neuronal synaptic inter-
actions. Hence, we deal with a dynamic exchange that occurs between
astrocytes and synapses in the sense of concerted structural plasticity of
glial-neuronal interaction. However, in this context we are faced with
Fig. 8.4 Tritogrammatic tree. Generation of 52 tritograms (n = 5) corresponding to 52 astrocytic processes. Each tritogram represents a qualitative astrocytic receptor sheet. The structure for tritograms with lengths 1–5 is represented by a tree. Generation rule: a tritogram x with length n + 1 may be generated from a tritogram y with length n if x is equal to y on the first n places; for example, 12133 may be generated from 1213 but not from 1212. The numerals are representations as places of the same or different qualities, interpreted as astrocytic receptors on the endfeet of the processes. Each tritogram corresponds to an astrocytic process.
Table 8.1 Tritostructure. Generation of 15 tritograms corresponding to 15 astrocytic processes with receptor qualities 1–4.

receptor qualities of the astrocytic processes:

1 1 1 1 1 1 1 1 1 1  1  1  1  1  1
1 1 1 1 1 2 2 2 2 2  2  2  2  2  2
1 1 2 2 2 1 1 1 2 2  2  3  3  3  3
1 2 1 2 3 1 2 3 1 2  3  1  2  3  4

tritograms: [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15]

Each column, read from top to bottom, is one tritogram; each tritogram corresponds to one astrocytic process.

the issue of how and where such motile behavior of astrocytes may be controlled.

8.5.2 Rhythmic astrocyte oscillations may program the domain organization
For understanding the domain organization of an astrocyte, its rhythmic
contraction waves may be decisive. Astrocytes, when they get swollen
and/or depolarized, can potentially release accumulated K+ , neurotrans-
mitters, neuromodulators (e.g., taurine), and water into interstitial fluid
in a pulsatile manner (Cooper 1995). Such discharge processes represent
mechanisms by which astrocyte networks (syncytia) could influence neu-
ronal firing in a coordinated fashion (Newman and Zahs 1997). More-
over, astrocytes may play a direct role in generating pacemaker rhythms
(Mitterauer et al. 2000; Gourine et al. 2010; Mitterauer 2011).
Parri et al. (2001) showed that astrocytes in situ could act as a primary
source for generating neuronal activity in the mammalian central nervous
system. Slow astrocyte calcium oscillations (every 5–6 minutes) occur
spontaneously (without prior neuronal activation) and can cause excita-
tions in nearby neurons. Considering experimental findings of the struc-
tural interplay between astrocytes and synapses in hippocampal slices,
dynamic structural changes in astrocytes help control the degree of glial-
neuronal communication (Haber et al. 2006). Since the timescales of
both astrocyte calcium oscillations and morphological changes in astro-
cytes occur within minutes, a pacemaker function may determine the
motility of astrocyte processes and the generation of a structural pattern
of astrocyte-synaptic interactions.
Compared with the rapid synaptic information processing within
milliseconds, the pulsations and morphological changes of astrocytes are
relatively slow. Thus, it is often argued that glia cannot exert an effect
in synaptic information processing. This argument may be erroneous if
cognitive processes are considered. Cognitive processes, such as thinking and planning, occur on a timescale of minutes, hours, days, or weeks, since they need a relatively long time span. I hypothesize that
an astrocyte domain is organized within this long timescale generating
a specific qualitative structure of glial-neuronal information processing.
Recent discoveries begin to paint a new picture of brain function in which
slow-signaling glia modulate fast synaptic transmission and neuronal fir-
ing to impact behavioral output (Halassa et al. 2009).

8.6 Generation of intentional programs within the astrocytic syncytium

8.6.1 Philosophical remarks


The definition of mind in terms of intentionality that originated in the
Scholastic doctrine of intention (Aquinas 1988) was revived by Brentano
(1995) and has become a characteristic theory of German phenomenol-
ogy. Basically, intention (Lat. intention, from intendere) means to reach
out for something. Intentionality is the modern equivalent of the Scholas-
tic intention representing a property of consciousness, whereby it refers
to or intends an object. The intentional object is not necessarily a real
or existent thing but is merely that which the mental act is about (Runes
1959).
Several schools of thought have formulated the concept of intention-
ality in modern terms. According to Searle (2004), the most common
contemporary philosophical solution of the problem of intentionality is
some form of functionalism. The idea is that intentionality is to be anal-
ysed entirely in terms of causal relations. These causal relations exist
between the environment and the agent and between various events going
on inside the agent. In general, Searle interprets intentionality as repre-
sentation of conditions of satisfaction. Here, the brain- and robotics-oriented approach to intentionality is comparable to that of Searle, but satisfaction is only a special biological case of intentionality. Therefore,
the concept of feasibility of intentional programs is introduced, since
feasibility is not always accompanied by satisfaction.
In the eliminativist view of intentionality, there really are no intentional
states. A variant of this view is the idea that attributions of intentionality
are always forms of interpretation made by some external observer. An
extreme version of this view is Dennett's conception of the "intentional stance" (1978). This conception states that we should not think of people
as literally having beliefs and desires, but rather that this is a useful stance
to adopt about them for the purpose of predicting their behavior. Bennett
and Hacker (2003) are right in their criticism that Dennett misconstrues
what modern philosophers since Brentano have called intentionality. Of
course, in theoretical neurobiology intention and intentionality implicitly play a role, but these conceptions are mostly used without definition (Kelso 2000). Especially in the chaos-theoretical approach to brain function, a definition of these conceptions is not yet possible (Werner 2004). Here
an attempt is made to define the conception of intentional programs in
terms of the underlying theory of intentionality. Accordingly, an inten-
tional program generates a specific multi-relational structure in an inner
or outer appropriate environment, based on the principle of feasibility of
that program.

8.6.2 Outline of an astrocytic syncytium


A typical feature of macroglial cells, in particular the astrocyte, is that
they establish cell-cell communication in vitro and in situ, through inter-
cellular channels forming specialized membrane areas defined as gap
junctions (Dermietzel and Spray 1998). Different connexins (gap junc-
tion proteins) allow communication between diverse cell populations or
segregation of cells into isolated compartments according to their pat-
tern of connexin expression (Giaume and Theis 2009). Gap junctions
are composed of hemichannels (connexons) that dock to each other via
their extracytoplasmic extremities. Each hemichannel is an oligomer of
six connexin proteins (Cx). In the central nervous system, cell-specific
and developmentally regulated expression of eight connexins has been
demonstrated.
My model focuses on gap junctions between astrocytes, the main glial
cell type besides oligodendrocytes and microglia. Gap junctions are con-
sidered to provide a structural link by which single cells are coupled to
build a functional syncytium with a communication behavior that cannot
be exerted by individual cells. Gap junctions of an astrocytic syncytium
consist of the four identified connexins Cx43, Cx32, Cx26, and Cx45,
forming homotypic (i.e., gap junction channels formed by hemichannels
of the same kind) and heterotypic gap junction channels (i.e., formed by
hemichannels of different kinds). Whereas astrocytes are interconnected
with their neighbors via gap junctions, the interactions of astrocytes
with neurons occur mainly in synapses called tripartite synapses (Araque
et al. 1999).
Figure 8.5 shows a diagrammatic scheme depicting an astrocytic syncytium. Six astrocytes (Ac1 . . . Ac6 ) are completely interconnected via 15 gap junctions (g.j.) according to the formula n(n − 1)/2. Each
Fig. 8.5 Outline of an astrocytic syncytium. Six astrocytes (Ac1 . . . Ac6 ) are interconnected via 15 gap junctions (g.j.) building a complete syncytium. Each astrocyte contacts a neuronal synapse representing a tripartite synapse (for the sake of clarity, only one synaptic contact [Sy] is shown).

astrocyte contacts a neuronal synapse, building a tripartite synapse in
the sense of a glial-neuronal unit. Admittedly, this simple diagram refers
only to the elementary components and their connections in an astrocytic
syncytium.
In the brain, each macroscopic gap junction is an aggregate of many, often hundreds of, tightly packed gap junction channels (Ransom and Ye 2005). The number and composition of gap
junctions can be dynamically regulated at the level of the endoplasmic
reticulum by either upregulating connexin biosynthesis or decreasing the
rate of connexin degradation, and at the cell surface by enhancing gap
junction assembly or reducing connexin degradation.
If gap junctions between astrocytes are frequently coupled within a
timescale of seconds, minutes, or hours, they form plaques. Whereas
rarely coupled or non-coupled gap junctions are endocytosed and
destroyed (Gaietta et al. 2002), it is hypothesized that a plaque embod-
ies a memory structure (Robertson 2002) that may also operate as
an intentional program (Mitterauer 2007) deciding which and where
Table 8.2 Quadrivalent (n = 4) permutation system arranged in a lexicographic order.

1 1 1 1 1 1 2 2 2 2 2 2 3 3 3 3 3 3 4 4 4 4 4 4
2 2 3 3 4 4 1 1 3 3 4 4 1 1 2 2 4 4 1 1 2 2 3 3
3 4 2 4 2 3 3 4 1 4 1 3 2 4 1 4 1 2 2 3 1 3 1 2
4 3 4 2 3 2 4 3 4 1 3 1 4 2 4 1 2 1 3 2 3 1 2 1

number of the permutation: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24

This permutation system consists of 24 permutations (1 2 3 4, . . . , 4 3 2 1) according to the formula 4! = 1 · 2 · 3 · 4 = 24; each column, read from top to bottom, is one permutation. The 24 permutations are lexicographically arranged.

astrocytic receptors in astrocytic-neuronal compartments must be acti-


vated for occupancy with cognate neurotransmitters in the sense of a
readiness pattern. Since gap junctions are either composed of same or
different connexins, they may function as biological devices for the dis-
tinction between identity and difference of the neurotransmitter qualities
in synaptic information processing. This may represent a gap-junction-based mechanism for the registration or recognition of qualitative identities and differences, in the sense of an elementary cognitive capability of the brain.
In undisturbed dynamic compartmentalization, frequently activated
gap junction channels (indicated by arrows) generate a gap junction
plaque that positively feeds back to the pertinent synapses and in this
manner changes the structure of the astrocytic syncytium in the sense of
a dynamic compartmentalization (Dermietzel 1998). Here we deal with
an elementary mechanism of registration or recognition of the various
qualitative identities and differences of our perception of the inner and
outer environment, a basic condition for appropriate behavior.

8.6.3 The formalism of negative language


When speaking of intentional programs, it is first necessary to define the underlying formalism. According to Guenther (1980), a negative language can be formalized in an n-valent permutation system. Generally, a permutation of n things is defined as an ordered arrangement of all the members of the set taken all at a time, the number of such arrangements being given by the formula n! (! means factorial). Table 8.2 shows a quadrivalent permutation system in lexicographic order. It consists of the integers 1, 2, 3, 4. The number of permutations is 24 (4! = 1 · 2 · 3 · 4 = 24). The permutations of the elements
Table 8.3 Example of a Hamilton loop generated by a sequence of negation operators (Guenther 1980).

P  N1 N2 N3 N2 N3 N2 N1 N2 N1 N2 N3 N2 N3 N2 N1 N2 N1 N2 N3 N2 N3 N2 N1 N2  P
1  2  3  4  4  3  2  1  1  2  3  4  4  3  2  1  1  2  3  4  4  3  2  1  1
2  1  1  1  1  1  1  2  3  3  2  2  3  4  4  4  4  4  4  3  2  2  3  3  2
3  3  2  2  3  4  4  4  4  4  4  3  2  2  3  3  2  1  1  1  1  1  1  2  3
4  4  4  3  2  2  3  3  2  1  1  1  1  1  1  2  3  3  2  2  3  4  4  4  4

The first permutation (P = 1 2 3 4, left-hand column, read from top to bottom) is permuted via the sequence of negation operators (N1, N2, N3, . . . , N2, N1, N2), generating each of the 24 permutations once until the loop is closed at 1 2 3 4, in the sense of a Hamilton loop.


can be generated with three different NOT operators N1, N2, N3, that exchange two adjacent (neighbored) integers (values) by the following scheme:

1 ↔ 2 (N1);  2 ↔ 3 (N2);  3 ↔ 4 (N3)

Generally, the number of negation operators (NOT) is dependent on the valuedness of the permutation system minus 1. For example, in a pentavalent permutation system four negation operators (N1, N2, N3, N4) (5 − 1 = 4) are at work.
It is possible to form loops, each of which passes through all permutations of the permutation system once (Hamilton loop). In a quadrivalent system they are computable (44 Hamilton loops), but in higher-valent systems they are no longer practically computable. Table 8.3 shows an example of a Hamilton loop (Guenther 1980). The first permutation (P = 1234) is permuted via a sequence of negation operators (N1, N2, N3, . . . , N2, N1, N2), generating all the permutations once until the loop is closed.
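To make the mechanics concrete, the following minimal Python sketch — my own illustration — implements the negation operators as exchanges of two adjacent values and checks that the negation sequence of Table 8.3 (N1 N2 N3 N2 N3 N2 N1 N2, applied three times over) indeed visits all 24 permutations of the quadrivalent system exactly once before returning to 1234:

# Negation operator N_i: exchange the adjacent values i and i + 1
# wherever they occur in the permutation (a value exchange, not a
# positional swap).
def negate(perm, i):
    swap = {i: i + 1, i + 1: i}
    return tuple(swap.get(v, v) for v in perm)

def run_loop(start, negation_sequence):
    # Apply a sequence of negation operators; collect every permutation visited.
    visited, p = [start], start
    for i in negation_sequence:
        p = negate(p, i)
        visited.append(p)
    return visited

start = (1, 2, 3, 4)
sequence = [1, 2, 3, 2, 3, 2, 1, 2] * 3  # the 24 negations of Table 8.3
path = run_loop(start, sequence)

assert path[-1] == start             # the loop closes at 1234
assert len(set(path[:-1])) == 24     # all 4! permutations occur exactly once
print("Hamilton loop verified:", path[:3], "...")

The same two functions could serve as a starting point for enumerating Hamilton loops by exhaustive search in the quadrivalent case; for the pentavalent case (120 permutations) such enumeration already runs into the intractability discussed in Section 8.7.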
Such permutation systems can be mathematically formalized as nega-
tion networks, called permutographs (Thomas 1982). Already in the
1980s it was shown that the negative language may represent an appro-
priate formal model for a description of intentional programs gener-
ated in neuronal networks of biological brains. Based on this formalism,
computer systems for robot brains have also been proposed (Mitterauer
1988; Thomas and Mitterauer 1989). Here, it is attempted to further
elaborate on this possible intentional programming in our brains, focus-
ing on glial-neuronal interaction.

8.6.4 Glial gap junctions could embody negation operators


In situ, morphological studies have shown that astrocyte gap junctions
are localized between cell bodies, between processes and cell bodies, and
between astrocytic endfeet that surround brain blood vessels. In vitro,
junctional coupling between astrocytes has also been observed. More-
over, astrocyte-to-oligodendrocyte gap junctions have been identified
between cell bodies, cell bodies and processes, and between astrocyte
processes and the outer myelin sheath. Thus, the astrocytic syncytium
extends to oligodendrocytes, allowing glial cells to form a generalized
glial syncytium, also called panglial syncytium, a large glial network
that extends radially from the spinal cord and brain ventricles, across
gray and white matter regions, to the glia limitans and to the capillary
epithelium.
Ependymal cells are also part of the panglial syncytium. Addition-
ally, activated microglia may also be interconnected with astrocytes via
gap junctions. However, the astrocyte is the linchpin of the panglial syn-
cytium. It is the only cell that interconnects to all other glia. Furthermore,
it is the only one with perisynaptic processes.
Gap junctions show properties that differ significantly from chemical synapses (Zoidl and Dermietzel 2002; Nagy et al. 2004; Rouach
et al. 2004). The following enumeration of gap junctional properties
in glial syncytia may support the hypothesis that gap junctions could
embody negation operators in the sense of a generation of negative lan-
guage in glial syncytia:
First, gap junctions communicate through ion currents in a bidirec-
tional manner, comparable to negation operators defined as exchange
relations. Bidirectional information occurs between astrocytes and
neurons at the synapse. This is primarily chemical and based on neu-
rotransmitters. It is not certain that all glial gap junction communica-
tions are bidirectional due to rectification. This is a poorly understood
area because of extremely severe technical difficulties, especially in vivo
(Perea and Araque 2005). Second, differential levels of connexin expres-
sion reflect region-to-region differences in functional requirements for
different astrocytic gap junctional coupling states. The presence of sev-
eral connexins enables different permeabilities to ions and molecules
and different conductance regulation. Such differences of gap junctional
functions could correspond to the different types of negation operators.
Third, neuronal gap junctions do not form syncytia and are generally
restricted to one synapse. Fourth, processing within a syncytium is driven
by neuronal input and depends on normal neuronal functioning. The
two systems are indivisible. It is important to emphasize that neuronal
activity-dependent gap junctional communication in the astrocytic syn-
cytium is long-term potentiated. This is indicative of a memory system as proposed for neuronal synaptic activity by Hebb (1949) over five decades ago. Fifth, the diversity of astrocytic gap junctions results in
complex forms of intercellular communication because of the complex
rectification between such numerous combinatorial possibilities. Sixth,
the astrocytic system may normally function to induce precise efferent
(e.g., behaviorally intentional or appropriate motor) neuronal responses.
Admittedly, the testing of this conjecture is also faced with experimental
difficulties.
Now, let us tie gap junctional functions and negative language together.
Negation operators represent exchange relations between adjacent values
or numbers. So they operate like gap junctions bidirectionally. Dependent
on the number of values (n) that constitute a permutation system, the
operation of different negation operators (n 1) is necessary for the
generation of a negative language. With concern to gap junctions, they
also show functional differences basically influenced by the connexins.
Therefore, different types of gap junctions could embody different types
of negation operators. Furthermore, a permutation system, like the glial syncytium, represents a closed network generating a negative language.
So we have a biomimetic interpretation of the negative language.

8.7 Astrocytic syncytium as a system of reflection


The formalism of the negative language also allows the interpretation of
the astrocytic syncytium as a system of reflection. Guenther (1981–1983)
developed matrices consisting of the combinations of all computable
Hamilton loops in a quadrivalent (n = 4) permutation system. Hence, it
is appropriate to speak of Guenther matrices. Table 8.4 gives an example.
Guenther matrices differ from conventional matrices (e.g., from linear
algebra), in that their rows must be read in cycles (i.e., from the extreme
left to the extreme right, then again to the extreme left), and the length of
each row need not remain constant, since cycles of various lengths (from
a length of 2 to a length of n!) can be generated.
The Guenther matrix in Table 8.4 consists of 24 Hamilton loops that
are arranged one below the other and to be read in rows. The permu-
tation where the counting starts is stepwise displaced from the extreme
Table 8.4 Guenther matrix consisting of 24 Hamilton loops.

permutations 1 1 1 1 1 1 2 2 2 2 2 2 3 3 3 3 3 3 4 4 4 4 4 4
2 2 3 3 4 4 1 1 3 3 4 4 1 1 2 2 4 4 1 1 2 2 3 3
3 4 2 4 2 3 3 4 1 4 1 3 2 4 1 4 1 2 2 3 1 3 1 2
4 3 4 2 3 2 4 3 4 1 3 1 4 2 4 1 2 1 3 2 3 1 2 1
number of the permutation 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Hamilton loop 1 1 8 24 9 17 16 2 7 23 10 18 15 3 6 22 11 19 14 4 5 21 12 20 13
Hamilton loop 2 24 1 17 8 16 9 23 2 18 7 15 10 22 3 19 6 14 11 21 4 20 5 13 12
Hamilton loop 3 24 17 1 16 8 9 23 18 2 15 7 10 22 19 3 14 6 11 21 20 4 13 5 12
Hamilton loop 4 17 24 16 1 9 5 18 22 15 2 10 7 19 22 14 3 11 6 20 21 13 4 12 5
Hamilton loop 5 17 16 24 9 1 9 18 15 23 10 2 7 19 14 22 11 3 6 20 13 21 12 4 5
Hamilton loop 6 16 17 9 24 8 1 15 18 10 23 7 2 14 19 11 22 6 3 13 20 12 21 5 4
Hamilton loop 7 24 7 23 8 16 15 1 6 22 9 17 14 2 5 21 10 18 13 3 4 20 11 19 12
Hamilton loop 8 23 24 16 7 15 8 22 1 17 6 14 9 21 2 18 5 13 10 20 3 19 4 12 11
Hamilton loop 9 23 16 24 15 7 8 22 17 1 14 6 9 21 18 2 13 5 10 20 19 3 12 4 11
Hamilton loop 10 16 23 15 24 8 7 17 22 14 1 9 6 18 21 13 2 10 5 19 20 12 3 11 4
Hamilton loop 11 16 15 23 8 24 7 17 14 22 9 1 6 18 13 21 10 2 5 19 12 20 11 3 4
Hamilton loop 12 15 16 8 23 7 24 14 19 9 22 6 1 13 18 10 21 5 2 12 19 11 20 4 3
Hamilton loop 13 23 22 6 15 7 14 24 21 5 16 8 13 1 20 4 17 9 12 2 19 3 18 10 11
Hamilton loop 14 22 23 15 6 14 7 21 24 16 5 13 8 20 1 17 4 12 9 19 2 18 9 11 10
Hamilton loop 15 22 15 23 14 6 7 21 16 24 13 5 8 20 17 1 12 4 9 19 18 2 11 3 10
Hamilton loop 16 15 22 14 23 7 6 16 21 13 24 8 5 17 20 12 1 9 4 18 19 11 2 10 3
Hamilton loop 17 15 14 22 7 23 6 16 13 21 8 24 5 17 12 20 9 1 4 18 11 19 10 2 3
Hamilton loop 18 14 15 7 22 6 23 13 16 8 21 5 24 12 17 9 20 4 1 11 18 10 19 3 2
Hamilton loop 19 22 5 21 6 14 13 23 4 20 7 15 12 24 3 19 8 16 11 1 2 18 9 17 10
Hamilton loop 20 5 22 6 21 13 14 4 23 7 20 12 15 3 24 8 19 11 16 2 1 9 18 10 17
Hamilton loop 21 5 6 22 13 21 14 4 7 23 12 20 15 3 8 24 11 19 16 2 9 1 10 18 17
Hamilton loop 22 14 21 13 22 6 5 15 20 12 23 7 4 16 19 11 24 8 3 17 18 10 1 9 2
Hamilton loop 23 14 13 21 6 22 5 15 12 20 7 23 4 16 11 19 8 24 3 17 10 18 9 1 2
Hamilton loop 24 13 14 6 21 5 22 12 15 7 20 4 23 11 16 8 19 3 24 10 17 9 18 2 1

The permutation where the counting starts is stepwise displaced from the extreme left to the extreme right. However, one can start on every
permutation. The matrix shows 24 Hamilton loops.
Fig. 8.6 Negations operate on a cyclic proemial relationship (N1: 1 → 2, 2 → 1; N2: 2 → 3, 3 → 2; N3: 3 → 4, 4 → 3).

left to the extreme right. However, one can start on every permutation.
Biologically speaking, a Guenther matrix formalizes a combinatorics of
cyclic pathways generated in an astrocytic syncytium. The length of a
cycle determines the expansion of an astrocytic domain.
If each astrocyte forms a domain as a glial-neuronal unit (Hub), the
interactions of astrocytes via gap junctions can produce larger domains
(Master Hubs). A Hamilton loop generates a specific cyclic pathway
through the gap junctions of two or more astrocytes dependent on the
valuedness of the permutation system. Note, in a quadrivalent permuta-
tion system (shown in Table 8.4), the Hamilton loops define a complex
combinatorics within a Master Hub. This may be in accordance with
the concept of a dynamic compartmentalization in the glial syncytium
(Dermietzel 1998).
Generally, a loop, or a cycle, can be interpreted as an elementary
reflection system. Importantly, in the generation of a Hamilton loop
based on negative language, a special kind of reflection is hidden, akin to the proemial relations already described in GNUs. On closer inspection, negations operate on a cyclic proemial relationship, as illustrated in Fig. 8.6. In each negation (N1, N2, N3) first the lower value dominates the higher one (→); then the relation reverses (↔), since the higher value now dominates the lower one (←).
cyclic proemial relationship. If we assume that the proemial relationship
may underlie all consciousness-generating processes in the brain based on
intersubjective reflection, the astrocytic syncytia can also be interpreted
as elementary consciousness-generating systems.
Mathematically, the Hamilton loop problem is a problem of type NP,
which stands for non-deterministic polynomial time. Cook (1971) was
able to provide a means of demonstrating that certain NP-type problems
are highly unlikely to be solvable by an efficient, polynomial time algo-
rithm. Moreover, Cook (1971) proved that if a particular NP-type problem (the satisfiability problem) can be solved by a polynomial time algorithm, every other problem of type NP can also be solved. Since the number of Hamilton loops is not practically computable even in a pentavalent (n = 5) permutation system (Thomas and Mitterauer 1989), our brain may be endowed with an unimaginable
reflection potency. Of course, the neuronal networks also play a basic
Table 8.5 Hamilton loop generated by a sequence of negation operators (N1, N2, N1, . . . , N2, N1, N3) that accept (A) and reject (R) values of the permutations (P).

N   1  2  1  2  1  3  1  2  1  2  1  3  1  2  3  2  1  2  1  2  3  2  1  3
A  12 23 12 23 12 34 12 23 12 23 12 34 12 23 34 23 12 23 12 23 34 23 12 34
A  21 32 21 32 21 43 21 32 21 32 21 43 21 32 43 32 21 32 21 32 43 32 21 43
R  33 11 33 11 33 11 33 11 33 11 33 11 33 11 11 11 33 11 33 11 11 11 33 11
R  44 44 44 44 44 22 44 44 44 44 44 22 44 44 22 44 44 44 44 44 22 44 44 22

A sequence of negation operators (N) generates a Hamilton loop. In a quadrivalent permutation system (P), each type of negation (N1, N2, N3) can only accept (A) two values and must reject (R) the other two. Each step is shown as a pair of digits: the two accepted values exchange their places (A rows), while the two rejected values remain unchanged (R rows). The line between the A rows and the R rows separates the relevant range of values from the irrelevant ones.

role in the generation of consciousness on various levels, but this is not the topic of the present study.

8.8 Intentional reflection by activation and non-activation of astrocytic connexins
Table 8.5 shows a Hamilton loop in a quadrivalent (n = 4) permutation system (P) generated by a sequence of negation operators (N1, N2, N1, . . . , N2, N1, N3). The permutation columns are separated by a line indicating that only two values are exchanged at each step, dependent on the negation operator applied. Logically speaking, a negation operator can only accept (A) two
different values for negation and must reject (R) the other two values of
a permutation. Here, we deal with the interplay between acceptance and
rejection, typical for living systems. The capacity of acceptance means
realization of an intentional program by rejection of the non-intended
information. One can also say that the capacity of rejection is an index
of subjectivity (Guenther 1962).
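This interplay can be read off mechanically from the negation sequence alone. The following minimal Python sketch — my own illustration, reusing the value-exchange reading of the negation operators from Section 8.6.3 — lists, for each step, the accepted pair {i, i + 1} and the rejected complementary pair, thereby reproducing the A and R rows of Table 8.5:

# In a quadrivalent system, negation N_i accepts the two values {i, i + 1}
# (which it exchanges) and must reject the remaining two values.
def accept_reject(i, values=(1, 2, 3, 4)):
    accepted = {i, i + 1}
    rejected = set(values) - accepted
    return sorted(accepted), sorted(rejected)

# The negation sequence of Table 8.5:
sequence = [1, 2, 1, 2, 1, 3, 1, 2, 1, 2, 1, 3,
            1, 2, 3, 2, 1, 2, 1, 2, 3, 2, 1, 3]

for step, i in enumerate(sequence, start=1):
    a, r = accept_reject(i)
    print(f"step {step:2d}: N{i} accepts {a}, rejects {r}")
# e.g. step 1: N1 accepts [1, 2], rejects [3, 4] -- the first column block
# of Table 8.5.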
Assuming that the values 1–4 correspond to the four different types of connexins in the astrocytic syncytium, not all connexins are activated at a given moment. Therefore, the generation of reflection loops in the astrocytic syncytium may occur as an interplay between activated and non-activated connexins, such that a reflection loop generates at least two separated ontological realms. According to the combinatorics proposed in an astrocytic syncytium, cycles of various lengths are permanently generated and interpreted as intentional reflection mechanisms. However,
what about the non-activated or rejected parts of the syncytium? And,
which function is capable of integrating the activated and non-activated
systems of the whole brain? Since we are endowed with I-consciousness, a
holistic function must comprise not only relevant ontological realms, but
also seemingly irrelevant ontological realms at a given moment (Mitterauer 2010).
Here, the conception of the Master Hub is of special interest. The com-
position of the Master Hub changes at each moment, according to the
dynamics of brain activity. Depending on neuronal input and respective
astroglial responses as well as possible spontaneous intrinsic oscillations,
astroglial gap junction proteins may allow or block the transduction of
calcium waves from cell to cell. Connexins can be conceived of as gates.
Their opening and closing define different circuits that support different
computational processes. The Master Hub integrates patterns from local
neuronal assemblies to a brain-wide network, where it is broadcasted
and made accessible to other local assemblies (Pereira and Furlan 2010).
Despite parallels to my model, a significant difference must be mentioned. The Master Hub exerts a holistic function in the brain, which, in my view, glial-neuronal networks per se are not able to do.

8.9 Holistic act of self-reference and metaphysics


There are many promising approaches to describing and explaining brain network connectivity (Baars 2002; Chialvo 2006; Humphries et al. 2006;
Werner 2010). Moreover, since my theory of intentional programming in
glial networks separates ontological realms that are relevant from those
that are irrelevant at a given moment, we are faced with a new problem
of connectivity. Note that I-consciousness requires that the brain also
integrates irrelevant ontological realms. This function may involve an act
of self-reference. Importantly, the ontological situation of relevance and
irrelevance does not represent a dialectical opposition but a radical gap
that may be bridged by the act of self-reference.
From a cybernetic point of view, Maturana (1970) emphasizes that it is
its circularity that an organism must maintain in order to remain a living
system and to retain its identity through different interactions. Here, I
do not refer to the concept of autopoiesis but to self-reference acting
as a pure function, as formalized by Varela (1974). How and where
this holistic function is generated in the brain remains a mystery and
the main point where experimental brain research ends. This is the edge
where brain philosophy is faced with metaphysical issues. For example,
is the act of self-reference an ontogenetic and/or evolutionary phenomenon
of subjective systems? Or, is it a timeless function at all? Since these issues
cannot be resolved with natural-philosophical methods, the philosophical
discussions may continue going round in circles. This was already the
case when Protagoras debated with Socrates the teachability of virtue. There was no result until Protagoras realized that the opponents had merely changed their positions repeatedly (Plato 1903).
Admittedly, my mainly theoretical interpretation of synaptic information processing as a basic mechanism of intersubjective reflection is not really testable experimentally. However, it can demonstrate the limits of
experimental brain research not only concerning consciousness research,
but also in regard to the explanation of the individuality of subjective sys-
tems such as animals and humans. As I have repeatedly argued, robotics may represent an alternative approach (Mitterauer 2000). If we are able to implement principles of subjectivity in a robot brain based
on biomimetic structures and functions, we can learn from its behavior
where we are right and where we are wrong or where we are confronted
with metaphysical limits of our scientific investigations. As a modest step
in this direction, we have proposed a biomathematical model of inten-
tional autonomous multiagent systems (Pfalzgraf and Mitterauer 2005).
It represents a formal interpretation of the components and interactions
in a glial-neuronal synapse, especially considering the role of intentions
in the design of conscious robots.

8.10 Concluding remarks


The present study deals with elementary reflection mechanisms in the
brain focusing on glial-neuronal synaptic units, astrocyte domain orga-
nization and the astrocytic syncytium. These reflection mechanisms
are formally based on a novel relationship, called proemial. The brain
may be composed of many ontological realms consisting of myriads of
glial-neuronal synaptic units with their volitive-intentional and cognitive-
perceptive networks embodying many subjective realities based on Ego-
Thou reflections. Basically, these structures and functions of our brain
enable it to prelude or reflect all possible interactions with the environ-
ment. Here, we may deal with reflection mechanisms that determine
consciousness, but do not reach awareness.
Moreover, the brain may embody a hierarchy of layers in which an
observer-observed relationship exists between layers of the hierarchy
(Baer, this volume, Chapter 4). Although the astrocytic domain organization is formally based on a tree in the sense of hierarchical layers, further neurobiological elaboration is necessary.

REFERENCES
Aquinas T. St. (1988). In Martin C. (ed.) The Philosophy of Thomas Aquinas: Introductory Readings. New York: Routledge, pp. 38–49.
Araque A., Parpura V., Sanzgiri R. P., and Haydon P. G. (1999). Tripartite synapses: Glia, the unacknowledged partner. Trends Neurosci 22:208–215.
Atmanspacher H. and Rotter S. (2008). Interpreting neurodynamics: Concepts and facts. Cogn Neurodyn 2(4):297–318.
Auld D. S. and Robitaille R. (2003). Glial cells and neurotransmission: an inclusive view of synaptic function. Neuron 40:389–400.
Baars B. J. (2002). The conscious access hypothesis. Trends Cogn Sci 6:47–52.
Bennett M. R. and Hacker P. M. S. (2003). Philosophical Foundations of Neuroscience. Malden, MA: Blackwell.
Brentano F. (1995). Psychology from an Empirical Standpoint. London: Routledge.
Buber M. (1970). I and Thou. New York: Free Press.
Chialvo D. R. (2006). The Brain Near the Edge. 9th Granada Seminar on Computational Physics, Granada, Spain. URL: http://arxiv.org/pdf/q-bio/0610041.pdf (accessed February 28, 2013).
Churchland P. (2007). Neurophilosophy at Work. Cambridge University Press.
Cook S. A. (1971). The complexity of theorem-proving procedures. In Proceedings of Third Annual ACM Symposium on Theory of Computing. New York: ACM, pp. 151–158.
Cooper M. S. (1995). Intercellular signalling in neuronal-glial networks. BioSystems 34:65–85.
Dennett D. (1978). The Intentional Stance. Cambridge, MA: Little, Brown.
Dermietzel R. (1998). Diversification of gap junction proteins (connexins) in the central nervous system and the concept of functional compartments. Cell Biol Int 22:719–30.
Dermietzel R. and Spray D. C. (1998). From neuroglue to glia: a prologue. Glia 24:1–7.
Gaietta G., Deerinck T. J., Adams S. R., Bouwer J., Tour O., Laird D. W., et al. (2002). Multicolour and electron microscopic imaging of connexin trafficking. Science 296:503–507.
Giaume C. and Theis M. (2009). Pharmacological and genetic approaches to study connexin-mediated channels in glial cells of the central nervous system. Brain Res Rev 63(1–2):160–176.
Gourine A. V., Kasymov V., Marina N., Tang F., Figueiredo M. F., Lane S., et al. (2010). Astrocytes control breathing through pH-dependent release of ATP. Science 329:571–575.
Guenther G. (1962). Cybernetic ontology and transjunctional operations. In Yovits M. C., Jacobi G. T., and Goldstein G. D. (eds.) Self-Organizing Systems. Washington DC: Spartan Books, pp. 313–392.
Guenther G. (1963). Das Bewußtsein der Maschinen. Baden-Baden: Agis-Verlag.
Guenther G. (1966). Superadditivity. Biological Computer Laboratory 3,3. Urbana, IL: University of Illinois.
Guenther G. (1976). Beiträge zur Grundlegung einer operationsfähigen Dialektik, Vol. 1. Hamburg: Meiner.
Guenther G. (1980). Martin Heidegger und die Weltgeschichte des Nichts. In Guenther G. (ed.) Beiträge zur Grundlegung einer operationsfähigen Dialektik. Hamburg: Meiner.
Guenther G. (1981–83). Unpublished Work. Salzburg: Gotthard Guenther Archives.
Haber M., Zhou L., and Murai K. K. (2006). Cooperative astrocyte and dendritic spine dynamics at hippocampal excitatory synapses. J Neurosci 26:8887–8891.
Halassa M. M., Fellin T., and Haydon P. G. (2009). Tripartite synapses: roles for astrocytic purines in the control of synaptic physiology and behavior. Neuropharm 57:343–346.
Haydon P. G. and Carmignoto G. (2006). Astrocyte control of synaptic transmission and neurovascular coupling. Physiol Rev 86:1009–1031.
Hebb D. O. (1949). The Organization of Behaviour. New York: John Wiley & Sons, Inc.
Hegel G. W. F. (1965). System der Philosophie. Dritter Teil. Die Philosophie des Geistes, Vol. 10. Stuttgart: Glockner.
Hirrlinger J., Hülsmann S., and Kirchhoff F. (2004). Astroglial processes show spontaneous motility at active synaptic terminals in situ. Eur J Neurosci 20:2235–2239.
Humphries M. D., Gurney K., and Prescott T. J. (2006). The brain stem reticular formation is a small-world, not scale-free network. Proc Roy Soc B 360:1093–1108.
Husserl E. (1960). Cartesian Meditations. Trans. Cairns D. The Hague: Martinus Nijhoff.
Husserl E. (2005). Phantasy, Image, Consciousness, Memory (1898–1925). Collected works, Vol. 11, 48. Dordrecht: Springer.
Kaehr R. (1978). Materialien zur Formalisierung der dialektischen Logik und der Morphogrammatik. In Guenther G. (ed.) Idee und Grundriss einer nicht-Aristotelischen Logik. Hamburg: Meiner, pp. 1–117.
Kant I. (1976). Kritik der reinen Vernunft. Hamburg: Meiner.
Kelso J. A. S. (2000). Fluctuations in the coordination dynamics of brain and behavior. In Arhem P., Blomberg C., and Liljenstroem H. (eds.) Disorder versus Order in Brain Function. Singapore: World Scientific Publishing, pp. 185–203.
Kettenmann H. and Steinhäuser C. (2005). Receptors for neurotransmitters and hormones. In Kettenmann H. and Ransom B. R. (eds.) Neuroglia. Oxford University Press, pp. 131–145.
Leibniz G. (1965). Monadology and Other Philosophical Essays. Trans. Schrecker P. and Schrecker A. M. New York: Macmillan.
Maturana H. R. (1970). Biology of cognition. Biological Computer Laboratory 9. Urbana: University of Illinois.
McCarthy K. D. and Salm A. K. (1991). Pharmacologically-distinct subsets of astroglia can be identified by their calcium response to neuroligands. Neurosci 2/3:325–33.
Mitterauer B. J. (1988). Computer System for Simulating Reticular Formation Operation. United States Patent 4,783,741.
Mitterauer B. J. (1998). An interdisciplinary approach towards a theory of consciousness. BioSystems 45:99–121.
Mitterauer B. J. (2000). Some principles for conscious robots. Journal of Intelligent Systems 10(1):27–56.
Mitterauer B. J. (2007). Where and how could intentional programs be generated in the brain? A hypothetical model based on glial-neuronal interactions. BioSystems 88:101–12.
Mitterauer B. J. (2010). Many realities: Outline of a brain philosophy based on glial-neuronal interactions. Journal of Intelligent Systems 19(4):337–62.
Mitterauer B. J. (2011). The gliocentric hypothesis of the pathophysiology of the sudden infant death syndrome (SIDS). Med Hyp 76(4):482–485.
Mitterauer B., Garvin A. M., and Dirnhofer R. (2000). The sudden infant death syndrome: A neuromolecular hypothesis. Neuroscientist 6:154–158.
Nagy J. I., Dudek F. E., and Rash J. E. (2004). Update on connexins and gap junctions in neurons and glia in the mammalian nervous system. Brain Res Rev 47:191–215.
Newman E. A. (2005). Glia and synaptic transmission. In Kettenmann H. and Ransom B. R. (eds.) Neuroglia. Oxford University Press, pp. 355–366.
Newman E. A. and Zahs K. R. (1997). Calcium waves in retinal glial cells. Science 275:844–846.
Oberheim N. A., Wang X., Goldman S., and Nedergaard M. (2006). Astrocytic complexity distinguishes the human brain. Trends Neurosci 29:547–553.
Parri H. R., Gould T. M., and Crunelli V. (2001). Spontaneous astrocytic Ca2+ oscillations in situ drive NMDAR-mediated neuronal excitation. Nat Neurosci 4:803–812.
Perea G. and Araque A. (2005). Glial calcium signalling and neuron-glia communication. Cell Calcium 38:375–382.
Pereira A. and Furlan F. A. (2010). Astrocytes and human cognition: modeling information integration and modulation of neuronal activity. Progr Neurobiol 92:405–420.
Pfalzgraf J. and Mitterauer B. (2005). Towards a biomathematical model of intentional autonomous multiagent systems. Lect Notes Comput Sci 3643:577–583.
Plato (1903). Platonis Opera. Burnet J. (ed.). Oxford University Press.
Quine W. (1953). From a Logical Point of View. Cambridge, MA: Harvard University Press.
Ransom B. R. and Ye Z. (2005). Gap junctions and hemichannels. In Kettenmann H. and Ransom B. R. (eds.) Neuroglia. Oxford University Press, pp. 177–189.
Robertson J. M. (2002). The astrocentric hypothesis: Proposed role of astrocytes in consciousness and memory function. J Phys 96:251–255.
Rouach N., Koulakoff A., and Giaume C. (2004). Neurons set the tone of gap junctional communication in astrocytic networks. Neurochem Int 45:265–272.
Runes D. D. (1959). Dictionary of Philosophy. Ames: Littlefield, Adams.
Santello M. and Volterra A. (2010). Neuroscience: Astrocytes as aide-mémoires. Nature 463(7278):169–170.
Searle J. R. (2004). Mind: A Brief Introduction. Oxford University Press.
Stawarska B. (2009). Between You and I: Dialogical Phenomenology. Athens: Ohio University Press.
Stellwagen D. and Malenka R. C. (2006). Synaptic scaling mediated by glial TNF-alpha. Nature 440:1054–1059.
Thomas G. G. (1982). On permutographs. Supplemento ai Rendiconti del Circolo Matematico di Palermo, Serie II (2):275–286.
Thomas G. G. (1985). Introduction to Kenogrammatics. Rendiconti del Circolo Matematico di Palermo 11:113–123.
Thomas G. G. and Mitterauer B. (1989). Computer for Simulating Complex Processes. United States Patent 4,829,451.
Varela F. J. (1974). A calculus of self-reference. Int J Gen Syst 2:5–24.
Vimal R. L. P. (2009). Meanings attributed to the term "consciousness": An overview. J Consc Stud 16:9–27.
Werner G. (2004). Siren call of metaphor: Subverting the proper task of system neuroscience. J Integr Neurosci 3(3):245–252.
Werner G. (2010). Fractals in the nervous system: Conceptual implications for theoretical neuroscience. Front Physiol 1:15.
Zoidl G. and Dermietzel R. (2002). On the search for the electrical synapse: A glimpse at the future. Cell Tiss Res 310:137–142.
9 A cognitive model of language and
conscious processes

Leonid Perlovsky

9.1 Introduction 265


9.2 Consciousness and the unconscious in perception and cognition 267
9.2.1 Closed eyes and brain imaging experiments 267
9.2.2 Dynamic logic and the role of mathematics in the scientific
method 269
9.3 Hierarchy of cognition 271
9.4 Language and cognition 273
9.4.1 The dual hierarchy 273
9.5 Consciousness and the unconscious in the hierarchy 274
9.6 Conscious and unconscious in thinking and conversations 276
9.7 Creativity 277
9.8 Free will versus scientific determinism 278
9.8.1 Reductionism and logic 279
9.8.2 Recent cognitive theories reject reducibility 280
9.8.3 What is free will in the hierarchy of the mind? 280
9.9 Higher cognitive functions: Interaction of conscious and unconscious
mechanisms 283
9.9.1 Self 283
9.9.2 Beautiful and sublime 284
9.9.3 Emotions in language prosody 287
9.9.4 Emotions of cognitive dissonances 289
9.9.5 Musical emotions 291
9.9.6 Emotional consciousness 292
9.10 Future experimental and theoretical research 293

9.1 Introduction
Cognitive modeling of consciousness assumes that conscious process-
ing enables attention to mental processes (Baars 1988; Perlovsky 2006a,
2011). During evolution, it is possible that this ability became adaptive
when the increasing complexity of mental life and its corresponding brain
functions offered choices beyond mere instinctual drives. The evolution
of consciousness consisted of a differentiation of the psyche into vari-
ous aspects. With the emergence of language, human cultural evolution
overtook genetic evolution. Using language, humans have created a large
number of mental representations which are available to consciousness.


How does language interact with cognition? What are functions of con-
scious and unconscious processes in everyday conversations, thinking
processes, and creativity?
Humans subjectively feel conscious most of the time, but most men-
tal operations are unconscious. In this chapter, I consider functions of
conceptual and emotional mechanisms; free will and how it could be
reconciled with scientific determinism; scientific understanding of self,
aesthetic emotions in language and cognition; beautiful, sublime, musical
emotions, their cognitive functions, and cultural evolution.
In simple organisms, only minimal adaptation is required. An instinct
directly wired to action is sufficient for survival, and unconscious pro-
cesses can efficiently allocate resources and will. However, in complex
organisms, various instincts might contradict one another. Undifferenti-
ated unconscious mental functions result in ambivalence and ambiten-
dency; every position entails its own negation, leading to an inhibition.
This inhibition cannot be resolved by unconscious processes that do
not differentiate among alternatives (see Godwin et al., this volume,
Chapter 2). The ability for conscious processing is needed to resolve
these instinctual contradictions, by suppressing some processes and allo-
cating power to others. By differentiating alternatives, consciousness can
direct a psychological function to a goal.
This chapter emphasizes that consciousness is not a single word with a capital "C" but a differentiated phenomenon. It appears in the evolution
of life, initially with simple contents, and gradually differentiates. Its
contents become diverse and complex. The biological evolution from
animals to humans and the cultural evolution of humans consist mostly
of a differentiation of the contents of consciousness.
In the pre-human world, the differentiation of psyche and increase of
consciousness (differentiation of its contents) was a slow process. One
reason is possibly a fundamental ambivalence of the value of conscious-
ness for organisms. Whereas consciousness requires differentiation of
mental functions, survival demands unification; all mental mechanisms
of an organism must be coordinated with instinctual drives and among
themselves. Evolutionary increase in differentiation and consciousness
is advantageous only if it is paralleled with unified functioning; in other
words, differentiation must be combined with unity.
This interplay between differentiation and unification suggests that the
evolution of consciousness must be a slow genetic process coordinating
differentiation and unification. Indeed, mental states of animals seem to
be unified. Animals are capable of complex "dishonest" behavior (e.g.,
when distracting a predator from a nest), but this behavior has devel-
oped with evolution; an individual animal is not making a conscious
decision and (as far as existing data suggest) does not face paralyzing
contradictions. The emergence of language and human culture tremendously sped up the differentiation of the human psyche and increased the contents available to consciousness.
We are not conscious of most of the functioning in the organism. Blood
flow, breathing, and the workings of the heart and stomach are uncon-
scious as long as they work appropriately. The same is true about most
processes in the brain and mind. We are not conscious about individ-
ual neural firings, most retinal signals, and so on. We become conscious
about differentiated concepts. As mentioned previously, consciousness is
useful when alternatives are differentiated, and crisp thoughts are more
accessible to consciousness. In mental functioning, evolutionary direc-
tions and personal goals increase the range of conscious processing, but
this effect is largely unconscious, because direct knowledge of oneself is
limited (this is also discussed by Vimal, this volume, Chapter 5).
This limit creates difficulties for the study of consciousness. For a long
time, it has seemed obvious that consciousness completely pervades our
entire mental life, or at least its main aspects. Now, we know that this idea
is wrong, and the main reason for this misconception has been analyzed
and understood: the mind is conscious only about a small part of its
actions, and it is extremely difficult to notice anything else (compare
the discussions in Vimal, this volume, Chapter 5). Thus, to understand
consciousness it is necessary to consider its cognitive mechanisms, and
to differentiate conscious from unconscious processes.
Although this chapter does not require any knowledge of mathematics, it refers to a mathematical model of the mind which is based on a few basic principles. This model explains a vast amount of known data and makes predictions, some of which have been experimentally confirmed.
The chapter also describes the contemporary understanding of the mind,
its conscious and unconscious mechanisms, existing experimental confir-
mations of predictions of the theory, as well as predictions of conscious
and unconscious mechanisms that will be tested in the future. It also
discusses why many aspects of consciousness have seemed mysterious,
how we can understand them today, and what would forever remain
mysterious about consciousness.

9.2 Consciousness and the unconscious in perception and cognition

9.2.1 Closed eyes and brain imaging experiments


Fundamental mechanisms of perception and cognition include men-
tal representations (memories) of objects, concepts, and ideas, forming
an approximate hierarchy from sensory and motor percepts, perceptual
features, to objects, situations, abstract concepts . . . Perception and cognition consist of matching lower-level neural signals to higher-level ones.
Vimal (this volume, Chapter 5) refers to internal and external signals
rather than to higher- and lower-level ones. In visual perception, reti-
nal signals are matched to neural representations of objects; in higher
cognitions, lower-level recognized perceptions are unified into more gen-
eral, abstract concepts by matching to higher-level representations. This
process of matching bottom-up (BU) and top-down (TD) signals is
a fundamental mental mechanism (Grossberg 1988; Perlovsky 2001,
2006a).
Which parts of perception and cognition mechanisms are accessible to
consciousness? Close your eyes and imagine an object in front of you.
The imagined object is vague and fuzzy, not as crisp as a perception with open eyes, and, like any less-differentiated experience, it is less conscious. Visual imagination is a TD projection of representations (mental
models in memory) onto the visual cortex. Therefore, this simple exper-
iment demonstrates that mental representations are vague and less con-
scious than perceptions. This experiment became simple after years of studying neural mechanisms of perception; 30 or more years ago scientists did not notice the fundamental vagueness and unconsciousness (or lesser degree of consciousness) of imaginations. When you open your
eyes, these vague and less conscious representations interact with BU
signals projected onto the visual cortex from the retinas. In these interac-
tions, vague models turn into crisp and conscious perceptions (Perlovsky
2009b). Note that with open eyes it is virtually impossible to recollect the vague images perceived with closed eyes; during usual perception with open eyes, we are unconscious of vague representations and of perception processes from vague to crisp.
Recent brain imaging experiments measured many details of this pro-
cess (Bar et al. 2006). They identified the involved brain areas and
determined the timing of their activation. They demonstrated that the
imagined perceptions generated by the TD signals are vague, similar to
the close-open-eye experiment. The process from vague to crisp was
unconscious during its initial part. Conscious perception of an object
occurs when vague projections become crisp and match a crisp image
from the retina. The total perception process takes about 160 ms and
involves thousands of neurons. More than 99 percent of this process is
inaccessible to consciousness. Let us emphasize this fact: most mental operations are not accessible to subjective consciousness. Consciousness
jumps among tiny islands of conscious states in the ocean of the uncon-
scious, yet subjectively we feel as if we smoothly and continuously glide
among conscious states.

9.2.2 Dynamic logic and the role of mathematics in the scientific method
Barsalou (1999) emphasized that representations are distributed in the
brain (e.g., color is stored in a different part of the brain than shape).
During a concrete perception or cognition, the concept-representation
required to match an object or event in the world is reassembled, or a
similar experience is recreated from memory. Barsalou (1999) called these processes "simulators."
A mathematical theory of these simulator processes, modeling the emergence of concrete and conscious representations from vague, distributed, and unconscious ones, has been developed in Perlovsky (1987, 2001, 2006a,b, 2007c, 2009b, 2010b), in Ilin and Perlovsky (2010), and
Perlovsky and Ilin (2010a).
Mathematical models are essential for science. Even so, mathemat-
ics by itself does not prove anything about the world either outside or
inside the mind. The power of mathematics and its importance for sci-
ence comes from its use in the scientific method. The scientific method
began with Newton's mathematical models of planetary motions. These models described the known motions of planets and predicted unknown phenomena, such as the existence and orbit of Neptune. Theoretical predic-
tions of unknown phenomena and confirmations of these predictions by
experimental observations constitute the essence of the scientific method.
Mathematics by itself does not explain nature; the most fundamental
essence of science is scientific intuitions about how the world works.
Whereas it is possible to have many different intuitions about complex
phenomena, mathematics leads to unambiguous predictions that could
be experimentally verified, thus proving or disproving the theory. Intuitions about the world, together with mathematical methods that describe these intuitions and explain a vast amount of available knowledge from a few first principles, are rare events signaling the coming of a new theory. Mathematically explaining vast knowledge from a few basic principles is what Einstein, Poincaré, Dirac, and other scientists called the "beauty" of a scientific theory, the first proof of its validity (Dirac 1982; Einstein [see McAllister 1999]; Poincaré 2001). The final proofs of a scientific theory
are experimental confirmations of its mathematical predictions.
A mathematical theory predicting the theoretical and experimental results of Barsalou (1999) and Bar et al. (2006) has been developed in Perlovsky (1987, 2001, 2006a). This model (or theory) is called dynamic logic, and its fundamental property is the emergence of concrete and conscious representations from vague, distributed, and unconscious ones. The reason for its name, dynamic logic, is that unlike classical logic, which describes static states (e.g., "this is a chair"), dynamic logic describes
dynamic processes from vague to crisp, from unconscious to conscious.
Dynamic logic is different from classical logic, yet it is related to classical
logic and to other types of logic (Kovalerchuk et al. 2012; Vityaev et al.
2011). By using models of cognitive mechanisms, dynamic logic over-
came decades of difficulties in artificial intelligence related to computa-
tional complexity of algorithms (Perlovsky et al. 1995; Perlovsky 1994a,b,
1998, 2001, 2007a; Deming and Perlovsky 2007; Perlovsky and Deming
2007; Mayorga and Perlovsky 2008; Kozma et al. 2009). Matching BU
and TD signals in perception and cognition processes requires associat-
ing every BU signal with the corresponding TD signal. Since the 1960s,
mathematical models of this association have required sorting through
various combinations of signals for selecting the most appropriate ones.
This has led to combinatorial (exponential) complexity of computations, which far exceeded the number of all elementary processes in the Universe. Dynamic logic eliminated the need for considering combinations and made it possible to model mathematically fundamental processes in the mind.
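How soft association removes the combinatorial search can be shown in a few lines. The sketch below is a minimal EM-style illustration in the spirit of the vague-to-crisp process, not Perlovsky's published equations: every BU datum is fractionally associated with every TD model (cost linear in data times models, with no enumeration of combinations), and an annealed variance plays the role of initial vagueness. The data, constants, and annealing schedule are invented for the example.

```python
# Sketch of a dynamic-logic-like process (illustrative only): two vague
# top-down models compete for bottom-up data via soft association
# weights, so no combinations of signal-model pairings are enumerated.
# Vagueness (sigma) starts high and is gradually reduced.

import math, random

random.seed(0)
data = [random.gauss(-2.0, 0.3) for _ in range(50)] + \
       [random.gauss(3.0, 0.3) for _ in range(50)]    # BU signals
models = [-0.5, 0.5]        # two vague initial TD models
sigma = 4.0                 # large sigma = initial vagueness

for step in range(30):
    # soft association: every datum relates fractionally to every model,
    # so the cost is (data x models), not a search over combinations
    weights = []
    for x in data:
        likelihoods = [math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for m in models]
        z = sum(likelihoods)
        weights.append([lk / z for lk in likelihoods])
    # move each model toward the data it currently accounts for
    for k in range(len(models)):
        wk = [w[k] for w in weights]
        models[k] = sum(w * x for w, x in zip(wk, data)) / sum(wk)
    sigma = max(0.3, sigma * 0.85)   # anneal vagueness: vague -> crisp

print(models)   # ends up near -2.0 and 3.0: crisp, reportable states
```

The design point is that the association weights stay nearly uniform while sigma is large (everything is vague), and sharpen into near-logical, all-or-nothing assignments only as sigma shrinks, mirroring the process from unconscious vagueness to conscious crispness described in the text.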
Dynamic logic models the most fundamental human instinctual drive,
the knowledge instinct (KI). KI drives our minds to match BU and
TD signals. Without this match no perception or cognition is possible; nothing would become conscious, and no other instinctual need could be satisfied. KI is a foundation of all human higher abilities, and I return to
this topic throughout the chapter.
An example of a dynamic logic process during recognition-perception is illustrated in Fig. 9.1 (this example is described in more detail in Perlovsky 2010b). DL is looking for "smile" and "frown" patterns embedded in a noise background. Figure 9.1a shows the data without noise, whereas Fig. 9.1b shows the data with noise, that is, as it is actually
measured. Figure 9.1c through 9.1h illustrate the dynamic logic process
from vague to crisp. The experimental work of Bar et al. (2006) has
proved that a similar process occurs in the visual cortex during percep-
tion. Most of this process is not conscious. Only the initial state (b) and
final state (h) can be consciously perceived.
Three types of neural mechanisms have been identified in various brain
processes as tentatively responsible for the dynamic-logic process from
vague to crisp or from unconscious to conscious. First, vague represen-
tations could be similar to images containing only low-spatial frequency
information; high-frequency content increases during the recognition
process (Perlovsky 2001, 2006a; Bar et al. 2006). Second, vague activ-
ity representations could be due to desynchronized neural activity in
the involved brain areas; synchronization increases during the recogni-
tion process (Kveraga et al. 2011). Third, vague activity representations

Fig. 9.1 An example of DL perception of "smile" and "frown" objects in noise: (a) true "smile" and "frown" patterns are shown without clutter; (b) actual image available for recognition (signal is below noise, S/N ∼ 0.5); (c) an initial fuzzy blob-model, whose vagueness corresponds to uncertainty of knowledge; (d) through (h) show improved model-representations at various iteration stages (total of 22 iterations). The improvement over the previous state of the art is 7000 percent in S/N.

could be due to highly chaotic neural activity at the beginning of the process; transition to lower chaotic states occurs during the recognition process (Kozma and Freeman 2001). Any of these processes leads from unconscious to conscious representations.
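The first mechanism, representations that initially carry only low spatial frequencies, is easy to visualize with a one-dimensional toy. The sketch below simply low-pass filters a crisp pattern with a shrinking moving-average window; it illustrates the vague-to-crisp idea only and makes no claim about cortical implementation. All widths are arbitrary.

```python
# Illustration of mechanism (1): a representation stored with only low
# spatial frequencies (heavily smoothed) regains high-frequency content
# as recognition proceeds. Plain 1-D toy; widths are arbitrary.

def smooth(signal, width):
    """Moving-average low-pass filter; larger width = vaguer pattern."""
    if width <= 1:
        return signal[:]
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

edge = [0.0] * 20 + [1.0] * 20           # a crisp "object": a sharp edge
for width in (15, 9, 5, 1):              # vagueness decreases per stage
    vague = smooth(edge, width)
    steepness = max(abs(a - b) for a, b in zip(vague, vague[1:]))
    print(f"blur width {width:2d} -> sharpest transition {steepness:.2f}")
# the transition approaches 1.0 (fully crisp) as high frequencies return
```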

9.3 Hierarchy of cognition


The mind is organized hierarchically, from sensory and motor signals
to minute perceptions and actions, to representations of objects and
their manipulations, to situations and plans of actions, to more abstract
concepts, and finally higher up in the hierarchy to most general
representation-concepts. The hierarchy is approximate; not every rep-
resentation is strictly above or below every other one, but for brevity I
will refer to a hierarchy. Higher levels contain more general and abstract
representation-concepts, unifying many lower-level and more concrete
representations, as illustrated in Fig. 9.2. The previous Fig. 9.1 illus-
trated a dynamic logic process from lower levels to the level of objects.
A next level, for example a representation of a professor's office, unifies lower-level representations of a chair, desk, computer, and so on. Every next level is built on top of lower levels; consequently, higher-level representations are more removed from concrete perceptions of objects, are

[Figure 9.2: a single hierarchy labeled COGNITION, rising from sensory-motor signals through objects and situations to abstract ideas.]

Fig. 9.2 A hierarchy of cognition (simplified). At every level there are representation-concepts. Lower-level representations send up BU signals. Higher-level representations send down TD signals. At every level these signals are matched by a process modeled by dynamic logic.

vaguer and less conscious (less accessible to consciousness). This hypoth-


esis about vagueness and unconsciousness of higher-level representations
has been confirmed experimentally for a lower part of the hierarchy (Bar
et al. 2006; Kveraga et al. 2011; Yardley et al. 2011); confirming it
experimentally for a higher part of the hierarchy is a challenge for future
research. Another challenge is reconciling this idea with our everyday
subjective feelings of crisp and conscious understanding of reality which
I address in this chapter.
Representations at every level have evolved in biological and cultural
evolution with the purpose of unifying conscious representations recognized at a lower level. Similar to the professor's office creating a more general
and abstract idea than constituent objects, higher representations create
more abstract and general ideas by unifying those understood at lower
levels. The price paid for generality is an inevitable vagueness of con-
tents of high-level abstract representations. The higher in the hierarchy,
the vaguer and less conscious are conceptual representations. Represen-
tations at the top of the mental hierarchy unify the entire life experience. We perceive them as the "meaning" and "purpose" of life. The quotes here are used to emphasize the fact that these top representations are vague and unconscious. The next section considers why in subjective consciousness
we feel that we are conscious of the mind's contents, and still it is difficult to discuss the meaning and purpose of life. According to Kant
(1790), the emotions of the beautiful and the spiritually sublime are related to
these top representations. I return to this discussion later.

9.4 Language and cognition

9.4.1 The dual hierarchy


Cognition at lower levels of the mind's hierarchy, such as perception of sensory features and objects, does not require language. This is obvious because animals without human language can perceive objects. Learning cognitive and language representations will for short be called "cognitive" and "language" learning. In the following, I argue that cognitive
learning at higher levels is not possible without language. A neural reason
for this is the mathematical complexity of learning high-level represen-
tations. In previous sections, I discussed that dynamic logic overcame
the complexity of associating BU and TD signals. However, at higher
levels another fundamental difficulty remains. Let us consider learning
situations. For example, perceiving a situation, such as a symphony hall,
is possible by recognizing a scene with an orchestra and rows of chairs
with listeners. But in every symphony hall there are many objects unre-
lated to the symphony hall situation, such as detailed shapes of halls;
patterns on floors, walls, and ceilings; chair shapes, lights, and switches;
or scratches on walls. Humans easily learn to ignore irrelevant objects.
This problem of ignoring irrelevant objects is psychologically and
mathematically insolvable without language. When looking in any direc-
tion, we encounter hundreds of objects. Most of these objects are irrel-
evant for any purpose. In fact, looking in most directions we see only random irrelevant objects and no situations of any significance.
The number of possible combinations of objects is combinatorially large,
much larger than all elementary particle interactions in the entire history
of the Universe. Previously, I emphasized that this complexity caused
a difficulty for mathematical algorithms relating TD and BU signals,
and dynamic logic overcame this difficulty. Here I emphasize that there
is an even more fundamental problem in the origin of the higher-level
representations. How can one learn which combinations of objects are
just random noise to be ignored (majority) and which constitute situa-
tions worth learning? No amount of experience would be sufficient to
learn this. Learning higher-abstract concepts is even more difficult to
understand.
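The scale of the problem is easy to check with a short computation. The sketch below just counts candidate object combinations for a modest scene; the numbers (100 objects in view, situations of 10 objects) are invented for illustration.

```python
# Counting candidate "situations": subsets of the objects currently in
# view. Even a modest scene defeats any exhaustive search or learning
# scheme that must consider combinations of objects one by one.

from math import comb

objects_in_view = 100
situation_size = 10

print(comb(objects_in_view, situation_size))   # 17,310,309,456,440 (~1.7e13)
print(2 ** objects_in_view)                    # all subsets: ~1.27e30
# A lifetime offers perhaps 10**9 perceptual samples, so almost none of
# these combinations can ever be observed even once, let alone evaluated.
```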
According to Fontanari and Perlovsky (2004, 2007a, 2008), Perlovsky
(2006a, 2007b, 2009a, 2010b, in press), Tikhanoff et al. (2006), Fonta-
nari et al. (2009), and Perlovsky and Ilin (2010a,b), this fundamental
difficulty of learning higher concepts is overcome using language. Lan-
guage is a hierarchical structure similar to cognition, that is, phrases
made of words are similar to situations made of objects. But there is a
fundamental difference. Whereas cognition understands the world, language is only about language. Whereas learning to understand the world
(cognition) requires real-life experience, learning language requires only
experience with surrounding language. The entire hierarchy from words
for objects to words and phrases for abstract ideas exists ready-made
in a surrounding language. This is why children by the age of five can talk
about virtually everything that exists in the surrounding culture. How-
ever, a child cannot function like an adult. The reason is that cognition
and understanding of the world requires real-life experience. Conscious-
ness about language and consciousness about the world are very different
things. They are often mixed up in discussions about consciousness.
This creates difficulties for understanding consciousness; what could be
understood scientifically may seem mysterious, and real mysteries are
overlooked.
Interaction between language and cognition can be understood accord-
ing to the scheme in Fig. 9.3. In the mind there are two parallel hierar-
chies, language and cognition. Language representations and cognitive
representations are neurally connected. Language is learned at an early
age from surrounding language at all hierarchical levels, as illustrated on
the right side of Fig. 9.3. Learning cognitive situations requires guidance
from language. This learning meets more difficulties than learning lan-
guage, since cognitive situations do not exist in the world ready-made
but have to be discerned and evaluated as useful or useless from life expe-
rience. This is not possible because of the practical infinity of combinations of objects. Therefore, learning the hierarchy of cognitive models from
experience is not possible. It could only be done by combining experi-
ence with language representations. Cognitive situations are learned from
those aspects of experience, which correspond to language representa-
tions (which are learned ready-made, and accumulate millennial cultural
wisdom).
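A miniature example can show how language labels make this learning tractable. In the sketch below, each toy scene arrives with a ready-made label (the "surrounding language"); learning then reduces to counting which objects co-occur reliably with which label, instead of searching over object combinations. Scene contents, the clutter list, and the 90 percent threshold are invented for illustration.

```python
# Toy version of language-guided situation learning: each experienced
# scene comes with a ready-made label from the surrounding language.
# Objects that reliably co-occur with a label form the learned situation;
# incidental objects (scratches, switches...) wash out as noise.

from collections import Counter
import random

random.seed(1)
core = {"classroom": {"desk", "board", "teacher"},
        "kitchen": {"stove", "sink", "pot"}}
clutter = ["scratch", "switch", "dust", "cable", "stain", "crack"]

def observe(label):
    """A labeled scene: its core objects plus random irrelevant ones."""
    return core[label] | set(random.sample(clutter, 2))

counts = {lbl: Counter() for lbl in core}
for _ in range(200):
    lbl = random.choice(list(core))
    counts[lbl].update(observe(lbl))

for lbl, c in counts.items():
    n_scenes = max(c.values())            # core objects occur in every scene
    learned = {obj for obj, n in c.items() if n > 0.9 * n_scenes}
    print(lbl, "->", sorted(learned))     # recovers the core object sets
```

Without the labels, the same statistics would have to be maintained for every candidate combination of objects, which is exactly the combinatorial explosion computed above.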

9.5 Consciousness and the unconscious in the hierarchy


The dual hierarchy described sketchily and approximately earlier has
likely evolved on top of the neural mechanism of the mirror neuron
system (Rizzolatti 2005). The mirror neuron system in primates is located
in the same part of the brain where humans have neural mechanisms of
language (Rizzolatti and Arbib 1998; Arbib 2005). Neural connections
(big horizontal arrow in Fig. 9.3) connecting language and cognition
might have existed 25 million years ago, long before language.
[Figure 9.3: two parallel hierarchies, COGNITION and LANGUAGE, grounded in SURROUNDING LANGUAGE. Levels from bottom to top: sensory-motor signals / sensory-motor language models / language sounds; objects / words / words for objects; situations / phrases / phrases for situations; abstract ideas / abstract words and phrases / language descriptions of abstract thoughts.]

Fig. 9.3 The dual hierarchy of language and cognition. Language learning is grounded in surrounding language at all levels of the hierarchy. Learning of embodied cognitive models is grounded in direct experience of sensory-motor perceptions only at the lower levels. At higher levels, their learning from experience has to be guided by contents of language models. This connection of language and cognition is motivated by KI and the corresponding aesthetic emotions. Different emotionalities of languages produce different cognition and different cultural evolutions.

A newborn baby does not have in his or her mind representations of chairs or a word "chair." When babies are born, there are only neural placeholders for future representations, but neural connections between cognitive and language representations are inborn. By five years
of age children are conscious about language (their language representations acquire crisp and conscious contents); they can talk about virtually everything. But their cognitive representations (above "objects" in the hierarchy) are mostly vague and unconscious. This is the neural mechanism for what colloquially is called "they do not have experience."
Gradually, cognitive representations (above "objects") become crisper.
This involves experience with the real world, but experience alone is not
sufficient. Because experience is continuous, the possible amount of
experience is infinite. Language directs attention to those aspects of expe-
rience that should be remembered, understood, and retained in mem-
ory as cognitive representations. Under the guidance of language, with
increasing age humans learn to pay attention to what is meaningful, and
when needed the representations can become conscious. At the same time humans learn to ignore what is not essential.
Under the guidance of language, cognitive representations accumulate
experiential contents which have been identified and made conscious in
language and culture and which make up the conscious personal expe-
rience. Most contents of language and culture are not experienced per-
sonally, and the corresponding cognitive representations might remain
vague. People can talk about many more abstract things than they really
understand. At higher levels of the mental hierarchy, we remain like children: we can talk about many things which we do not really understand.
Representations at higher levels in the hierarchy can be known through
language, their language representations might be conscious, but the cor-
responding cognitive representations might be outside of consciousness.

9.6 Conscious and unconscious in thinking and conversations


Consciousness is not one unified notion with a capital "C" but consists of differentiated, continuously varied processes in the mind and culture. Increase of consciousness includes several types of processes in the mind. One is differentiation of the contents of mental representations; in this process more detailed understanding is acquired and more details are available to consciousness. Another, opposite type of process consists of developing more general, more abstract, and unified understanding; it
involves higher levels in the hierarchy.
Human learning of cognitive representations continues through life
and is guided by language models. Language is an enabler of cognition.
Yet at the same time language hides from our consciousness the vague-
ness and unconsciousness of cognitive representations. Language plays the role of "eyes" for abstract thoughts, but these eyes cannot be closed. On one hand, learning abstract thoughts and consciousness about them is possible only due to language; on the other, language blinds our mind to the vagueness of abstract thoughts. Whenever one talks about an
abstract topic, he (or she) might think that the thought is clear and
conscious in his (or her) mind. But we are often conscious about only
language representations in the dual hierarchy. Cognitive representations
may remain vague and unconscious. During conversation and thinking,
the mind smoothly glides among language and cognitive models, being
conscious more often about language than about cognitive understanding
of the world and self. Scientists, engineers, and creative people in gen-
eral are trained to differentiate between their own thoughts and what they
heard from others or read in a book or paper, but usually people do not
consciously notice if they use representations deeply thought through,
acquired from personal experience, or what they have read or heard from
TV anchorpersons, teachers, or peers. High in the hierarchy all of us are
like five-year-old children: we talk, but we do not understand; contents of
cognitive representations are vague and unconscious, while due to crisp-
ness of language representations we may remain convinced that these are
our own clear conscious thoughts.
To summarize this argument, abstract ideas cannot be perceived by the senses. Language acts like eyes for abstract concepts. But unlike eyes, language "eyes" cannot be closed. We cannot switch off language and directly experience the vagueness and diminished consciousness of cognitive representations of abstract concepts. This principal difference between consciousness about language and consciousness about cognition creates many misunderstandings and false mysteries about consciousness (like the word "consciousness" itself).
This combination of conscious language and unconscious vague cog-
nition is even more valid about the highest ideas near the top of the
mind hierarchy. Even distinguished scientists and philosophers can talk
at length about the meaning of life, or what is beautiful, or what is spir-
itually sublime, about their belief or disbelief in God, but do they really
understand? All these ideas are related to representations near the top of
mental hierarchy (Perlovsky 2007d, 2008, 2010a,d, 2011, 2012c, in press c). Nobody can be conscious of the contents of cognitive representations at the highest levels.
This does not mean that conversations or books on this topic are
useless. On the contrary, understanding contents of the highest rep-
resentations is extremely important for everyones life. The more we
understand the better we can achieve what is important in our lives.
And still, as we understand contents of representations at high levels, the
mind would always create still higher representations, whose contents
would forever remain unconscious, hidden. We never become mecha-
nistic automata; there is always room for mystery in human life and
beliefs.

9.7 Creativity
Creativity is an ability to make contents of mental representations more conscious. In this way, every learning process involves creativity. The word "learning" is used when one creates more conscious personal contents which have already existed in culture and in language; this process usually is directed from language (collective culture) to cognition (personal understanding). "Creativity" is reserved for the personal discovery of contents which have not existed in culture and language. Creativity is directed
from cognition to language, from creating novel cognitive contents to expressing them in language (or art) and thus making them conscious
for everybody. A process from cognition to language describes the cre-
ativity of writers. The creativity of scientists usually involves, in addition,
mathematical models. The creativity of poets and musicians involves
emotions, whereas the creativity of painters also involves visual imagery.
Later, I discuss more specific neural mechanisms of artistic creativity,
the beautiful, spiritually sublime, music, and poetry. In all cases, the
main aspect of creativity is creating from unconsciousness new conscious
contents.

9.8 Free will versus scientific determinism


The mystery of consciousness is often discussed along with the mystery
of free will. This is ranked among the most important philosophical
problems of all time (Maher 1909). Yet, free will cannot be reconciled
with science. The contradiction between the idea of free will, the subjec-
tive feeling of every human that one possesses free agency, and scientific
determinism is so strong that most contemporary philosophers and sci-
entists do not believe that free will exists (Bering 2010; see also Lehmann,
this volume, Chapter 6). Scientific arguments against the reality of free
will can be summarized as follows. Spiritual events, states, and processes
(the mind) are to be explained based on laws of matter, from material
states, and processes in the brain. Science is causal, future states are
determined by current states, according to the laws of physics. If physical
laws are deterministic then there is no free will, since determinism is
opposite to freedom. If physical laws contain probabilistic elements or
quantum indeterminacy, there is still no free will, since indeterminism
and randomness are also the opposites of freedom (Lim 2008; Bielfeldt
2009).
Free will, however, has a fundamental position in many cultures.
Morality and judicial systems are based on free will. Denying free will
would threaten to destroy the entire social fabric of society (Rychlak
1983; Glassman 1983). Free will also is a fundamental intuition of self.
Most people on earth would rather part with science than with the idea
of free will (Bering 2010). Most people, including many philosophers
and scientists, refuse to accept that their decisions are governed by the
same laws of nature as a piece of rock by the roadside or a leaf blown
by the wind (e.g., Libet 1999; Velmans 2003). Yet, the reconciliation of
scientific causality and free will has remained an unsolved problem. It is solved in this section and in the references given.

9.8.1 Reductionism and logic


A fundamental difficulty of reconciling free will and scientific determin-
ism is often formulated as reductionism. If mental processes responsi-
ble for will can be explained scientifically, it is obvious to many scien-
tists that such a biological explanation would be reducible to chemical
processes, further reducible to physical and mathematical formulations,
and a human being with free will would obey physical laws just like a piece of rock falling under the gravitational force. Let us examine this
argument.
Physical biology has explained the molecular foundations of life, DNA,
and proteins. Cognitive science has explained many mental processes in
terms of material processes in the brain. Yet, molecular biology is far
away from mathematical models relating processes in the mind to DNA
and proteins. Cognitive science is only approaching some of the foun-
dations of perception and simplest actions (Perlovsky 2006a). Nobody
has ever been able to scientifically reduce the highest spiritual processes
and values to the laws of physics. All reductionist arguments and diffi-
culties of free will discussed previously, when applied to highest spiri-
tual processes, have not been based on mathematical predictive models
with experimentally verifiable predictions, the essence and hallmark of science. All of these arguments and doubts were based on logical arguments. Logic has been considered a fundamental aspect of science since its very beginning and fundamental to human reason for more than 2000 years. Yet no scientist will consider logical arguments sufficient in the absence of predictive scientific models confirmed by experimental observations.
In the 1930s Gödel (1934), a mathematical logician, discovered the
fundamental deficiencies of logic. These deficiencies of logic are well
known to scientists and are considered among the most fundamen-
tal mathematical results of the twentieth century. Nevertheless, logical
arguments continue to exert a powerful influence on scientists and non-scientists alike. Let me repeat the fact that most scientists do not believe in free will. This rejection of fundamental cultural values and an intuition of self without scientific evidence seems to be a glaring contradiction. Of
course, there have to be equally fundamental psychological reasons for
such rejection, most likely originating in the unconscious. The rest of
this section analyzes these reasons and demonstrates that the discussed
doubts are indeed unfounded. To understand the new arguments, we
will look into the recent evolution of cognitive science and mathematical
models of the mind discussed previously.

9.8.2 Recent cognitive theories reject reducibility


Attempts to develop mathematical cognitive models have encountered
irresolvable problems for decades, as already discussed. It turned out that these difficulties were manifestations of the fundamental inconsistency of logic discovered by Gödel (Perlovsky 2001). The difficulties of cognitive
science turned out to be related to the most fundamental mathematical
result of the twentieth century, the inconsistency of logic. Recent discov-
eries confirmed in brain imaging experiments proved that the mind does
not work according to logic. Instead, the mind is modeled by dynamic
logic describing processes from vague representations to crisp. Dynamic
logic explains how illogical vague neural processes in the brain lead to
crisp and approximately logical states. While being mostly unconscious, the mind, as accessible to subjective consciousness, is entirely conscious and logical.
This conclusion is fundamental to resolving the mystery of free will
and reductionism. Therefore, let us repeat it. Reductionism has always
been a logical conclusion, rather than a scientifically proven theory. Logic
has been firmly believed by scientists and non-scientists alike, because
all experience subjectively available to consciousness is logical. Illogical
processes in the mind are not accessible to consciousness. Yet, recent
discoveries proved that mind is not a logical system, in fact most pro-
cesses of the mind are illogical. The logical conclusions about scientific
inevitability of reductionism turned out to be wrong illusions of subjec-
tive consciousness. The difficulty is not based on science; the difficulty of rejecting reducibility lies in the illusion of logic and subjective consciousness. Processes at high levels of the hierarchy of the mind cannot be scientifically reduced to the physics of elementary particles; on the contrary, mathematical models of the mind prove that high levels of the hierarchy are not conscious and not logical. The mind is not reducible.

9.8.3 What is free will in the hierarchy of the mind?


At the lower levels of the mind hierarchy we perceive sensory fea-
tures. Consciousness does not play much of a role in these mechanisms,
and we do not normally experience free will with regard to function-
ing of our sensor systems. Higher up in the hierarchy, the mind per-
ceives objects, still higher up, situations, and abstract concepts. Each next
higher level contains more general and more abstract mental representa-
tions; at every higher level these representations are vaguer and less con-
scious (Perlovsky 2002a, 2006a, 2007b,c, 2008, 2010a,b,c,d; Mayorga
and Perlovsky 2008). At a lower level of perceiving objects, perception
mechanisms function autonomously, mostly unconsciously, and free will


is not experienced. At higher levels, say, when planning our life, we expe-
rience intuitions or ideas of free will and of one's self possessing free will. Consciousness and free will become essential when thinking about the
highest ideas of the meaning of life, of the beautiful and the sublime.
Believing in free will, despite severe limitations of our freedom in real
life, consciously or unconsciously, is extremely important for individual
survival, for achieving higher goals, and for evolution of cultures (Glass-
man 1983; Bielfeldt 2009). In the animal kingdom, belief in free will
acts instinctively, since the animal psyche is unified. Similarly, this ques-
tion did not appear in the minds of our early progenitors. A conscious intuition of free will is a recent cultural achievement. For example, in Homer's Iliad, only gods possess free will; 100 years later Ulysses demonstrates a
lot of free will (Jaynes 1976). Clearly, a conscious idea of free will is a cul-
tural construct. It became necessary with evolution and differentiation of
consciousness and culture. The majority of cultures existing today have
well-developed ideas about free will and religious and educational sys-
tems for instilling these ideas in the minds of each next generation. But
does free will really exist? To answer this question, and even to under-
stand the meaning of "really," I will now consider how ideas exist in
culture, and how the existence of ideas in cultural consciousness differs
from ideas in individual cognition (cultural consciousness refers to what
is conscious in cultural practices).
This discussion is directly relevant to Maimonides' interpretation of the original sin (Maimonides, twelfth century, in Levine and Perlovsky
2008, 2010). Adam was expelled from paradise because he did not want
to think but ate from the tree of knowledge to acquire existing knowl-
edge ready-made. In terms of Fig. 9.3, he acquired conscious language
knowledge from surrounding language but not in cognitive representa-
tions from his own experience. This discussion is also directly relevant
to the much-discussed irrational heuristic1 decision-making discovered
by Tversky and Kahneman (1974, 1981, and Nobel Prize in Economics
2002). It is different from decision-making based on personal experience
and careful thinking, grounded in learning and driven by the knowl-
edge instinct (Levine and Perlovsky 2008, 2010; Perlovsky et al. 2010).
In those cases when life experience is insufficient and cognitive rep-
resentations are vague, the mind unconsciously switches to crisp and
conscious language representations to substitute for the cognitive ones.

1 We note that the meaning of the word "heuristic" has changed over the centuries. When Archimedes cried out "Eureka!" in the streets of his city, Syracuse, he meant a genuinely creative discovery. Today, especially in cognitive science, after Tversky and Kahneman (1974), "heuristic" means readily available knowledge.
This substitution is smooth and unconscious, so that we do not notice
(without specific scientific training and effortful analysis) when our judg-
ments are based on real-life experience or, like Adam, on language-based
knowledge (heuristics). Language-based knowledge accumulates millen-
nial wisdom and could be very good, but it is not the same as personal
cognitive knowledge combining cultural wisdom with life experience. It
might sound tautological that we are conscious only about conscious-
ness, and unconscious about unconsciousness. But it is not a tautology
to say that we have no idea of nearly 99 percent of our mental func-
tioning. Subjective consciousness jumps from one tiny conscious and
logical island in our mind to another one, across an ocean of vague
unconsciousness, yet subjective consciousness keeps us sure that we
are conscious all the time and that logic is a fundamental mechanism
of perception and cognition. Because of this property of consciousness,
even after Gödel most scientists retained logical intuitions about the
mind.
Recently, some scientists have claimed experimental scientific evidence
against free will. Libet (1999) has demonstrated that during grasping
an object, EEG signals in the brain corresponding to the initiation of
moving a hand occur almost a half-second before the human subject
experiences a conscious decision to grasp the object. Therefore he claims
that a human's grasping of an object is not governed by free will. I'd like to emphasize that the cognitive theory discussed in this chapter proves that Libet's experimental results have nothing to do with free
will. Many human actions can be performed autonomously, and if under
certain experimental conditions some elementary actions are experienced
as "free" post-factum, this is not an argument that free will does not
exist at higher levels of mental hierarchy where it is really important for
human life and consciousness. The dual mental hierarchy of language and
cognition combines conscious language representations with vague and
partly conscious cognitive representations; therefore, extrapolating from
elementary actions to higher cognition is scientifically wrong. Making
logical conclusions from previous logical states is a small part of human
thinking process, and identifying free will with this minor part of human
thinking is scientifically wrong.
Let us return now to the question: does free will really exist? And if it does, what is it? The free-will versus determinism debate can be formulated in the framework of classical logic, but in that form it does not exist as a fundamental scientific question. Because of the properties of mental representations at high cognitive levels, where free will matters, especially near the top of the mind hierarchy, the existence of free will cannot be discussed within classical logic. Let us repeat: free will is not about logically connecting two logical states of mind.
How can the question about free will be answered within the theory of mind developed here? Free will does not exist in inanimate matter. Free
will exists as a cultural concept-representation (in addition it has ancient
animal roots, possibly much older than representations). The contents
of this concept include all related discussions in cultural texts, literature,
poetry, art, and in cultural norms. This cultural knowledge gives the basis for
developing corresponding language representations in individual minds;
language representations are mostly conscious. Clearly, individuals differ in how much cultural content they acquire from the surrounding language
and culture. The dual model suggests that, based on this personal lan-
guage representation of free will, every individual develops his or her
personal cognitive representation of this idea, which assembles his or
her related experiences in real life, language, thinking, and acting into
a coherent whole (Perlovsky 2011). Free will really exists in the ability
to make logical and conscious decisions from unconscious and illogi-
cal contents of cognitive representations. Free will is a mental ability to
increase one's understanding and to make conscious decisions by bringing
unconscious contents into consciousness.

9.9 Higher cognitive functions: Interaction of conscious and unconscious mechanisms

9.9.1 Self
The culturally acquired knowledge of free will becomes an inseparable part of the intuition of Self. Self is another, sometimes controversial, topic in discussions of consciousness. What is Self? Self is more than a concept. Like all concepts, it is understood due to the corresponding mental representation. But this representation is likely of a more ancient origin than most representations in the human mind. It belongs to what Jung (1921) called archetypes and what today we might understand as primordial neural mechanisms, precursors of representations. The unified perception of the entire organism's functioning is imperative for survival. It existed unconsciously long before consciousness or representations originated. With the emergence of conscious, differentiated perception of one's own functioning, the Self became a representation. In simple organisms it was once an automatic, unconscious part of functioning. Today, diverse human knowledge has to be reconciled with a unitary understanding of Self, which has become a complex task imperative for survival. With the emergence of language and of the ability to consciously differentiate the language representation of Self, a conscious understanding of Self has become ever more essential, so that individual cognition can effectively stand against the language tendency toward differentiation. A conscious cultural understanding of Self
became an important part of philosophy and culture. This understand-
ing, like much of cultural understanding, is developed and maintained
in language. Many aspects of Self exist in language representations. In
cognitive representations there normally has to be an experience of unity
along with differentiation; although differentiation is the goal of under-
standing Self, unity must be maintained. Disunity of Self, multiple Selves,
is a severe psychiatric condition impairing functioning; the fact that it
is possible is not mysterious. The mind is extremely complex and its
functioning could go wrong in many ways. More mysterious is the fact
that despite tremendous differentiation of conscious cultural contents,
most people maintain the unity of Self. This mystery is reduced when
we realize that without this very strong intuition human beings would rarely survive to produce children. Development of consciousness includes a
conscious differentiated understanding of Self. But it might be danger-
ous for psychological well-being and should not be undertaken before
a strong unified conscious Self and a will to maintain it are developed,
because potentially it might lead to a psychiatric condition of multiple
selves.

9.9.2 Beautiful and sublime


Aesthetic emotions, since Kant (1790), are understood as emotions
related to knowledge. According to the instinctual-emotional theory of
Grossberg and Levine (1987), emotions are neural signals indicating to the decision-making parts of the brain the satisfaction or dissatisfaction of basic needs. Basic needs are measured by instinctual mechanisms, which are sensory-like neural mechanisms measuring vital parameters of an organism. We have a multitude of these sensors in our body, measuring the pressures of various fluids in multiple parts of the body (say, blood pressure), or the level of sugar in the blood (if low, it is felt as hunger), and so on;
most of them function unconsciously. This chapter does not differentiate
between emotions, affective feelings, and moods; all of these are summarily
called emotions here. Emotions could be unconscious or vague feelings,
or could be conscious feelings; they are not representations, but signals,
indicating proper functioning or providing motivations to representations. Thus
emotions are different from, but fundamental to and inseparable from,
cognition. This and several following sections consider emotions, espe-
cially less frequently studied emotions, which often do not even have
special names, but which comprise a larger part of conscious and uncon-
scious emotional life.
An understanding of one's surroundings is imperative for survival and for satisfaction of instinctual needs; therefore, the most important and fundamental instinctual mechanism is the instinct for knowledge (sometimes called the need for understanding), which drives the mind to improve similarities between cognitive representations and the corresponding objects, events, and their language representations (learning of language representations is driven by the instinct for language, Pinker 1994, which connects language only to surrounding language, not to the surrounding world). At lower levels of the mental hierarchy (say, objects) the knowledge instinct acts automatically, and the associated emotions of its satisfaction or dissatisfaction (if minor) are below the threshold of consciousness. At higher levels these emotions are conscious. At the level of situations, these emotions become conscious if a situation is not understood or contradicts expectations (this is a staple of thriller movies). Positive aesthetic emotions may become conscious if we understand something after exerting significant effort.
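For readers who want a formal anchor, here is a minimal sketch of how the knowledge instinct can be written down. The notation is not taken from this chapter; it follows the dynamic-logic formulation in Perlovsky's cited publications (e.g., Perlovsky 2006a), and the specific symbols are illustrative assumptions:

\[
L(\{X\}\mid\{M\}) \;=\; \prod_{n\in N}\,\sum_{m\in M} r(m)\, l\big(X(n)\mid M_m\big),
\]

where X(n) are bottom-up signals, M_m are top-down model representations with learnable parameters, r(m) are prior weights, and l is a conditional similarity measure. On this reading, the knowledge instinct corresponds to the drive to maximize L, and aesthetic emotions correspond to changes in L: increases are felt as satisfaction, decreases as dissatisfaction.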
Emotional research mostly discusses basic emotions, related to satisfaction of bodily instincts. Basic emotions usually are named by words (such as "rage"); there are about 150 English words with emotional connotations, but only a few different basic emotions (Ortony and Turner 1990; Izard 1992; Ekman 1999; Russell and Barrett 1999; Lindquist et al. 2012; Petrov et al. at press). The richness of human emotional life (Cabanac 2002) is mostly due to aesthetic emotions, which are experienced as emotions of the beautiful and the sublime, as musical emotions, emotions of cognitive dissonance, and emotions heard in the prosody of the human voice (Perlovsky 2006b, 2007b, 2008, 2009a, 2010a,b,c,d,e, 2011, at press a,d; Perlovsky et al. 2010; Fontanari et al. at press).
Emotions of the beautiful are aesthetic emotions related to satisfaction
of the instinct for knowledge at higher levels of the hierarchy of the mind
(Perlovsky 2001, 2006a, 2010a, 2010b,c,d, 2011). Contents of cognitive
representations at high levels are vague and unconscious. As previously
discussed, these contents are related to the meaning of life, and they
cannot be made conscious; yet the knowledge instinct drives the mind to
a better understanding of these contents. Understanding of the meaning
of life is so important that when we feel these contents becoming a bit
more clarified and conscious, or even when we feel that something like
this really exists, we feel the presence of the emotion of the beautiful. The purpose of art since time immemorial has been to penetrate this mystery, to use this striving for meanings to invoke the feeling of the beautiful, and to convince us that the meaning really exists. So it is not
surprising that sometimes the feeling of meaningfulness and emotions
of the beautiful can be felt in an art museum, or when looking into art
catalogs. Studying and thinking about art might improve the depth and
sophistication of these feelings. For a scientist the meaning of life (and of the Universe) might be associated with scientific theories; therefore, a meaningful theory might be perceived as beautiful. Some non-scientists feel the awe and beauty of scientific theories in popular discussions. Writers and poets create beauty in the interaction of language and cognition, and many of us enjoy it. Sometimes ugly contents of art may clarify what is opposite to the meaning, and clarify the meaning in this way. Of course, in art as in any other field, there are objects acknowledged as distinguished by influential critics that create mere ugliness, having nothing to do with improving understanding of the meaning. This just means that the final
aesthetic judgment does not belong to authorities in any field. These
judgments are subjective, and everyone should rely on his or her own
feelings about what is beautiful.
The theory of the beautiful summarized here is an extension of Kant's aesthetics (1790). Kant came very close to appreciating the role of unconscious mechanisms. He missed essentially just one mechanism that this chapter emphasizes: the knowledge instinct. Earlier I discussed that mental representations are purposeful: they evolved in biological and cultural evolution with the purpose of unifying lower-level representations and thus creating higher meanings. This purposefulness is related purely to knowledge and in this sense is spiritual. Higher spiritual purposefulness is the center of the Kantian theory of aesthetics.
Kant's aesthetics also connected the beautiful and the spiritually sublime, a foundation of all religions. The knowledge instinct, as discussed, drives the mind toward developing representations for understanding the meaning of life, and satisfaction of this drive is experienced as emotions of the beautiful. At the same time, the knowledge instinct drives the mind toward developing representations of behavior realizing this meaning and beauty in one's life, and satisfaction of this drive is experienced as emotions of the sublime (Perlovsky 2006a, 2010a,b,c,d, 2011, at press c; Levine and Perlovsky 2008).
Similar to the beautiful, the sublime is a precious emotion, inspiring achievements in individuals and societies; at the same time, representations involving emotions of the sublime are vague and unconscious. Everyone develops these representations throughout a lifetime, from personal experience guided by related language representations. These language representations accumulate millennial cultural wisdom that has been made a part of collective consciousness, and everyone's life task is to make it a part of one's own consciousness. This is done by developing contents of the
top cognitive behavioral representations, as much as possible, to become
a bit crisp and conscious from their initial vagueness and unconscious-
ness. As with the beautiful, so with the sublime: even a feeling that it is possible to make one's life meaningful fills one with an empowering emotion of the sublime. The spiritual foundation of all religions is to instill in people the belief that meaning exists, that everyone has to strive for it, and to create means for attempting to achieve a meaningful life. Mother Teresa wrote in her diaries that when young she felt emotions of the sublime steadily; then these emotions were lost and never returned. She felt this as abandonment by God. For most people a feeling of the sublime occurs only in rare, fleeting moments. Such a precious rare moment can occur while reading a religious book, or a work of literature or science, or when walking in a field, or while working on one's calling, say, a scientific theory. Whereas cognitive contents of top representations are uncertain
and unconscious, associated emotions could be conscious because of
their importance. During most of life, most people are in doubt that life
could be made purposeful and meaningful, because of unconsciousness
of cognitive contents near the top of the hierarchy. Near the top of the
mental hierarchy conceptual and emotional contents are poorly differen-
tiated, and emotional contents can be more available to consciousness.

9.9.3 Emotions in language prosody


Emotions, I repeat, make up a significant part of the richness of conscious experience. In addition to basic emotions shared with animals, we experience a virtual infinity of aesthetic emotions related to knowledge (not necessarily to art museums). This section continues exploring these emotions in the sounds of language.
Language sounds are emotional. Consciously and unconsciously, these emotions are present almost all the time in everyone's life. In usual conversations these emotions might not be consciously noticeable, but during arguments or political speeches emotions can run strong. As discussed later, English is possibly the least emotional among Indo-European languages. For example, in a restaurant frequented by Italians or Russians, one can easily note a heightened level of emotions among these (and some other) foreigners. As argued later, this is not, as one might think, because of cultural differences, or because Russians are inherently more emotional than Americans (if any difference exists, Russians care less). Theoretical and experimental evidence suggests that different languages maintain different balances between emotional and conceptual contents (Perlovsky 2007b, 2009a; Perlovsky et al. 2011; Fontanari et al. at press).
Emotions in language are carried in its sounds, the so-called prosody or melody of speech (as in songs). Let's return for a moment to the origin of language from animal cries. Emotionality of voice in primates and other animals is governed by a single ancient emotional center in the limbic system (Deacon 1989; Lieberman 2000; Mithen 2007). Sounds of animal cries engage the entire psyche, rather than concepts and emotions separately. An ape or bird seeing danger does not consciously think about what to say to its fellows. A cry of danger is inseparably and unconsciously fused with recognition of a dangerous situation, and with a command to oneself and to the entire flock: flee! An evaluation (emotion of fear), understanding (concept of danger), and behavior (cry and wing sweep) are not differentiated; that is, they are not under separate conscious control. Conscious and unconscious are not separated. Recognizing danger, crying, and fleeing together form a fused concept-emotion-behavioral synthetic form of cognition-action. Birds and apes cannot control their larynx muscles voluntarily.
The origin of language required freeing vocalization from uncontrolled and unconscious emotional influences. The initial undifferentiated unity of emotional, conceptual, and behavioral (including voicing) mechanisms had to differentiate into partially independent systems. Voicing separated from emotional control due to a separate emotional center in the cortex, which controls the larynx muscles and is partially under volitional, conscious control (Deacon 1989; Mithen 2007). In contemporary languages, the conceptual and emotional mechanisms are significantly differentiated compared to animal vocalizations. Languages evolved toward conscious conceptual contents, while their emotional contents were reduced and brought partially under conscious control.
Still, language sounds maintain emotions affecting conscious as well as unconscious primordial emotional systems. Let us emphasize again that "unconscious" has several meanings: emotionality in voice prosody might be so low that speaker and hearer are unaware of it, or it might be noticeable but unintended, not under intentional voluntary control. These various levels of unconscious emotionality are different from the conscious and intentional emotionality of highly emotional speech, say, during political rallies or quarrels, in songs, or in poetry.
Intentional, highly emotional uses of language emphasize the importance of the remaining emotionality of language sounds, which connects sounds and meanings in language. When people want to make sure that political supporters or opponents, or kids, or spouses . . . really understand what is meant, they put a lot of emotion into their voices. Language meanings are outside of language: meanings require reference to objects and events through cognitive representations, or directly through ancient emotional
mechanisms, as in crying. If the sounds of language are disconnected from meanings, language is no longer an efficient means of communicating or
creating meanings. Sounds of voice in animals are connected to their
meanings directly, uncontrollably, through involuntary emotional mech-
anisms. But voice emotionality in humans is reduced. If it completely
disappears, language may lose any motivation and meaning. Conceptual
conscious content is there, it might be sufficient for a scientist, engineer,
or any person for whom conceptual content is an intimate part of life.
But for most people, even for intellectual people involved mostly with
emotions, mere conceptual contents of speech may not excite cognitive
representations and not connect to concrete meanings. It is even truer
about abstract conceptual contents removed from everyday life. There-
fore, emotions in speech prosody, even if under the level of consciousness,
are essential for connecting language to its meanings in consciousness.
It is interesting to note that contemporary English-language poets,
unlike Shakespeare, Keats, Brown, and most great English-language
poets of the past, often purposefully eliminate emotionality. This trend
might enhance disconnections between sounds and meanings in English
(Perlovsky 2004, 2006a, 2007b, 2008, 2009a, 2010a,b,c; Masataka and
Perlovsky 2012; these publications discuss the evolution of conscious and unconscious mechanisms of language emotionality, touch on the artistic goals of poetry, and discuss whether reducing emotionality corresponds to the goals of poetry). The contemporary tendency toward meaninglessness in culture,
from mass TV to Nobel Prizes for literature, might be a consequence
of this trend expanding from language to collective consciousness. Dif-
ferent languages maintain different emotionalities and affect collective
consciousness in different ways (Perlovsky 2007b,c, 2009a, 2010e, at
press a,d).

9.9.4 Emotions of cognitive dissonances


Near the top of the hierarchy representations are undifferentiated, uni-
fied, and emotions of the beautiful and sublime are undifferentiated.
Lower in the hierarchy cognitive representations are differentiated and
their contents could contradict each other. Any representation contra-
dicts to some extent bodily instinctual drives (otherwise, instincts would
be sufficient and consciousness would not be needed). Also any two rep-
resentations contradict each other to some extent (otherwise, we would not need two; one would suffice). These contradictions are called cog-
nitive dissonances (CD; Festinger 1957). They lead to a dissatisfaction
of the knowledge instinct. Dissatisfaction of the knowledge instinct, as
discussed, causes aesthetic emotions. These emotions are different from
basic emotions in principle (Fontanari et al. at press). This could be
illustrated by the following example: a young professor is simultaneously offered tenured positions at Stanford and at Harvard. Each of these offers is a great career achievement, and each, if offered alone, would be felt with strong positive emotions: basic emotions of pride, feelings of achievement, recognition, and financial opportunity. However, a choice between the two would be very painful. This example illustrates a general case: a choice between two excellent opportunities can be painful. This pain is not a basic emotion in principle; it is related to a contradiction in knowledge.
The ancient Greeks knew that people tend to resolve dissonances by devaluing a conflicting cognition. In Aesop's fable The Fox and the Grapes, a fox sees high-hanging grapes. The desire to eat the grapes and the inability to reach them are in conflict. The fox overcomes this CD by deciding that the grapes are sour and not worth eating. Since the 1950s, CD has become a wide and well-studied area of psychology. It is known that tolerating CD is difficult, and people often make irrational decisions to avoid them.
In 2002 research in CD (Kahneman) was awarded the Nobel Prize in
Economics, emphasizing the importance of this field of research.
Emotions of CD seem to be the next step beyond the aesthetic emotions discussed previously. Satisfaction of the knowledge instinct during perception and cognition corresponds to matching BU and TD signals. The corresponding emotions emerged evolutionarily along with representations, likely beginning with amniotes. CD appeared along with language: with the emergence of language knowledge accumulated fast, and proto-humans' minds did not have evolutionary time to adapt to the emerging contradictions in knowledge.
Contradictions in knowledge are difficult to tolerate (Festinger 1957; Fontanari et al. at press) because they contradict the knowledge instinct, contradict representations at the top of the mind hierarchy, and could undermine belief in meanings. People tend to resolve CD by devaluing contradictory cognitions. If a powerful mechanism for resolving CD had not evolved, dissonances would have countered the motivations for evolving language, high cognition, and culture, which would have been devalued and would not have evolved.
Resolving negative emotions of CD may require making them con-
scious (Festinger 1957; Tversky and Kahneman 1974; Fontanari et al. at press). Because of their recent origins these emotions are usually less conscious; there are no words in language for naming them; they are not naturally conscious. Yet they emotionally color every aspect of our life. Because of the almost innumerable combinations of pieces of knowledge, there is a huge number of these emotions; they comprise the wealth of human emotional life, and we have to deal with
them consciously; otherwise, they undermine the desire to think. Evolu-
tion of language, cognition, and culture has been possible only because special, strong, conscious emotions emerged to resolve CD. So how did the huge number of emotions needed to deal with CD evolve?

9.9.5 Musical emotions


A highly cherished part of human consciousness is the wealth of musical
emotions. Aristotle (1995/VI BCE) and Kant (1790) found it difficult
to explain them. Darwin (1871) called musical emotions "the most mysterious [abilities] with which [man] is endowed". In animal vocalizations, conceptual and emotional contents are undifferentiated. Evolution of language required separating conceptual and emotional contents and enhancing the conceptual part. Evolution of language, as discussed, led to fast evolution of CD and negative emotions, which demotivated the evolution of language and knowledge. Continued evolution of language and culture required overcoming this barrier. Emotions of CD had to be brought to consciousness and assimilated by the psyche. A large number of differentiated emotions was required. These differentiated emotions were created by the emotional part of the voice, which evolved toward music (Perlovsky 2006b, 2008, 2010a,b,c, 2011, at press a,d; Masataka and Perlovsky 2012).
The number of emotions of CD is combinatorially large, practically infinite; for this reason musical emotions are sometimes called "continuous". This is how we hear these emotions in music: in every musical phrase of every composer, emotions continuously change shades and can become entirely different within one chord. As culture and knowledge become more complex, ever more differentiated emotions are developed in music. About 2500 years ago the last Biblical prophet, Zachariah, forbade assigning every thought to God and demanded conscious thinking; the first Ancient Greek philosopher, Thales, pronounced "know thyself". Fundamental contradictions in the human psyche started penetrating into consciousness. This created tremendous CD, whose resolution required a new type of music. Antiphonal music emerged to help alleviate these dissonances and has since remained the cornerstone of music for divine service. During the Renaissance, human emotionality was accepted as an essential part of human consciousness. Accepting emotionality brought a new type of CD into the human psyche, and a new type of music was developed to alleviate these tensions in consciousness. Tonal music has been continuously developed for 500 years, creating new conscious emotions adequate to the evolving consciousness. This fascinating story of the evolution of human consciousness continues
to this day (Perlovsky 2010a,e, 2011, at press a,d). J. S. Bach's music helps resolve the contradiction in human consciousness between knowledge of the finite life of every material being and intuitions about the infinity of the human spirit. Today, popular songs help connect everyone's internal being with multi-faceted and quickly changing cultural knowledge
(Perlovsky 2006b, 2010e, at press a,d). Rap music, in style and cognitive function, is similar to the Ancient Greek dithyrambs. Masataka and
Perlovsky (2012) experimentally validated the theoretical hypothesis that
music reduces CD.
Music's power over the human soul and body is due to primordial connections between voice and emotions. The function of music is to differentiate emotions for the purpose of restoring the unity of the self. Musical emotions help maintain a sense of purpose and meaning of life in the face of a multiplicity of contradictory knowledge, achieving what is called the synthesis of differentiated consciousness.

9.9.6 Emotional consciousness


Previous sections considered the wealth of emotionality in human con-
sciousness, which has long remained outside the scope of scientific inves-
tigations. Here I will suggest a possible reason why this has been so.
People become scientists if they have a natural predisposition toward
conceptual thinking. Conceptual differentiation and conceptual under-
standing of the world is an inborn gift of scientific consciousness. There-
fore, research in conceptual mechanisms is natural to scientists. Similarly, people born with gifts of emotional understanding become poets, writers, composers, or artists, but usually not scientists.
Psychological attitudes are much more complicated than this conceptual-emotional juxtaposition. If things were so clear-cut, scientists could of course have studied emotional mechanisms long ago. The complication comes from the fact that people with a conceptual-scientific type of consciousness are often unconscious of their emotional side. On one hand, a conceptual thinker easily manipulates his or her conscious thoughts; conscious conceptual thinking comes easily and might seem obvious and not sufficiently deep. On the other hand, unconscious emotions penetrate into the depths of the psyche and disturb it. Emotions might be perceived as deeper and more genuine. As a result, a person might under-appreciate his or her genuine, novel, crisp, and creative conscious conceptual thoughts and over-appreciate his or her vague, usual, non-creative emotional feelings.
This might illustrate how CD impede the evolution of knowledge, and explain why emotions were considered by scientists to be more complicated and mysterious than conceptual mechanisms, and why the
wealth of emotional life remained under-researched. This situation is
currently changing.

9.10 Future experimental and theoretical research


Much of the discussion in this chapter presents novel results based on recent mathematical models of the mind (Perlovsky et al. 2011). Some of the theoretical conclusions have been confirmed in experiments; others are being studied in experimental laboratories and remain a challenge for future research. Future theoretical studies should be conducted in close cooperation with mathematical modeling and computer simulations. Models and simulations should always link to experimentally verifiable predictions, and experimental psychology and cognitive science should verify theoretical predictions. This interaction between theory and experiments has always been the hallmark of physics, the super-model for science. The same scientific method should be extended to the rest of science. The mathematical models forming a foundation for much of this chapter have been discussed in the given references. Here I concentrate on experimental evidence and future challenges.
A fundamental mechanism of cognition, extending through the mental
hierarchy, is the interaction between BU and TD signals. At every level
of the hierarchy it proceeds through vague-to-crisp and unconscious-
to-conscious processes. Thus cognition consists in making conscious
what used to be less conscious or unconscious. This is especially true for
more creative processes near the top levels of the mind hierarchy. These
processes were predicted by mathematical models two decades ago. Recently, they have been confirmed experimentally for a lower part of the mind hierarchy, for visual perception. The future challenge is to extend these experimental confirmations throughout the hierarchy to the representations near the top, which usually involve more emotional than conceptual understanding: the beautiful, the sublime, the meaning of life. Conceptual contents near the top of the hierarchy also involve general scientific theories of universal laws.
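As a hedged illustration of the vague-to-crisp process (again following the dynamic-logic equations in the cited references, e.g., Perlovsky 2009b, rather than anything stated explicitly in this chapter; the symbols are illustrative assumptions), the matching of BU and TD signals can be written as an iteration in which association variables

\[
f(m\mid n) \;=\; \frac{r(m)\, l\big(X(n)\mid M_m\big)}{\sum_{m'\in M} r(m')\, l\big(X(n)\mid M_{m'}\big)}
\]

weigh how strongly each TD model M_m accounts for each BU signal X(n). Model parameters are then updated to increase the total similarity, while an initially large vagueness parameter (e.g., the variance of the similarity l) is gradually reduced. Early iterations correspond to vague, largely unconscious associations; late iterations approach crisp, nearly logical, conscious representations.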
Representations of abstract ideas can only be constructed with support
from language. Predictions requiring experimental confirmation include
the interaction between cognition and language at various hierarchical levels. The higher up in the hierarchy, the vaguer and less conscious is the cognitive side of the dual hierarchy, while discussions of these top ideas using language can be conscious. These predictions, too, await experimental confirmation.
The wealth of human emotions consists of aesthetic emotions, related to satisfaction of the knowledge instinct and fundamentally different from
basic emotions related to satisfaction of bodily instincts. Aesthetic emo-
tions near the top of the mind hierarchy involve satisfaction or dissatisfac-
tion of the need for meaning and purpose of existence, felt as emotions
of the beautiful and sublime. Throughout the entire hierarchy there are contradictions in knowledge felt as emotions of CD; they are evolutionarily novel and not always conscious; from the unconscious, they affect most of our decisions and life. To overcome these unconscious or half-conscious emotions, they have to be brought into consciousness; this is accomplished by music. Music creates an infinity of emotions, which unify a psyche differentiated by knowledge. In this way, music satisfies the need for meanings and unity at the top of the hierarchy. Masataka and Perlovsky's (2012) demonstration that music reduces CD is just a first
step toward an experimental confirmation of these hypotheses.
Understanding among people, even within a single family, requires
understanding that consciousness is not one single unified mechanism,
the same for everybody. Love at first sight is possible because one feels that another person fills one's psychological void. A scientist may need a more emotional partner, and vice versa. This initial magnetism of opposites often turns, after years, into noticing only contradictions, and families fall apart. Conscious understanding might help. Similarly, understanding among cultures and nations cannot be achieved without appreciating differences in consciousness. Assuming that the consciousness of a Harvard professor is typical of all humanity is a contemporary, post-racial form of racism. Peoples should appreciate the diversity of cultures and languages, which creates a diversity of types of consciousness. I would like to end by recalling that all of humanity is unified by essentially similar mechanisms of consciousness. Thus a future global culture, unified in this way yet appreciating differences in consciousness, is not impossible.

REFERENCES
Arbib M. A. (2005). From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics. Behav Brain Sci 28:105–124.
Aristotle (1995). In Barnes J. (ed.) The Complete Works. Princeton University Press [the revised Oxford translation; original work VI BCE].
Baars B. (1988). A Cognitive Theory of Consciousness. Cambridge University Press.
Bar M., Kassam K. S., Ghuman A. S., Boshyan J., Schmid A. M., Dale A. M., et al. (2006). Top-down facilitation of visual recognition. P Natl Acad Sci USA 103:449–454.
Barsalou L. W. (1999). Perceptual symbol systems. Behav Brain Sci 22:577–660.
Bering J. (2010). Scientists say free will probably doesn't exist, but urge: Don't stop believing! Sci Am Mind. URL: http://blogs.scientificamerican.com/
bering-in-mind/2010/04/06/scientists-say-free-will-probably-doesnt-exist-
but-urge-dont-stop-believing/ (accessed March 8, 2013).
Bielfeldt D. (2009). Freedom and neurobiology: reflections on free will, language,
and political power. Zygon 44(4):999–1002.
Cabanac M. (2002). What is emotion? Behav Process 60:69–83.
Darwin C. R. (1871). The Descent of Man, and Selection in Relation to Sex. London: John Murray.
Deacon T. W. (1989). The neural circuitry underlying primate calls and human language. Human Evolution 4(5):367–401.
Deming R. W. and Perlovsky L. I. (2007). Concurrent multi-target localization,
data association, and navigation for a swarm of flying sensors. Inform Fusion
8:316–330.
Dirac P. A. M. (1982). The Principles of Quantum Mechanics. Oxford University
Press.
Ekman P. (1999). Basic emotions. In Dalgleish T. and Power M. (eds.) Handbook
of Cognition and Emotion. Chichester: John Wiley & Sons, Ltd.
Festinger L. (1957). A Theory of Cognitive Dissonance. Evanston, IL: Row,
Peterson.
Fontanari J. F. and Perlovsky L. I. (2004). Solvable null model for the distribution
of word frequencies. Phys Rev E 70:042901.
Fontanari J. F., Cabanac M., Cabanac M.-C., and Perlovsky L. I. (at press). A structural model of emotions of cognitive dissonances. Neural Networks.
Fontanari J. F., Tikhanoff V., Cangelosi A., Ilin R., and Perlovsky L. I. (2009).
Cross-situational learning of object–word mapping using Neural Modeling Fields. Neural Networks 22(5–6):579–585.
Glassman R. (1983). Free will has a neural substrate: Critique of Joseph F. Rychlak's Discovering Free Will and Personal Responsibility. Zygon 18(1):17–82.
Gödel K. (1934). Kurt Gödel Collected Works, Vol. I. Feferman S. (ed.). New York: Oxford University Press.
Grossberg S. (1988). Neural Networks and Natural Intelligence. Cambridge, MA:
MIT Press.
Grossberg S. and Levine D. S. (1987). Neural dynamics of attentionally mod-
ulated Pavlovian conditioning: Blocking, interstimulus interval, and sec-
ondary reinforcement. Appl Opt 26(23):5015–5030.
Ilin R. and Perlovsky L. I. (2010). Cognitively inspired neural network for recog-
nition of situations. Int J Nat Comp Res 1(1):36–55.
Izard C. E. (1992). Basic emotions, relations among emotions, and emotion-
cognition relations. Psychol Rev 99:561–565.
Jaynes J. (1976). The Origin of Consciousness in the Breakdown of the Bicameral
Mind. Boston, MA: Houghton Mifflin.
Jung C. G. (1921). Psychological Types. In The Collected Works, Vol. 6. Bollingen
Series, Vol. 20. Princeton University Press.
Kant I. (1790). Critique of Judgment. Trans. Bernard J. H. (1914). London:
Macmillan.
Kovalerchuk B., Perlovsky L., and Wheeler G. (2012). Modeling of phenomena and dynamic logic of phenomena. J Appl Non-C Log 22(1):51–82.
Kozma R. and Freeman W. J. (2001). Chaotic resonance: Methods and appli-
cations for robust classification of noisy and variable patterns. Int J Bifurc
Chaos 10:2307–2322.
Kozma R., Puljic M., and Perlovsky L. (2009). Modeling goal-oriented decision
making through cognitive phase transitions. New Math Nat Comp 5(1):143–157.
Kveraga K., Ghuman A. S., Kassam K. S., Aminoff M. S., Hämäläinen E., et al. (2011). Early onset of neural synchronization in the contextual associations network. P Natl Acad Sci USA 108(8):3389–3394.
Levine D. S. and Perlovsky L. I. (2008). Neuroscientific insights on biblical
myths. Simplifying heuristics versus careful thinking: Scientific analysis of
millennial spiritual issues. Zygon 43(4):797–821.
Levine D. S. and Perlovsky L. I. (2010). Emotion in the pursuit of understanding.
Int J Synth Emotions 1(2):1–11.
Libet B. (1999). Do we have free will? J Consciousness Stud 6(8–9):47–57.
Lieberman P. (2000). Human Language and Our Reptilian Brain. Cambridge,
MA: Harvard University Press.
Lim D. (2008). Did my neurons make me do it? Philosophical and neurobio-
logical perspectives on moral responsibility and free will. Zygon 43(3):748–753.
Lindquist K. A., Wager T. D., Kober H., Bliss-Moreau E., and Barrett L. F.
(2012). The brain basis of emotion: A meta-analytic review. Behav Brain Sci
35(3):121–143.
Maher M. (1909). Free will. In The Catholic Encyclopedia. New York: Robert
Appleton Company. Retrieved from New Advent: URL: www.newadvent
.org/cathen/06259a.htm (accessed March 8, 2013).
Masataka N. and Perlovsky L. I. (2012). Music can reduce cognitive dis-
sonance. Nature Precedings. URL: http://hdl.handle.net/10101/npre.2012
.7080.1 (accessed March 8, 2013).
Mayorga R. and Perlovsky L. I. (eds.) (2008). Sapient Systems. London: Springer.
McAllister J. W. (1999). Beauty and Revolution in Science. Ithaca, NY: Cornell
University Press.
Mithen S. (2007). The Singing Neanderthals. Cambridge, MA: Harvard Univer-
sity Press.
Ortony A. and Turner T. J. (1990). What's basic about basic emotions? Psychol Rev 97:315–331.
Perlovsky L. I. (1987). Multiple Sensor Fusion and Neural Networks. DARPA Neural
Network Study. Lexington, MA: MIT/Lincoln Laboratory.
Perlovsky L. I. (1994a). Computational concepts in classification: neural net-
works, statistical pattern recognition, and model based vision. J Math Imag-
ing Vis 4(1):81–110.
Perlovsky L. I. (1994b). A model based neural network for transient signal pro-
cessing. Neural Networks 7(3):565–572.
Perlovsky L. I. (1998). Cramér–Rao bounds for the estimation of normal mixtures. Pattern Recogn Lett 10:141–148.
Perlovsky L. I. (2001). Neural Networks and Intellect: Using Model Based Concepts.
New York: Oxford University Press.
Perlovsky L. I. (2006a). Toward physics of the mind: Concepts, emotions, con-
sciousness, and symbols. Phys Life Rev 3(1):22–55.
Perlovsky L. I. (2006b). Music – the first principle. Music Theatre. URL: www
.ceo.spb.ru/libretto/kon_lan/ogl.shtml (accessed March 8, 2013).
Perlovsky L. I. (2007a). Cognitive high level information fusion. Inform Sciences
177:2099–2118.
Perlovsky L. I. (2007b). Evolution of languages, consciousness, and cultures.
IEEE Comput Intell M 2(3):25–39.
Perlovsky L. I. (2007c). Neural dynamic logic of consciousness: the knowledge
instinct. In Perlovsky L. I. and Kozma R. (eds.) Neurodynamics of Higher-
Level Cognition and Consciousness. Heidelberg: Springer.
Perlovsky L. I. (2008). Music and consciousness. Leonardo 41(4):420–421.
Perlovsky L. I. (2009a). Language and emotions: Emotional Sapir-Whorf
Hypothesis. Neural Networks 22(5–6):518–526.
Perlovsky L. I. (2009b). Vague-to-Crisp neural mechanism of perception. IEEE
Trans Neural Network 20(8):1363–1367.
Perlovsky L. I. (2010a). Intersections of mathematical, cognitive, and aesthetic
theories of mind. Psychol Aesthetics Creativity Arts 4(1):11–17.
Perlovsky L. I. (2010b). Neural mechanisms of the mind, Aristotle, Zadeh, and
fMRI. IEEE Trans Neural Network 21(5):718–733.
Perlovsky L. I. (2010c). The mind is not a kludge. Skeptic 15(3):51–55.
Perlovsky L. I. (2010d). Beauty and art. Cognitive function, evolution,
and mathematical models of the mind. WebmedCentral PSYCHOL
2010;1(12):WMC001322. URL: www.webmedcentral.com/article view/
1322 (accessed March 8, 2013).
Perlovsky L. I. (2010e). Musical emotions: functions, origins, evolution. Phys
Life Rev 7(1):2–27.
Perlovsky L. I. (2011). Consciousness and free will, a scientific pos-
sibility due to advances in cognitive science. WebmedCentral PSY-
CHOL 2011;2(2):WMC001539. URL: www.webmedcentral.com/article
view/1539 (accessed March 8, 2013).
Perlovsky L. I. (at press a). Cognitive function of musical emotions. Psychomusi-
cology.
Perlovsky L. I. (at press b). Mirror neurons, language, and embodied cognition.
Neural Networks.
Perlovsky L. I. (at press c). The cognitive function of emotions of spiritually
sublime. Rev Psychol Front.
Perlovsky L. I. (at press d). Cognitive function of music. Interdiscipl Sci Rev.
Perlovsky L. I., Bonniot-Cabanac M.-C., and Cabanac M. (2010). Curios-
ity and pleasure. WebmedCentral PSYCHOL 2010;1(12):WMC001275.
URL: www.webmedcentral.com/article view/1275 (accessed March 8,
2013).
Perlovsky L. I., Chernick J. A., and Schoendorf W. H. (1995). Multi-sensor ATR
and identification friend or foe using MLANS. Neural Networks 8(7/8):1185–1200.
Perlovsky L. I. and Deming R. W. (2007). Neural networks for improved tracking. IEEE Trans Neural Network 18(6):1854–1857.
Perlovsky L. I., Deming R. W., and Ilin R. (2011). Emotional Cognitive Neural
Algorithms with Engineering Applications. Dynamic Logic: From Vague to Crisp.
Heidelberg: Springer.
Perlovsky L. I. and Ilin R. (2010a). Grounded symbols in the brain, com-
putational foundations for perceptual symbol system. WebmedCentral PSY-
CHOL 2010;1(12):WMC001357. URL: www.webmedcentral.com/article
view/1357 (accessed March 8, 2013).
Perlovsky L. I. and Ilin R. (2010b). Neurally and mathematically motivated
architecture for language and thought. The Open Neuroimaging Journal 4:70
80. URL: www.bentham.org/open/tonij/openaccess2.htm (accessed March
8, 2013).
Perlovsky L. I., Plum C. P., Franchi P. R., Tichovolsky E. J., Choi D. S., and Wei-
jers B. (1997). Einsteinian neural network for spectrum estimation. Neural
Networks 10(9):1541–1546.
Petrov S., Fontanari F., and Perlovsky L. I. (at press). Categories of emotion
names in web-retrieved texts. Cognitive Sci.
Pinker S. (1994). The Language Instinct: How the Mind Creates Language. New
York: William Morrow.
Poincaré H. (2001). The Value of Science: Essential Writings of Henri Poincaré. New York: Modern Library, Random House.
Rizzolatti G. (2005). The mirror neuron system and its function in humans. Anat
Embryol 210(5–6):419–421.
Rizzolatti G. and Arbib M. A. (1998). Language within our grasp. Trends Neurosci
21(5):188–194.
Russell J. A. and Barrett L. F. (1999). Core affect, prototypical emotional
episodes, and other things called emotion: Dissecting the elephant. J Pers Soc Psychol 76:805–819.
Rychlak J. (1983). Free will as transcending the unidirectional neural.
Tikhanoff V., Fontanari J. F., Cangelosi A., and Perlovsky L. I. (2006). Language
and cognition integration through modeling field theory: category formation
for symbol grounding. In Book Series in Computer Science, Vol. 4131. Heidel-
berg: Springer, pp. 376 ff.
Tversky A. and Kahneman D. (1974). Judgment under uncertainty: Heuristics
and biases. Science 185:1124–1131.
Tversky A. and Kahneman D. (1981). The framing of decisions and the ratio-
nality of choice. Science 211:453–458.
Velmans M. (2003). Preconscious free will. J Consciousness Stud 10(12):42–61.
Vityaev E. E., Perlovsky L. I., Kovalerchuk B. Y., and Speransky S. O. (2011). Probabilistic dynamic logic of the mind and cognition. Neuroinformatics 5(1):1–20.
Yardley H., Perlovsky L. I., and Bar M. (2011). Predictions and incongruency
in object recognition: a cognitive neuroscience perspective. In Weinshall D., Anemüller J., and van Gool L. (eds.) Detection and Identification of Rare Audiovisual Cues. Heidelberg: Springer, pp. 139–153.
10 Triple-aspect monism: A conceptual
framework for the science of human
consciousness

Alfredo Pereira Jr.

10.1 Introduction
10.2 What makes mental processes conscious? Contributions from previous chapters
10.3 Advancing into conscious territory
10.4 TAM: From Aristotle to information theory, and beyond
10.5 The dynamics of conscious systems
10.6 Inserting feelings in Global Workspace theory
10.7 Concluding remarks

10.1 Introduction
In this chapter, I tackle the double problem of defining what makes a
natural process mental, and what makes a mental process conscious. My
short answer to the first question is that mental processes are those that
operate with forms (in the Aristotelian sense of the term, discussed in the
following) embedded in material systems and with transmission of such
forms from one system to another. A reformulation of the Aristotelian
approach by Peirce (1931) focuses on chains of signs composing semei-
otic processes. The transmission of forms and/or signs composing mental
processes has been scientifically approached with the use of informa-
tion theory, dating back to Broadbent (1958). In contemporary views, a
similar concept of information flow supporting the formation of knowl-
edge was proposed by Dretske (1981), Skyrms (2008, 2010) and others,
including authors in this volume.
The answer to the second problem cannot be so short and occupies
most of this chapter. Information transmission and respective mental
processes are often unconscious. There are complex chains and loops of

This research benefited from financial support by CNPQ (Brazilian Research National
Funding Agency), FAPESP (Foundation for Support of Research in the State of São Paulo), and POSGRAD-UNESP (São Paulo State University Post-Graduation Office).
My thanks to Chris Nunn and Dietrich Lehmann for helping with language, style,
and concepts; Max Velmans, Leonid Perlovsky, Wolfgang Baer, and Ram Vimal for
useful comments; Maria Eunice Quilici Gonzales for our discussion on the nature of
information; and all colleagues who directly or indirectly contributed to this chapter.

of information transmission, involving many brain subsystems and aspects
of the environment that remain entirely unconscious. No matter how
complex this kind of process may be, without some extra ingredient it
would not be conscious (considering the referential nucleus of the
term consciousness1 discussed in Pereira and Ricke 2009). What is the
extra ingredient? Before offering a long answer to this crucial question,
I will give a brief preview of my theoretical position and discuss those
taken in the preceding chapters.
Briefly, my position, to be called Triple Aspect Monism (TAM), is that conscious experience is a fundamental aspect of reality, neither separable from nor reducible to the other two aspects (namely, the physical-chemical-biological, in short the physical, and the informational). In the
following presentation of TAM, I take into account concepts used in
current scientific practice. The physical and the informational aspects
are composed of entities and processes described in the context of their
respective scientific disciplines. The third aspect, conscious experience, is
united with the others and will, it is hoped, also come to be treated scientifically, alongside the existing philosophical, religious, and artistic approaches.
According to TAM, the three aspects belong to one large dynami-
cal system covering the totality of existence, which Spinoza and other
philosophers have called Nature. It evolves in time and progressively man-
ifests the three aspects, beginning with the physical, unfolding into the
informational, and finally, under certain circumstances, into conscious
activities. Although fundamental, conscious experience is not primitive:
it arises from elementary potentialities (Vimal 2008, 2010; this volume,
Chapter 5) actualized by certain systems (Cottam and Ransom,
Chapter 3; Trehub, Chapter 7; Mitterauer, Chapter 8: all this volume)
that implement certain mechanisms (Lehmann, Chapter 6; Perlovsky,
Chapter 9: both this volume) and that construct models of interaction
with the world (Merker, Chapter 1; Godwin et al., Chapter 2; Baer,
Chapter 4, all this volume). Other entities elsewhere in the universe,
using compatible resources, may also achieve similar outcomes.
The potentialities are embedded in physical stuff, but their intrin-
sic properties are not identical to physical, chemical, or biological

properties.

1 "Consciousness is a process that occurs in a subject (the living individual) and the subject has an experience (he/she interacts with the environment, completing action-perception cycles) and the experience has reportable informational content (information patterns embodied in brain activity that can be conveyed by means of voluntary motor activity)" (Pereira and Ricke 2009, p. 16). Please note that being reportable does not mean that first-person experiences can be fully translated to the third-person perspective. Translation of content from the first- to the third-person perspective is always an approximation.

In the context of modern science, actualizations of poten-
tialities into lived experiences are described by philosophical disciplines
such as phenomenology and by human sciences such as psychology. Consequently,
when using the concept of supervenience,2 central to contemporary phi-
losophy of mind, TAM implies that conscious experience supervenes
from nature, but not from the physical aspect alone. The corollary of
this implication is that physical, chemical, and biological sciences are
essentially incomplete relative to an explanation of the conscious mind.
Conscious actualization of forms is conceived as a compositional pro-
cess that includes operations of signal detection, selection, matching, and
integration of patterns in the nervous system (or other systems containing
similar mechanisms) of living individuals and possibly other equivalent
complex systems. TAM assumes, as in Panpsychism, that elementary
forms that compose conscious episodes are fundamental components of
nature that cannot be explained. However, TAM is not a Panpsychist
doctrine that considers consciousness to be fully achieved in the phys-
ical domain. TAM is a variety of Proto-Panpsychism, stating that the
existence of conscious processing is a natural potentiality that requires
a constructive process involving specific kinds of interactions to gener-
ate episodes to be experienced by some kinds of systems. What can be
explained by a science of consciousness is how adequate mechanisms as
those present in our brains combine these elementary components and
construct the episodes that we experience. The integrative step, involving
the generation of feelings attached to the cognitive content, is especially
important.
My main claim is that what makes information-processing conscious
is the presence of feelings about the content of the information being
processed. Conscious activity is therefore conceived as a composition
of two essential ingredients, information patterns and feelings, presenting
different degrees or modalities according to their participation in each
episode. There is always a feeling attached to conscious contents, such
as the phonemic imagery of a conscious thought or the color associated
with a visual percept. However, the route from detected information to
the feeling about its content is not direct, but mediated by an attribu-
tion of meaning. For instance, the feeling elicited by an act of inner speech depends on the meaning attributed to the phonemic image, and the feeling of a color depends on the whole visual scene.3

2 In Davidson's formulation, supervenience "might be taken to mean that there cannot be two events alike in all physical processes, but differing in some mental respects, or that an object cannot alter in some mental respect without altering in some physical respect" (Davidson 1980, p. 214). The related concept of physical realizationism (no mental properties can have nonphysical realizations, as formulated by Kim 1998) is compatible with functionalism (the thesis that a system can alter in some physical respects without altering in some mental respects).

Without any
feeling, an experience is considered to be unconscious (for a discussion
of unconscious experiences, see Nixon 2010). To make things a bit more
complicated, the relation of meanings with feelings is not univocal. Mean-
ings without feelings may be unconscious. For TAM, conscious activity
extends beyond the attribution of meanings to information patterns (for a
discussion, see Nunn 2007, 2010), also requiring feelings (as proposed
by Harnad and Scherzer 2008). In sum, the attribution of meaning is
considered to be necessary but not sufficient for conscious processes.
The conceptual connection of conscious processing with feeling has
a tradition, ranging from the Leibnizian commentator Tetens to Peirce.
For Tetens, a feeling is "whatever is directly and immediately in consciousness at any instant, just as it is" (Tetens in Houser 1983, p. 331). In this essay I use Tetens' concept in a particular sense. What I call a "sensitive feeling" refers to the experience of states of the body, for example, feeling hunger and thirst, heat or cold, and pain or pleasure. What I call an "affective feeling" refers to experiences elicited by the content of information, for example, feeling happy or sad about something, interested in or bored by something, loving or hating something. A consequence
of this thesis is that intelligent machines that are neither sensitive to the
workings of their inner mechanisms (e.g., feeling pain if a gear is broken)
nor structurally affected by the information patterns they process (e.g.,
feeling sad if there is a shortage of energy supply) cannot be considered
to be conscious, although it is possible to imagine that nothing prevents
them from developing these capabilities.4 In living individuals, both kinds of feeling are mediated by the autonomic nervous system and by signaling
molecules and ions (in the cerebrospinal fluid and in the blood), from
cerebral, immune, and endocrine systems to the whole body of the living
individual.
Feelings are closely related to affects and emotions. Affective states,
including mood, are general mental states elicited by the activation of
core brain circuits that may remain largely unattended. Emotions can be
conceived as psychophysiological phenomena related to the expression of feelings in a biosocial context.

3 My thanks to Max Velmans (personal communication) for raising the issue of cognitive and perceptual feelings.
4 In the current machine modeling of consciousness paradigm, a machine is considered to be conscious if possessing "control mechanisms with the ability to model the world and themselves" (Aleksander 2007, p. 97). Contributing authors do not deny the importance of emotional processes in conscious machines, but their models neither relate consciousness to subjective feelings nor include hypotheses about how to provide their prototypes with mechanisms able to generate feelings about the information they process.

These expressions can occur uncon-
sciously before and/or after the feeling, for example, a hormonal release
that precedes fear, or stressful somatic events that follow psychologi-
cal trauma. While the existence of feelings always implies some (even
the slightest) degree of consciousness being instantiated, emotions in
the sense of somatic or motor activities are not necessarily accompa-
nied by conscious experiences. However, emotional experiences in social contexts, such as loving or hating somebody, are typically conscious, and
frequently associated with attended cognitive contents (Almada et al.
2012).
To scientifically account for feelings, I use the psycho-physiological
approach of Panksepp (2005), as well as the interdisciplinary approach
of psycho-neuro-immuno-endocrinology applied to mind-body issues
(Walach et al. 2012), and take a look at how Global Workspace theory in
particular may incorporate the presence of feelings in conscious episodes.
The conscious focus of attention is proposed to be determined by the
matching of feelings with cognitive images and symbolic representations.
It is always accompanied by peripheral feelings, considered to be con-
scious to some degree even if not attended (Carrara-Augustenborg and
Pereira 2012).
In my "Concluding remarks" (Section 10.7), I argue that TAM opens new opportunities for dialogue between consciousness science and philosophical and religious traditions. Besides straightforward influences from Aristotle and Spinoza, TAM is close to Hegel's triadic dialectics and Peirce's semiotic triad of Firstness, Secondness, and Thirdness, while giving these idealist philosophies a twist: an inversion of the order of the first two aspects of the triad, from Mind-Nature to Nature-Mind (as originally proposed by Marx in his criticism of Hegel). TAM is also close
to the existential interpretation of Husserlian phenomenology by Martin
Heidegger and Maurice Merleau-Ponty, as well as to current views of
embedded and embodied mentality in the cognitive sciences. On the
religion side, TAM's world picture has similarities with the symbolic tree of life from the Kabbalah, some branches of Indian philosophy, as well
as the Christian concept of the Holy Trinity, among other possibilities.
Considering these similarities, TAM is likely to interest a wide range of
theoreticians wanting an integrative view of consciousness.

10.2 What makes mental processes conscious?


Contributions from previous chapters
In the philosophical literature, many ideas have been advanced about
what makes mental processes conscious. A traditional approach, derived
from the epistemology of classical physics, is to restrict the existence of
secondary qualities to the mind of the observer. With Logical Empiri-
cism, these qualities were considered to be sense data upon which our
experience of the world is constructed, leading to the contemporary con-
cept of qualia (Crane 2000) as an index of consciousness. However, the term "qualia" is descriptive rather than explanatory, and thus offers no advance when it comes to looking for a basis for useful operational applications.
The concept of subjectivity is also insufficient from this point of view.
Although it is often stated that consciousness is subjective experience,
the expression is not sufficiently precise to provide a useful conceptual
foundation for a science of consciousness.
In a classical paper, Nagel (1974) argued that conscious experience is "what it is like to be" a given organism. This property was the basis for the concept of phenomenal consciousness, discussed by Chalmers (1996), leading to his view of Property Dualism. Velmans (2009) gives it an epistemological status, in the sense that the same unitary reality can be known from first- or third-person perspectives, giving rise to two kinds of views: the mental and physical worldviews.5
The concept of consciousness as a transparent representation of the world from an egocentric three-dimensional perspective, proposed by Trehub (this volume, Chapter 7), aptly translates the essence of a first-
person perspective to a neural network model. This empirically well-
supported conjecture offers a good basis on which to build a scientific
understanding of consciousness, albeit one that needs to be comple-
mented by further understandings of how the contents of conscious
experiences are constructed.
The existence of similarities among physical, informational, and con-
scious mental aspects supports Monism, the view that they belong to one
unitary reality. A first similarity (or "state space homeomorphism", as proposed by Fell 2004) is between conscious mental forms and patterns of brain activity. This kind of isomorphism was defended by Köhler (1965) in terms of a correspondence of properties of conscious fields (gestalts)
and macroscopic electromagnetic fields in the brain. More recently,

5 The distinction between first-, second-, and third-person perspectives has a pragmatic value but lacks an ontological justification. In fact, the first-person perspective is the basis
upon which all knowledge is constructed. In this chapter, I attempt to move beyond the
epistemological distinction and ground the distinction of perspectives on an ontologi-
cal framework. The first-person perspective is the perspective that arises for conscious
systems and contains the second- and third-person perspectives. These result from the
operations of conscious systems interacting with other aspects of reality, for example,
physical systems (third-person perspective) or other conscious systems (second-person
perspective).
Pockett (2000) and McFadden (2002) have argued that the brain's electromagnetic field is the basis of phenomenal consciousness. A refinement
of this kind of approach is Microstate theory (Lehmann, this volume,
Chapter 6), stating that elementary units of thought are incorporated in
split-second spatial patterns of the brain's electric field. Microstate the-
ory offers, for the methodology of consciousness science, a sophisticated
treatment of EEG data, allowing the identification of patterns of activity
that characterize normal and abnormal mental functioning.
A second relation within the brain-mind-world triangle is between
knowledge and the world. This conception has distant roots in the Aristotelian theory of truth, where truth was conceived as a correspondence
of mental forms and forms of beings in the world. Merker (this volume,
Chapter 1) draws the picture of a brain locked in the skull that is nev-
ertheless able to support mental activity that reflects the world outside
itself, by means of interactions between the neural model of the body
and the neural model of the world in the brain.
Consciousness appears as a process that has physical causes inside the skull, but also, as with Leibniz's monads, a capacity to reflect the whole world. The relation of mental and (other) natural forms is not a simple one; there are forms in someone's mind that do not occur exactly alike in the world, and there are forms in the world that do not occur exactly alike in one's mind. Assuming a single (first-person or third-person) perspective, we tend to conceive mind to be contained in nature, or nature to be contained in a mind. This apparent paradox, solved in a practical fashion by Aleksander (2005), is illustrated in Fig. 10.1. TAM theoreti-
cally overcomes this paradox by means of the concept of an evolutionary
process where and when possibilities are progressively actualized. In the
domain of possibilities, any mind is contained in nature in the sense that
all mental forms, including those that refer to natural impossibilities, are
produced by combinations of natural forms. In the domain of actualities,
nature is contained in minds, in the sense that all ideas/concepts about nature belong to someone's mind, which also has ideas/concepts other
than those about nature.
In this epistemological framework, the physical world is both a mind-
independent or noumenal affair, and a phenomenal representation from
the third-person perspective of natural sciences (such as physics, chem-
istry, biology, and sociology). According to the latter, it is described by
means of categories such as matter/energy, forces, space-time, laws of
nature, chemical properties of matter/energy (as in the Periodic Table),
biological properties (genomes, proteomes, metabolomes), biological
populations, and their ecological interactions. All these concepts are (like
any other concepts) mental entities, but (unlike other concepts) believed
[Figure 10.1 shows two circle diagrams: in the left circle, Mind appears nested inside Nature; in the right circle, Nature appears nested inside Mind.]
Fig. 10.1 The apparent mind and nature paradox: In the noumenal
domain (nature) a phenomenal domain (mind) emerges (left circle).
The phenomenal world contains a representation of the noumenal (right
circle). The domains are not mutually exclusive, but reflect each other
(as in Velmans 1990, 2008). This is, at the same time, a neo-Aristotelian
and post-Kantian view of the relations of mind and nature.

to correspond at least partially to realities "out there" in the world; that is, it
is assumed that our mind-dependent physics reveals mind-independent
features of the world.6
Viewing consciousness as an important factor in the interaction of
the living individual with the world raises the issue of biological and
behavioral functions of conscious processing. Although I agree with Chalmers (1996) that the consideration of functions cannot explain the phenomenal aspect of conscious experiences, the role of consciousness in the control of actions, guiding the living individual towards biological fitness and adaptation, is of central importance for the scientific approach.
This guidance also includes reporting of conscious experiences by exper-
imental subjects, to establish a connection between properties of stimuli,

6 The dispute of epistemological Idealism against Realism may lead to a long series of
philosophical arguments. I assume, with Trehub (this volume, Chapter 7), that our
perceptual representations are most frequently transparent, revealing veridical features
of the world. When looking at a street scene through the glass window we usually assume
that the medium is transparent, and what we see is veridical. It would not be impossible
that the glass is actually an opaque, high-tech screen and that the street scene is generated
by a computer. Even though that would make great science fiction, in everyday life this possibility is not plausible. In the context of the study of communication media, it is meaningful to say with theorists like Marshall McLuhan that "the medium is the message", but direct perception with its transparency is still a player in the technological game (e.g., direct perception is necessary to watch TV and be affected by intrinsic messages of this communication medium).
registers of brain activity and the corresponding conscious experiences.
There is an epistemological bootstrapping at work here, since conscious
reports are necessary for a science of consciousness to establish a relation
between conscious experiences and the rest of the world.
Godwin et al. (this volume, Chapter 2) report a series of experimental
results that define with precision the kinds of process for which conscious
control is necessary. Critics of the functional role of consciousness for
action, like Rosenthal (2008), argue that it is always possible to conjecture
that reports or any other form of behavior actually derive from uncon-
scious processes. However, as long as there are biological constraints on the coordination of action by means of unconscious processes (for instance, if systemic muscle coordination to produce meaningful linguistic utterances cannot occur unconsciously), those conjectures can be ruled out by scientific evidence.
A question that inevitably arises from this picture is: How does the
conscious mind shape the world? In an embodied and embedded cog-
nitive approach, the workings of "mind over matter" must be situated in a
biological, historical, and social context, considering that the powers of
consciousness are mediated through bodies and tools. This issue can also
be extended to the discussion of the foundations of physics. In quantum
theory, the question of whether consciousness is in some sense necessary
to collapse the wavefunction remains unresolved. Idealist philosophers of nature say "yes", assuming that the wave is a mental process that becomes physically real only after the collapse. Realists, on the other
hand, believe that the waves have physical existence and the collapse is
derived from the interaction of the quantum system with a macroscopic
apparatus (Zurek 2003). The latter explanation has been criticized on the basis that the macro apparatus, when translated to a quantum description, would also be in superposed states, and therefore would not be in a condition to cause decoherence of the quantum system. Looking for a
better approach, Baer (this volume, Chapter 4) discusses the existence
of looping processes between minds and the material world.
Another question that can be posed is about what kinds of mental
process would convey conscious representations. Perlovsky (this volume, Chapter 9)
argues that consciousness is the outcome of a matching of patterns pro-
ducing crisp representations. One of the advantages of his approach is
that such a dynamical process can help to explain the complexities of the
human mind, extending from the role of language to aesthetical appraisal
and religious faith. Perlovsky proposes a cognitive theory that explains
the formation of conscious attended representations, but leaves open
the explanation of conscious non-attended and non-representational
states.
Returning to the question of what makes mental processes conscious,
it seems fair to say that the preceding chapters made progress towards a
satisfactory answer. In summary, our contributors to this volume made
efforts to support the following statements:
1. Conscious contents are the result of dynamical interactions between
the body map and the world map within the brains reality model
(Merker, Chapter 1);
2. Specific modalities of skeletal muscle coordination are biological func-
tions that require conscious processing (Godwin, Gazzaley, and
Morsella, Chapter 2);
3. A scientific approach to conscious processing involves the considera-
tion of a multi-scale hierarchy (Cottam and Ransom, Chapter 3);
4. In cognitive cycles, conscious processes transform the physical world
(Baer, Chapter 4);
5. The matching of stimulus-dependent feed-forward signals with feed-
back signals selects conscious contents (Vimal, Chapter 5);
6. Conscious cognitive operations are supported by brain microstates
and processes (Lehmann, Chapter 6);
7. The first-person perspective is based on a brain mechanism such as
the retinoid system (Trehub, Chapter 7);
8. Cognitive consciousness is composed of crisp representations gener-
ated by matching of patterns (Perlovsky, Chapter 9).
However, an important dimension of mental activity, essential to define
the conscious modality, is still missing: the feeling. In agreement with Damasio's brief and elegant definition, and in addition to the previously identified referential nucleus (see Note 1), conscious experience could be summarized as "the feeling of what happens" (Damasio 2000). This
dimension is closely related to an issue treated by Mitterauer in this
volume (Chapter 8).

10.3 Advancing into conscious territory


In both introspective views and qualitative research in the human sciences, affective and emotional states are integrated with cognition (Thagard 2006) and appear to be constitutive of the identity of the conscious living individual. Higher-Order Thought theories of consciousness also consider the reflexive act of the conscious subject recognizing his/her own states as necessary for the very existence of consciousness. However, these theories require too much: they require that, to be conscious of something, a subject has to think a thought (at least an unconscious one) about his/her own mental states regarding that something (Rosenthal 2002).
They neglect the possibility of self-awareness (in the broad sense of the
conscious individual being aware of what mental activities belong to
himself/herself) being achieved by means of a weaker requirement: the subject just feeling himself/herself as the subject of experiences. The feeling, although not a "clear and distinct idea" in the sense of Descartes, can convey (for example, by means of a somatic marker, as proposed by Damasio 1994) a sense of the informational event belonging to the subject. This view implies a Spinozian concept of the role of the body7 in personal identity (Spinoza 1677), thus overcoming Descartes' restriction of mental operations to the closed domain of a thinking substance (res cogitans).
In classical phenomenological theories (Husserl 1913), conscious
experience is conceived as two-sided: it has an objective side composed of cognitive mental patterns (possibly processed by brain mechanisms,
although this possibility was not discussed by Husserl) and a subjective
side, by which the individual poses himself/herself as the being who has
the experiences. Mitterauer (this volume, Chapter 8), based on the phi-
losophy of Gotthard Guenther, also raises the idea that consciousness has
both objective and subjective sides. He translates the Husserlian double
polarity into a dialogue of neuronal and astroglial cells in the brain, sim-
ilarly to the model of neuro-astroglial interactions proposed by Pereira
and Furlan (2010). The subjective side should in principle be the bearer
of feelings.
With the new family of models inspired by the "astrocentric hypothesis" (Robertson 2002), it is possible to correct Aristotle's mistake of attributing conscious activities to the heart, while keeping the idea of
a consciousness system composed of two subsystems, one responsible for the intellect and the other for feeling. According to Gross (1995), Aristotle (2012) did not dismiss the cognitive role of the brain but conceived a larger system supporting conscious activity: he believed the brain to play "an essential, although subordinate, role in a heart-brain system that was responsible for sensation" (Gross 1995, p. 248).
In neuro-astroglial models, cognitive functions are attributed to the neu-
ronal network, while the generation of subjective feelings is attributed to
the astroglial network, thus preserving the dialogic structure present in
this philosophical tradition.
A limitation of current consciousness theories is the understanding of
mental units. What is the origin of elementary feelings and respective

7 This concept appears in Spinoza's (1677) propositions X ("An idea, which excludes the existence of our body, cannot be postulated in our mind") and XI ("Whatsoever increases or diminishes, helps or hinders the power of activity in our body, the idea thereof increases or diminishes, helps or hinders the power of thought in our mind").
qualia such as hunger, thirst, feeling hot or cold, fear, anger, pain,
pleasure, happiness, sadness, color, sound, taste . . . that compose mental states? Are they in some sense physical, chemical, or biological? Or are they more fundamental, like Plato's Ideas, being (or not) embed-
ded in material systems, as in Aristotelian hylomorphism? The dominant
assumption in scientific approaches to consciousness has been that men-
tal forms are biological, resulting from an evolutionary process (there are
several biological views of mind and consciousness, e.g., Millikan 1984;
Block 2009; Edelman et al. 2011; for a criticism of biological reduction-
ism, see Velmans, 2008). The three most influential contemporary the-
ories of consciousness; Information Integration (Tononi 2005), Neural
Darwinism (Edelman 1987, 1989) and Global Workspace (Baars 1988)
theories, attempt to explain how composite forms are selected and/or
constructed from elementary forms, but do not identify their ultimate
nature.
Although the assumption of a biological origin of mental forms is
likely to be true, it leaves unquestioned the issue of what comes first in
nature, mental or biological forms. Do mental forms somehow emerge
from selective pressure on biological non-mental matter, or do they pre-
exist in nature in potential states, contributing to drive evolutionary pro-
cesses as long as they are actualized? I assume, with Vimal (this volume, Chapter 5), and possibly with Peirce's philosophy, the second option:
elementary forms composing conscious processes exist in nature in a
potential state, depending on specific processes, such as those found in the brains of living individuals, to be integrated and actualized into conscious
episodes. Churchland (2012) also assumes the preexistence of forms, but
moves into a Platonic approach when locating them in an abstract math-
ematical space. For TAM, the conceptual space of consciousness should
be constructed on the basis of lived experiences (as argued in Pereira Jr.
and Almada 2011).
I give two examples of potential elementary mental forms and the
process by which they are actualized. The smell of sulfur has existed in a potential state in nature since this element of the periodic table came into existence. However, it was not actualized (i.e., felt) until a signal from the element reached a receptor with an adequate mechanism to actualize the form of the smell. Only when someone felt the smell of sulfur was the potential form fully actualized. However, the smell of sulfur is not a creation of the receiver (although susceptible to modulation by different receivers). The basic property of this quale is determined by the electronic structure of the element and has existed in a potential state since the element came into existence.
Another example is the taste of salt. This quale has existed in a potential state since sodium and chloride were first chemically bound. To be felt,
it requires signaling (information transmission) to receptors of a system
with adequate mechanisms to actualize (feel) the taste. There may be
different ways to actualize the taste of salt, but with a common basis it
would never taste like sugar. Similarly, for Vimal (this volume, Chapter 5)
the conscious experience (of sulfur or salt) is determined by the interac-
tion of stimuli-dependent feed-forward signals with cognitive feedback
signals via matching and selection mechanisms, making possible the actu-
alization of those potential forms for the subject of experiences.
There is, of course, a deep difficulty in defining "form". To advance, I describe it by giving examples. Our mental life is populated by
forms such as: sensory qualities (colors, sounds, smells, tastes), gestalt
figures formed by combining these qualities, imaginary forms, basic sen-
sations (hunger, thirst, pain, pleasure, fear, anger), mood states (feel-
ing happy or sad, excited, or depressed), higher-order representations
(concepts, thoughts) present to the first-person perspective, as well as
registers of these patterns in a physical medium, allowing them to be
socially reproduced. The latter become cultural patterns that can be
further copied and imitated by other subjects, and as such survive the
death of the individual who originally experienced them. Human mental
forms also include mathematical concepts and truths, aesthetic appraisals
(beautiful and ugly, sublime, and grotesque), values and morals (right
and wrong, good and bad), concepts of personal identity (the Self),
fictitious, religious, and transcendent entities (myths, gods, spirits,
souls).
Phenomenological forms can be conceived as gestalts or composi-
tions of elementary forms. Of course, these complex compositions of
forms do not exist as such in potential states in nature (like the smell of
sulfur and the taste of salt). According to TAM, the complexes are com-
posed of combinations of elementary forms that do exist eternally. The
apparent contradiction between the preexistence of elementary forms
and the creation of new compositions is solved by the concept of com-
binatorial evolution, originally proposed by Cowen and Lindquist (2005)
to explain some intriguing aspects of biological evolution. Even if the number of natural potential forms is finite, their combinations are, in view of our limited lifetime, infinite for practical purposes. This fact can be drawn on to explain how events that appear to be completely new can derive from the recombination of a well-known fixed set of elements.
For example, using a finite number of elements of human language,
we can form very many new combinations that never existed before.
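To make the point concrete, here is a back-of-the-envelope calculation (a minimal Python sketch; the alphabet size and sequence lengths are illustrative choices of mine, not figures from Cowen and Lindquist):

# A small, fixed repertory of elements yields a practically
# inexhaustible space of combinations.

def combination_space(alphabet_size: int, max_length: int) -> int:
    # Count the distinct sequences of length 1..max_length.
    return sum(alphabet_size ** n for n in range(1, max_length + 1))

# With only 26 letters, sequences of up to 60 characters already
# outnumber the roughly 10^80 atoms estimated in the observable universe.
print(combination_space(26, 60) > 10 ** 80)  # True

No individual or culture could ever exhaust such a space, which is the precise sense in which a finite repertory of elementary forms behaves, for practical purposes, as an infinite source of novelty.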
The ontological picture of TAM is that everything that currently exists
unfolds from one eternal, non-created hyper-complex dynamical system
(nature), displaying a wide range of possibilities. The system is conceived
[Figure 10.2 shows the TAM tree diagram: a vertical axis rises from POTENTIALITY through Physics, Chemistry, and Biology to Information and Conscious Processes at the top (ACTUALITY); a horizontal axis runs from Subjectivity to Objectivity; an oblique arrow represents TIME.]
Fig. 10.2 The TAM tree. According to TAM, a conscious living system is composed of three superposed aspects: physical-chemical-biological,
informational, and conscious. On the vertical axis, there is continuity
between the aspects; they are conceived as stages in an evolutionary
process, by which elementary potential states eternally existing in nature
are progressively combined and actualized. The informational aspect is
characterized, in the horizontal axis, by subjective and objective poles.
A part of information processes, including both subjective and objective
features, emerges as conscious experience. The whole system is moving
in time (represented by the oblique arrow).

to progressively differentiate into three aspects: the physical-chemical-
biological, the informational, and the conscious (Fig. 10.2; to avoid con-
fusion, the three aspects should not be compared to the "three worlds" suggested by Karl Popper, which are separate and therefore do not fit into a monist ontology).
For TAM, elementary mental forms have an eternal existence in nature while in a potential state, for instance in quantum superposed states.
Under the operation of adequate mechanisms, during the evolution of the
universe complexes of elementary forms are actualized by self-organizing
individuals, leading to the emergence of forms of life and conscious-
ness (Pereira and Rocha 2000; Pereira 2003). Initially, the dynamical
system actualizes material substrates, as the chemical substances classi-
fied in the periodic table. Material beings have a property of apparent discreteness, as implicit in the terminology of physics (e.g., it makes reference to "particles" and "bodies"). These parts interact and compose
individual systems. In these material systems, energy is patterned, lead-
ing to the appearance of forms of organization, the second aspect. These
forms correspond to the distribution of energy in space and time, as in
the entropy function described by Boltzmann (1872). The (Boltzman-
nian) entropy value of a system is inversely proportional to the richness
of its organizational forms. As long as individual systems self-organize by means of mechanisms that reduce their entropy (at the same time increasing the entropy of the environment), they actualize forms that pre-existed in a potential state.
These low-entropy and transmittable forms are called "information", for example, the information carried by our macromolecules or by the hard disk of a computer. An information pattern can be transmitted from one material system to another. These mental forms can exist in
diverse substrates as quantum states of matter, chemical elements and
compounds, biological macromolecules and tissues, and cultural media
as printed paper, cellulose, vinyl, magnetic tapes, and hard disks of
computers.
The process of actualization of elementary forms underlies biological
and cultural evolutions. These forms are actualized in complexes, composing, on the phylogenetic scale, morphological properties of biological species and, on the ontogenetic scale, conscious episodes experienced by an individual. Biological as well as mental forms are therefore conceived as complexes formed by combinations of elementary forms, just as letters combine to form words. In the case of biological evolution, the pathway
from the elementary to the complex is well known: combination of chem-
ical elements in the DNA, transmission to ribosomes by RNA (as well as
some interference of small RNAs in the process), and assemblage of
proteins with functional capabilities.
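The arithmetic of this pathway shows the same combinatorial leverage. The sketch below (Python; the figures are standard textbook biology, used here only to illustrate scale) moves from the four-letter DNA alphabet to the space of possible proteins:

# From a four-letter alphabet to an effectively unbounded protein space.
bases = "ACGT"
codons = len(bases) ** 3          # 64 possible triplets
amino_acids = 20                  # encoded by the standard genetic code
chain_length = 150                # a modest protein length
protein_space = amino_acids ** chain_length

print(codons)                     # 64
print(protein_space > 10 ** 195)  # True: beyond any exhaustive search

Even a short chain of 150 residues spans more combinations than could ever be physically realized, so evolution necessarily actualizes only a minute fraction of the potential forms.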
The third aspect, consciousness, appears when self-organizing individ-
uals develop mechanisms of information reception and processing up to
the point of crossing a threshold of activity that allows the full actu-
alization of natural forms. In the formation of conscious episodes, the
combinatory process occurs by means of the operation of brain mecha-
nisms that combine internal and external information patterns. The process of actualization includes necessary phases, such as detection, selection, matching, and integration of patterns, forming the conscious episodes
experienced by an individual. Full actualization means that such patterns
are not only acknowledged and eventually represented, but also experi-
enced, in the sense that they both sense/affect and are sensed/affected
by the body of the individual. This complex interaction process was sci-
entifically captured in Damasios somatic marker hypothesis (Damasio
2000).
Only when present in an individual's consciousness (i.e., in the first-person perspective) are mental forms fully actualized, revealing their
phenomenal aspect for the individual. For TAM, conscious processing
is therefore one step in the evolution of the universe, when potential
forms are actualized, influencing the next step by means of the actions of
conscious living individuals. Such an idea of consciousness influencing
evolution was advanced by Baldwin (1896). It implies a very large, uni-
versal loop (similar to the cognitive cycle discussed by Baer, this volume,
Chapter 4) by which an endpoint of evolutionary process, the conscious
processes of the living individual, affects the foundation upon which it is
built, nature. This thesis has two important philosophical consequences:
1. Consciousness has both an epistemological and an ontological sta-
tus. Besides being a subjective mental phenomenon to be studied by
the sciences of the mind, consciousness is also a physical-chemical-
biological and informational phenomenon to be studied by the sci-
ences of nature. It is the vehicle for actualization of natural forms,
which otherwise would remain in a potential state, and therefore with-
out influence on evolutionary pathways;
2. Human culture and technology, the application of knowledge to the
transformation of nature, are completely natural affairs. They facilitate
the actualization of potentialities of nature. The laser, for instance, is
at the same time a technological creation (it did not appear sponta-
neously in nature) and a natural phenomenon, in the sense of reveal-
ing a possibility of photonic coherence inherent to the nature of light.
As a consequence, the conceptual oppositions of nature and culture,
or nature and technology, lose their appeal for the understanding of
the contemporary epoch. Human cultural development, leading to the "society of knowledge" or the "technological society", is just the continuation of natural evolutionary processes, progressively actualizing in our region of the universe some of the potentialities (as in the case of the laser) that exist eternally in nature.
In the next section, we will see how TAM was almost formulated in Aristotle's philosophy, and how it can be formulated in the context of
modern Information Theory.

10.4 TAM: From Aristotle to information theory, and beyond


The concept of patterns was advanced in Plato's concept of a world of ideas separated from the material world, as well as in Aristotle's concept of forms embedded in the material world: his doctrine of hylomorphism, stating that matter and form are complementary concepts necessary to
describe natural beings. For Plato, the actualization of ideas depends on
a supernatural process carried by a demiurge, while in the Aristotelian
approach to physics forms have a natural power to shape matter and
reproduce themselves. This power was named the "formal cause", one of the four powers of nature (the others being the efficient, material,
and final causes). Understanding how the transmission of forms occurs
is a task similar to contemporary efforts to understand informational
relations between structures and events.
According to Aristotle, natural beings are constituted of a substrate
(matter) and a form; the latter can be in a potential or actual state. In a
criticism of his predecessors, who were not able to reconcile Monism and diversity (the one and the many), Aristotle argues against the difficulty of conceiving "the same thing being both one and many, provided that these are not opposites; for one may mean either potentially one or actually one" (Aristotle 2012; Physics, Book 1, Section 1). Potentiality
makes room for change and movement in nature (Metaphysics, Book Z,
Section 9).
In a summary of his thesis, Aristotle states that the principles of nature
are three. First, the form; second, the contrary, or the privation of the
form, leading to a process of actualization; third, the substrate, matter.
In his own (translated) words:

We explained first that only the contraries were principles, and later that a sub-
stratum was indispensable, and that the principles were three; our last statement
has elucidated the difference between the contraries, the mutual relation of the
principles, and the nature of the substratum. Whether the form or the substratum
is the essential nature of a physical object is not yet clear. But that the principles
are three, and in what sense, and the way in which each is a principle, is clear.
(Aristotle 2012; Physics, Book 1, Section 7)

In the Zeta book of Metaphysics, he argues that substances are the ulti-
mate reality, and that differences in form enable us to distinguish one
substance from another (for a discussion of this controversial thesis,
see Aubenque 1962). For Aristotle, matter is "possibility of being". Form is responsible for determining the kind of being (e.g., the bio-
logical species), while matter is responsible for individuation (distinction
between individuals of the same species). In the study of substantial gen-
eration (summarized in Metaphysics, Book Z, Section 8), the reciprocal
action of form and matter suggests that every existing being comes from
the intersection between a material possibility and the action of a form.
Matter and form, as constitutive principles of beings, are also considered as causes, that is, the "material cause" and the "formal cause". In the
passage from potency to act, there is a common term. A potentiality that
is already present in a piece of matter (e.g., sculptable marble) meets
with a form (e.g., the form of a horse) that is brought by an efficient
cause (e.g., the sculptor), resulting in full actualization of the form (e.g.,
a horse sculpture). The active form corresponds, in Aristotelian terminology, to the notion of energeia, which is "an immanent act, in the sense that it has no other purpose than itself; it improves the agent and does not tend to work for a result alien to this agent; its ultimate purpose is none other than the activity for its own sake" (commentary by Tricot in Aristotle 1953; my translation). Therefore, it is necessary to be careful
not to interpret the immanent act as an efficient cause, a relation between
the agent and an external object.
In the book Physics, presenting his postulate of motion, Aristotle infers
via induction that all things that exist naturally are subject to movement:
"nature is the principle of movement" (Physics, Book 3, Section 1). The term "movement" encompasses far more than the movement of bodies; it refers to the actualization of what is in potency. We find in the Categories book (Section 14) a list of kinds of movement: "There are six sorts of movement: generation, destruction, increase, diminution, alteration, and change of place."
Most of the book of Physics is devoted to the study of problems relating
to the insertion of matter as a principle of being: the problem of chance,
the problem of the infinite continuum, as well as issues related to space
(i.e., the location of objects) and time. Unlike the schools of Parmenides and Plato, he admits that beings can exist as an indeterminate possibility, thus reconciling being with movement. A being is composed of what is real as well as what may become real. The process of becoming is
intelligible; movement can be explained by the combination of the four
causes. However, the material cause makes natural threads contingent.
Chance is not the absence of causes, but the very presence of the material
cause, producing an irregular dynamics. This presence also prevents full
a priori explanation for any phenomenon. To be complete or almost
complete, knowledge has to be a posteriori (for a full discussion of
Aristotles epistemology, see Aubenque 1962).
Materiality both causes movement irregularity and provides a substrate
for correcting and updating threads. Because of these characteristics, the
presence of matter is a condition of the possibility for self-subsistence
of substances. These beings, which are far away from the Prime Mover (Aristotle's God, the ultimate final cause, not the efficient cause of the Judeo-Christian tradition), are steeped in contingency. Operating with mobility, they can for some time maintain their stability, playing with
contingent factors, similar to the contemporary concept of stability in
chaotic dynamical systems. Although Aristotle had a Fixist view of forms
(i.e., the repertory of forms was conceived as static), his conception can
be made compatible with contemporary views of evolutionary processes
(as mentioned in the previous sections) by considering that, if such forms have combinatory powers (as implicit in the concept of formal causation) and their repertory is large, their products, corresponding to the phenomena that we experience, can be taken as infinite for practical purposes.
In the Aristotelian linguistic approach to ontology in the Organon (Categories, Sections 5-13), we find that the actualization of potential forms has the propositional structure "S is P", where S is the being (hylomorphic substance) that actualizes a form (the property P) at a given moment of time. P has a potential existence in a world of possibilities (corresponding to Plato's world of Ideas), but only comes into existence in the natural world when it is actualized by a substance. Existence of forms in the natural world implies an organizing action on matter.
An information pattern corresponds, in the Aristotelian framework, to
a potential form that can be actualized by different substances and then
transmitted to other substances. The transmission of form is also a com-
mon theme in Aristotle's biology and theory of art. Parents transmit their
form (the species to which they belong and morphological traits) to their
children; a sculptor transmits the form he has in his mind to a material
(e.g., bronze) in the making of a statue.
Besides S and P, it is implicit in the statement (although Aristotle does
not make it sufficiently explicit) that the actualization process is witnessed
by a conscious subject (CS), who makes the statement. The inclusion of
this second subject, CS, gives a clue to the relation between information
(in the sense of actualization of a distinguishable form in a material
system) and conscious activity: the actualization of forms (P) in substances
(S) is witnessed by a conscious subject (CS). It is a natural fact that a form
(the property P) is actualized (using a contemporary term, "instantiated") in substance S, and this process is witnessed by a CS. The latter is not a
Platonic demiurge producing the fact, but only a witness who is affected
by what happens. This move expands Aristotelian Hylomorphic Monism
to the TAM view, which includes conscious activity as a fundamental
phenomenon.
Using another contemporary conception, it may be clarifying to say that the effect of such a natural fact is the feeling of what-it-is-like-to-be-experiencing-that-P-is-instantiated-in-S. Aristotle does not seem to have paid much attention to this subjective effect, which is crucial for any theory of consciousness. One may also suggest that being affected by the fact corresponds to the deeper aletheia phenomenon (that is, the manifestation of the form of the being for the conscious subject) that Aristotle (according to Heidegger 1926) would have neglected.
According to TAM, to be conscious that "S is P", the CS should be sensitive to the event that S is P and affected by the recognition of the event. These sensitive and affective feelings would be responsible
for conscious processes being framed in the first-person perspective. For
instance, John consciously knowing that Athens is the capital of Greece
is in the first-person perspective even if John is not Greek. This is because
he is somehow sensitized to recognize Athens as the capital of Greece and
affected by the recognition, for example, because John likes to read Plato.
In this formulation, there are two subjects: the subject of the sentence (S) and the subject for whom the content is actualized (CS). In special cases (self-referential statements such as "John knows that he is lucky") they may be the same. However, being self-referential is not necessary for a statement to denote a conscious process, since the actualization of forms may (and often does) occur to a being that is different from the cognitive subject who witnesses the actualization.
A CS being conscious of "S is P" therefore implies a process by which the CS is sensitized to recognize, and once recognizing is affected by, the content of the statement. This process can be explained by the causal powers of forms, corresponding to Aristotle's formal causation.
Forms naturally have the power of being transmitted and affect their
receivers. A Neo-Aristotelian concept of consciousness would be in terms
of a mental activity (energeia) of a subject being sensitized to recognize
and then affected by the actualization of a form.
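This Neo-Aristotelian schema can be compressed into a single formula. The notation below is mine (neither Aristotle's nor a standard formalism): Act(P, S) abbreviates "the form P is actualized in the substance S", while Sens and Aff stand for the CS being sensitized to recognize, and affected by, that actualization:

\[
\mathrm{Conscious}\bigl(CS,\ \text{"S is P"}\bigr) \iff
\mathrm{Sens}\bigl(CS,\ \mathrm{Act}(P,S)\bigr) \wedge
\mathrm{Aff}\bigl(CS,\ \mathrm{Act}(P,S)\bigr)
\]

Read this way, the first-person perspective is simply the standpoint of the CS for whom both conjuncts hold.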
The corresponding contemporary concept of information is central
to scientific and philosophical approaches to cognition. Perceptual pro-
cesses can be understood as the transmission of forms from a physical
substrate where they are embedded in a potential state to the conscious
mind of a self-organizing individual who receives and further combines
them. Conscious action can conversely be conceived as a transmission of information from the mind of the individual to a material substrate, as
in the Aristotelian classical example of formal causation (a sculptor who
transposes a form from his conscious mind to marble). It is assumed that
these information patterns are not radically different, when actualized in
the conscious mind or when embodied in a material substrate.
It should be noted that Aristotelian philosophical analysis was based
on the grammar of human language. The propositional structure that
supports his physics, foundations of logic, and metaphysics depends on the subject/predicate structure of the Greek language. It would not be adequate for an explanation of non-human cognition, or even for other human natural language grammars (of course, the last point may be disputed by defenders of Noam Chomsky's concept of grammar). A broader view
of information appeared with the Weaver and Shannon (1949) approach,
which is also limited in regard to the understanding of consciousness, but
for a different reason.
The modern concept of information seems to have implicit roots in the Aristotelian concept of form (for a discussion of Aristotle's philosophy and the contemporary concept of information, see Aleksander and Morton 2011). This long history continued with Ludwig Boltzmann's account of the second law of thermodynamics in terms of a mechanical atomistic model, in the context of the kinetic theory, leading to the con-
temporary mathematical theory of information of Shannon and Weaver.
An atomistic approach had explicitly been rejected by Aristotle, in his
criticisms of Democritus. In spite of this difference, the usage of the term "information" in contemporary cognitive sciences and philosophy has both Aristotelian and Shannonian connotations.
Considering the complexity of a many-particle interacting sys-
tem, Boltzmann (1872, 1896) adopted an atomistic and probabilistic
approach, considering that each macrostate (definable in terms of thermodynamic values, such as temperature, volume, and pressure) has a probability, given by the number of microstates that generate it. A macrostate that can be generated by a larger number of microstates is
then considered as more probable than a macrostate that can be gener-
ated by a smaller number of microstates. In this framework, the increase
of entropy was conceived as a spontaneous evolution towards more prob-
able macrostates.
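Boltzmann's counting argument can be replayed in a few lines (a Python toy of my own: six two-state "particles" stand in for a gas, and the number of particles in state 1 stands in for a thermodynamic macrostate):

from itertools import product
from collections import Counter

# Microstates: all 2^6 configurations of six two-state particles.
microstates = list(product((0, 1), repeat=6))

# Macrostate: how many particles are in state 1.
macrostate_counts = Counter(sum(m) for m in microstates)

for k in sorted(macrostate_counts):
    count = macrostate_counts[k]
    print(f"macrostate {k}: {count} of {len(microstates)} microstates, "
          f"probability {count / len(microstates):.3f}")

The mid-range macrostate (k = 3) is realized by 20 of the 64 microstates and is therefore the most probable, which is the sense in which the increase of entropy is an evolution towards more probable macrostates.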
When Weaver and Shannon (1949) formulated their mathematical theory to measure the quantity of information transmitted between a source and a receiver, they soon realized that its central formula was intimately related to Boltzmann's entropy. What could that relation mean? A first interpretation was that the higher the entropy of the source, the smaller the quantity of information it generates. Boltzmann's entropy of a macrostate of a closed system is S = k log W, where W is the number of microstates that generate the macrostate; in the equivalent probabilistic (Gibbs) form, S = -k Σ_i p_i log p_i, where p_i is the probability of the i-th microstate. In Shannon and Weaver's theory, the quantity of information generated by a source with symbol probabilities p_i takes the same mathematical form, H = -Σ_i p_i log2 p_i (measured in bits), so that a gain of information corresponds to a reduction of entropy; in this tradition, information came to be read as the inverse of entropy ("negentropy").
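A short numerical check (Python; the two probability distributions are illustrative examples of mine) shows how the Shannon quantity behaves:

import math

def shannon_entropy(probs):
    # H = -sum(p * log2(p)), in bits; zero-probability symbols contribute nothing.
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: maximally uncertain source
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits: highly ordered source

A source whose output is nearly fixed (the second case) is highly ordered, and each of its messages resolves little uncertainty; this formal parallel with thermodynamic entropy is what invited the interpretations discussed next.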
A hundred years after Boltzmann's first formulation of the probabilistic approach, Dretske (1981) combined the Shannon-Weaver concept of information with a propositional approach to make additional claims, relating
information transmission with the acquisition of empirical knowledge.
This move attracted attention in the emerging Cognitive Sciences at the
end of the twentieth century. However, the application of Information
Theory to the explanation of cognitive processes faces two epistemolog-
ical problems (besides the limitation for the explanation of semantic
properties, already noted by Weaver and Shannon [1949]). First,
attempts to construct meaningful mental concepts by means of combinatorial processes operating on physical entities (such as atoms, molecules, or chemical substances) are philosophically naïve. As mental forms are considered to be irreducible to physical properties, the combination of physical elements (as complex as the combinatory functions may be) does not afford an explanation of mental processes. Second, both Boltzmann's original concepts and the Weaver-Shannon reconstruction were
about states of systems, not about messages being transmitted. As Wicken
(1987) appropriately noted, the above concept of "information content" does not capture structured relationships within the message being
transmitted. He proposes the concept of complexity (or algorithmic com-
plexity, as defined in the work of Chaitin 1966) as a better way to refer
to structured relationships between the elements of the message.
In the context of psychology, linguistics, and philosophy of mind, effec-
tive transmission of information is conceived as involving more than pas-
sive reception of content; it includes the attribution of subjective mean-
ings. For TAM, the attribution of meaning is one step in the process
of actualization of forms carried by the transmitted message. Meanings
may not correspond to properties of external stimulation, because actu-
alizations of forms always occur in a combinatorial manner, leading to
gestalts. The information integration process that leads to the formation
of conscious episodes is based on the individuals meanings, which have
a personal and ontogenetic history.
The distinction between meaning and the intentional object to which it refers is classical in the philosophy of logic. Frege and Husserl offered Platonic conceptions of the intentional object, while Twardowski (see Betti 2011) proposed an interpretation that is closer to the neo-
Aristotelian view advocated here. In the process of information trans-
mission, the message received by a conscious individual contains an
informational content that specifies the contours of potential forms, that
is, properties of objects and processes in the world. Upon reception of
the message, the self-organizing individual is sensitized to recognize, and affected by the recognition of, the informational content, then producing further meanings, supplementary to the informational content carried by
the message. Such further meanings are related, but not limited to, the
informational content of the message.
This concept of meaning has a close connection with Bateson's concept of information as "a difference which makes a difference" (Bateson 1979). The first difference would correspond to the message. The mes-
sage is a difference in the sense that it has a significant signal-to-noise
ratio relative to some particular context within which the individual is
situated. The subjective meaning, dependent on the individual's previous experiences and memories, corresponds to the second difference.
Such a concept of information could be applied to the process of
conscious actualization of forms for individuals. This process is mediated
by the attribution of meaning and completed with the formation of a
feeling about the information.
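Bateson's two differences can be read operationally as a two-stage filter. In the sketch below (Python; the threshold, the background sample, and the receiver's update rule are invented for illustration and belong neither to Bateson nor to TAM), a signal counts as a message only if it stands out from the noise of its context, and counts as information for the receiver only if it changes the receiver's state:

import statistics

def stands_out(signal, background, k=3.0):
    # First difference: the signal deviates from the background
    # mean by more than k standard deviations.
    mu = statistics.mean(background)
    sigma = statistics.stdev(background)
    return abs(signal - mu) > k * sigma

def update(receiver_state, signal, gain=0.1):
    # Second difference: the message changes the receiver, a crude
    # stand-in for meaning attribution and the formation of a feeling.
    return receiver_state + gain * (signal - receiver_state)

background = [0.9, 1.1, 1.0, 0.95, 1.05]
signal, state = 4.2, 1.0
if stands_out(signal, background):
    new_state = update(state, signal)
    print("a difference that made a difference:", new_state != state)  # True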

10.5 The dynamics of conscious systems


The climax of my long answer is that what makes an individual's mental activity conscious is the presence of sensitive and affective feelings about the content of recognized information. Only with the feeling is the process of actualization completed. Feelings are assumed to arise from
fundamental potentialities of nature. Therefore, the qualities of feelings
(the qualia) are not explainable from simpler phenomena, although susceptible to being related to physical states of affairs, such as the tensions generated by mass and charge separation (as argued by Baer, this vol-
ume, Chapter 4). In this sense, TAM is compatible with non-reductionist
formulations of Physicalism, which would necessarily extend the discipline of physics beyond Newton's classical framework (see discussion in
Pereira 2003).
Elementary potential forms are always actualized in complexes, for
individuals and in context (as discussed by Pereira and Ricke 2009).
Introspectively, in many cases we are not able to identify the matrix of elementary forms, since our conscious thought operates with their combinations, forming gestalts. The process of actualization corresponds to
a presentation; a conscious episode is presented to the individual, who
experiences it. For Platonic Idealism, Ideas in their own world are always
actual, while presentations in the sensory world are just illusory appear-
ances. For TAM, Ideas are potential forms, while presentations are the
actualization of the forms and, therefore, have an ontological status. Con-
scious states and cultural artifacts are then conceived as actualizations of
potentialities of nature; technologies are conceived as pathways to help
nature to unfold its potentialities.
TAM implies that the dynamics of conscious systems occurs at three
related levels (Fig. 10.3; for a discussion of the hierarchy of processes,
see Cottam and Ransom, this volume, Chapter 3). At the lower level, the
system can be described as an ordinary physical-chemical-biological one,
ruled by causal relations that ultimately reduce to the four fundamental
physical forces. At the middle level, the system can be described as an
information-processing system obeying the rules of information theory.
At the higher level, the system can be phenomenologically described in
[Figure 10.3 shows three levels of states at times t1 and t2: Conscious States A and B linked by a symbolic relation, non-conscious Mental States C and D linked by an informational relation, and non-mental Physical States E and F linked by a causal relation, with oblique arrows marking sensitive (E to B) and affective (A to F) feelings.]
Fig. 10.3 Kinds of temporal relations between and within aspects of a
conscious dynamic system: Relations within conscious states (A and B)
are symbolic, and relations within mental states (C and D) are informa-
tional. Causal relations within states apply only to physical aspects of the
system (states E and F). Bottom-up and top-down relations (oblique arrows) are proposed to support feelings. The ascending arrow (E to B) represents sensitive feelings, by which a state of the body and/or the world is felt as a conscious mental sensation (e.g., sensations of heat and cold, hunger and thirst). The descending arrow (A to F) represents affective feelings, by which a state of mind is felt as a state of the body and/or the world (e.g., chills up the spine, facial expressions, and orgasms).
Abbreviations: NC = Non-Conscious; NM = Non-Mental.

terms of conscious experiences or presentations, which can be symboli-
cally represented. Each presentation conditions the subsequent ones; the
corresponding symbolic representations take the form of logical chains.
In this sense, when looking at conscious dynamics at the higher level,
symbolic processes can be considered to be typically conscious (Dulany
2011).
The actualization of elementary forms into conscious episodes triggers
in the conscious individual the formation of sensitive and affective feelings
(Fig. 10.3). The issue of "what it is like to be a bat", raised by Nagel (1974), would then be translated into a question about the feelings of bats.
Considering that feelings are psychophysiological phenomena, they can
be scientifically studied by means of measurement of the activity of brain
subsystems that support them, as well as related to behaviors contingent
on the activation of these systems. This method gives only partial results,
since it is limited to the third-person perspective. However, considering
our monist assumption, these partial results are supposed to reflect the
bigger picture to a reasonable degree.
The above cross-aspect relations (sensitive and affective feelings) are not causal in the ordinary usage of the term in science (making reference to physical forces, as in Harnad and Scherzer 2008), but can be conceived as similar to Aristotle's formal causation, which applies, among other possibilities, to the relations between aspects of the same
system. The unity and individuality of a conscious being in time depend on such resonances between its aspects. These resonances have been
scientifically approached in the fields of psychosomatics and integrative
medicine (Walach et al. 2012), as well as by means of the somatic marker
hypothesis in neuropsychology (Damasio 1994). For each individual,
the three aspects (physical-chemical-biological, informational, and con-
scious) cannot be separated. When a person dies, his/her conscious activity apparently "goes away" (this expression is intentionally dubious), but some of the complexes of forms that he/she constructed can survive, when re-actualized by other individuals (for instance, when they read a book written by that person). Once a complex is actual-
ized for an individual, it can be reproduced to the same individual or to
others.
A necessary condition for reproducibility is the complex being rep-
resented. In this sense, a representation is a copy (Pereira 1997) of a
presented complex. For the individual, a copy may be used for executive
functions and working memory. Reasoning with representations (e.g.,
counterfactual thinking), the individual can go beyond the here and now
of existence, reconstruct the past, and project the future. The basis of
cultural evolution is the embodiment of presented complexes in material media, such as texts and paintings. This embodiment generates cultural
units that can be further copied and re-actualized by other individuals.
Central to the conscious life of individuals in society is an interchange
of information with the environment. Such an environment contains not
only physical-chemical-biological objects and processes, but also cultural
entities. Described as "objective spirit" (Hegel) or "memes" (Richard Dawkins), cultural forms can also be regarded as potential forms to be
actualized in individual consciousness. Each individual is exposed to
physical and cultural information patterns corresponding to potential
forms that can be actualized in his/her conscious mind.

10.6 Inserting feelings in Global Workspace theory


These considerations indicate that an answer to the question of what
makes mental processes conscious should involve not only the existence
of some kinds of information-processing leading to the construction of
crisp representations and knowledge, but also the presence of feelings.
To complete the answer, it becomes necessary to discuss the existence of
thresholds for mental processes to become consciously attended by the
individual. Passing a threshold, in the TAM framework, can be conceived
as requiring conjoint activation of knowing and feeling processes, thus
accessing brain-wide broadcasting networks (see Carrara-Augustenborg
and Pereira 2012) and reaching limited capacity processing circuits. The
latter requirements are part of the Global Workspace theory (GWT)
of consciousness (Baars 1988, 1997). Considering the importance of
the last steps for the actualization of mental forms, I suggest (following
the directions of Schutter and van Honk 2004) an expansion of GWT
towards the consideration of sensitive/affective feelings and their respec-
tive specialized brain networks.
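As a minimal sketch of this conjoint-threshold conjecture (the additive combination rule, the names, and the numeric values below are merely illustrative assumptions, not a formal part of TAM), the access condition can be written down in a few lines:

    # Illustrative sketch: a mental form gains access to the conscious
    # focus only when knowing and feeling processes are conjointly
    # active and their combined strength crosses the access threshold.
    # All values are hypothetical, chosen only for demonstration.

    ACCESS_THRESHOLD = 1.0  # hypothetical joint-activation threshold

    def reaches_workspace(knowing, feeling):
        """Conjoint activation: both processes must contribute."""
        both_active = knowing > 0.0 and feeling > 0.0
        return both_active and (knowing + feeling) >= ACCESS_THRESHOLD

    # A strongly known but unfelt content stays out of the spotlight...
    print(reaches_workspace(knowing=1.2, feeling=0.0))  # False
    # ...while a moderate content with an affective charge gets in.
    print(reaches_workspace(knowing=0.6, feeling=0.5))  # True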
GWT was proposed as a cognitive theory of consciousness in 1988.
Newman and Baars (1993) further argued that subcortical systems are
responsible for the state of consciousness in the clinical sense of the
term (see also Merker, Chapter 1; Godwin et al., Chapter 2; Lehmann, Chapter 6: all in this volume), while thalamo-cortical systems are respon-
sible for the cognitive contents. Since then, experimental results and a
theoretical synthesis by Panksepp (1998, 2005, 2007) have indicated that
subcortical circuits also contribute to conscious contents of the affec-
tive/emotional kind, leaving open the issue of how patterns of activity
formed in these circuits enter the Global Workspace and how they are
broadcasted to other parts of the brain.
Many sensations (like feeling hungry) and mood states (like feeling
tired) are typically conscious. We do not need to think about them to be
conscious of them; even if we try to consciously inhibit them, they don't
go away completely until the right physiological states are achieved. In
this regard, they are different from primitive drives that we can con-
sciously inhibit. They: (1) belong to the conscious subject; (2) are not
representations of an external situation; (3) co-exist with several differ-
ent cognitive contents; and (4) interfere with the processing of cognitive
contents (as argued by Damasio, 2000).
According to Panksepp (2007), affective consciousness has three modalities:8
1. the exteroceptively driven sensory-affects that reflect the pleasures and aversions of worldly objects and events;
2. the interoceptively driven homeostatic-affects, such as hunger and thirst, that reflect the states of the peripheral body along the continuum of survival; and
3. the emotional-affects that reflect the arousal of brain instinctual action systems . . . as basic tools for living to respond to major life challenges such as various life-threatening stimuli (leading to fear, anger, and separation-distress) and the search for various life-supporting stimuli and interactions (reflected in species-typical seeking and playfulness, as well as socio-sexual eagerness and maternal care).

8 Although using different terminologies (for a criticism of this kind of terminology, see, for example, Clark 1997), the approaches advocated by Panksepp and myself are highly compatible. What he calls "sensory-affects" partially corresponds to what I call "sensitive feelings"; his "homeostatic affects" correspond to my "affective feelings", while his "emotional affects" also partially correspond to my "affective feelings". In the following discussion, I will continue to use my own terminology.
Neuropeptides and other hormones are diffused in blood (e.g., by
means of hypothalamic function) and reach several parts of the brain and
other parts of the body, leading to the formation of feelings. These signaling molecules belong to a broader network, including the immune and endocrine systems and the signaling molecules they release, that reaches the whole body of the living individual by means of blood flow. However, this kind of diffusion process is computationally limited. Recent discoveries of neuro-glial-immune-endocrine inter-communication, as well as the integrative role of glial cells, especially the astrocytic network (Pereira and Furlan 2010), allow the construction of broader explanatory models (e.g., Pereira 2012).
The progress of several areas of neuroscience and psychology in recent
decades suggests that knowing and feeling processes are carried by com-
plex systems, covering a range of spatial and temporal scales of activity.
The limitations of GWT in accounting for this complexity could be
solved by considering the workspace to be composed of two parallel
broadcasting networks talking to each other: one integrating and broadcasting cognitive contents, and the other integrating and broadcasting the individual's feelings about the contents. In line with this idea, Pereira and Furlan (2010) argued that the astroglial network is the organism's "Master Hub" that integrates somatic signals with neuronal processes to
generate subjective feelings. Neuro-astroglial interactions in the whole
brain compose the domain where the knowing and feeling components
of consciousness get together.
An extended GW would be composed of these two broadcasting networks and their reciprocal interactions. One is composed of neuronal thalamo-cortical circuits providing the cognitive contents of consciousness (Newman and Baars 1993). These contents are encoded mostly by glutamatergic excitatory transmission, balanced by GABAergic inhibitory actions. The other is composed of subcortical regions such as the periaqueductal gray area, the hypothalamus, and the striatum (the nucleus accumbens), limbic structures such as the amygdala (LeDoux 1996) and the cingulate gyrus, as well as insular and orbital cortices
(Tsakiris et al. 2007; Volkow et al. 2005). These regions are involved in
specific dopaminergic, serotonergic, noradrenergic, and cholinergic cir-
cuits that modulate the balance of excitation and inhibition, producing
sensitive/affective/emotional conscious states. Neuropeptides and other
hormones drive these systems towards basic feelings. Broadcasting of
sensitive/affective signals to the whole brain requires a supplementary
mechanism. Pereira and Furlan (2010) argued that this is the astroglial network, mediated by purinergic mechanisms. The astroglial network is
connected to blood flow, cerebrospinal fluid (Iliff et al. 2012), and the
extracellular matrix (Dityatev and Rusakov 2011), carrying the signals to
endocrine and immune systems, and possibly to the whole body of the
living individual.
Conscious contents (knowledge and feeling) can be associated, forming blocks (e.g., the representation of a house and the feeling of protection). In an extended view of GWT, such blocks are the players that reach the conscious spotlight. The broadcasting of one kind of content does not compete with the other, since each one makes use of its own network to reach a wide brain audience. Conflicts do occur at the end of brain information-processing lines, but coherent coordination of behavior requires the absence of contradictory commands to skeletal muscles (Godwin et al., Chapter 2, this volume). However, these conflicts are not between feeling and knowing; they involve the above-mentioned blocks; for instance, block A, including the A-feeling and the associated A-intentional-representation, conflicts with block B, including the B-feeling and the associated B-intentional-representation. Accord-
ing to this conjecture, the dynamics of the extended workspace would
not be based on a selective mechanism, as proposed by Neural Darwin-
ism (Edelman 1987), but mostly on cooperative coalitions of intentional
representations and their associated feelings. The conscious content of
an expanded GW is represented by the area of the lozenge in Fig. 10.4.
The dynamics of the extended GW should not be conceived as obeying a dictatorial "winner takes all", or even the classical Darwinian principle of "survival of the fittest". Contemporary game theory has shown the possi-
bility of games where all cooperative players can win (Fehr and Rocken-
bach 2004), thus contrasting with the classical competitive approach of
von Neumann and Morgenstern (1944). The main function of the con-
sciousness mechanism would be the integration of patterns processed in
a distributed computing system, in such a way that most of the above-
threshold activated circuits would contribute with their resulting patterns
to the composition of a conscious episode. This function is described
by the mathematical model of Tononi (2004), expected to be compatible (Ricardo Gudwin, personal communication) with the computational models constructed by Baars and Franklin (2003).

[Fig. 10.4: a lozenge spanning, from left to right, Peripheral Conscious Feeling, the Conscious Focus of Attention, and Peripheral Conscious Information.]

Fig. 10.4 The conscious continuum of an episode. The area of the lozenge covers three modalities of content that coexist in conscious episodes. From left to right, Peripheral Conscious Feelings refer to feelings with variable degrees of intensity, which do not have a strong cognitive match in the episode; the Conscious Focus of Attention refers to coalitions of feeling and knowledge processes that reinforce each other to the point of crossing a threshold for dominance; and Peripheral Conscious Information refers to information patterns with variable degrees of crispness (for the latter concept, see Perlovsky, this volume, Chapter 9), which do not have a strong sensitive/affective match in the episode. My concept of Peripheral Consciousness is similar to Block's (1997) Phenomenal Consciousness, but I do not use his terminology in order to avoid confusion with the classical Kantian and the more recent usages.
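A small simulation can make the contrast with winner-takes-all dynamics concrete (the block names, gain, number of steps, and threshold below are illustrative assumptions only): the feeling and knowing components of each block reinforce each other, and every block whose settled activation crosses the dominance threshold enters the episode, so that several blocks can co-occur, as in the toothache-while-writing case quoted below.

    # Illustrative sketch of cooperative coalition dynamics in an
    # extended GW. Each "block" pairs a feeling with an intentional
    # representation; within a block the two components reinforce each
    # other, and every block whose settled activation crosses the
    # dominance threshold contributes to the episode: no single winner.

    DOMINANCE = 1.5  # hypothetical threshold for entering the episode
    GAIN = 0.3       # hypothetical cross-reinforcement gain
    STEPS = 5

    def settle(feeling, knowing):
        """Let the two components of a block mutually reinforce."""
        for _ in range(STEPS):
            feeling, knowing = (feeling + GAIN * knowing,
                                knowing + GAIN * feeling)
        return feeling + knowing

    blocks = {  # name: (initial feeling strength, initial knowing strength)
        "toothache": (0.6, 0.1),
        "paper-writing": (0.2, 0.7),
        "street noise": (0.05, 0.1),
    }

    episode = [name for name, (f, k) in blocks.items()
               if settle(f, k) >= DOMINANCE]
    print(episode)  # ['toothache', 'paper-writing']: both co-occur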
The concept of an extended GW can be used to refute objections like
the following: "conscious awareness is frequently regarded as a single variable. Particularly, the theory of the common workspace (Baars 1988, 1997), most influential today, presumes that consciousness essentially consists in the widespread access to a corresponding process from the whole system of brain processes. From this point of view, the information processed outside the common workspace and thus not accessible for all other subsystems of the processing machinery is processed unconsciously. For example, if I am concentrated on writing a paper while still feeling my toothache in the background, it is said that my consciousness (i.e., my common workspace) is busy with writing, whereas the pain is perceived but unconsciously" (Kotchoubey and Lang 2011, p. 430).
In an extended GW, lower level consciousness, like the pain perception in the earlier example, is broadcasted by the feeling network and can reach the conscious spotlight together with higher level cognitive representations; one does not exclude the other.
Several experimental results indicate that a stimulus that is subliminal
for the cognitive network may be supraliminal for the feeling network. For
example, the picture of a spider is presented for a brief time or masked by
another stimulus. The subjects do not form a visual representation of the
spider, and therefore cannot report visual features of the presented stim-
uli (Siegel and Weinberger 2009). However, this presentation can elicit a
supraliminal effect on the feeling network, for example, the subject feel-
ing fear, developing fearful behaviors and forming a memory of the event,
besides an increase in skin conductance and other unconscious effects.
Current paradigms based on a narrow view of GWT have led researchers
to classify these feelings as "unconscious" and the respective memory as being of the "implicit" kind (Yang et al. 2011). An extended GWT approach would help to revise these assumptions, reserving the classification "unconscious" for those forms which are really not conscious (e.g., pre-motor as contrasted to parietal cortical activations; see Desmurget et al. 2009).
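The dissociation can be phrased as two independent thresholds, one per broadcasting network. In the following sketch (the threshold values and signal strengths are illustrative assumptions only), a masked spider picture yields a conscious feeling without a reportable percept:

    # Illustrative two-threshold reading of masking experiments in an
    # extended GWT: a stimulus may stay below the cognitive network's
    # threshold (no reportable visual representation) while exceeding
    # the feeling network's threshold (fear, fearful behavior, raised
    # skin conductance).

    COGNITIVE_THRESHOLD = 0.8  # hypothetical
    FEELING_THRESHOLD = 0.4    # hypothetical

    def classify(cognitive_signal, feeling_signal):
        seen = cognitive_signal >= COGNITIVE_THRESHOLD
        felt = feeling_signal >= FEELING_THRESHOLD
        if seen and felt:
            return "reportable percept with conscious feeling"
        if felt:
            return "conscious feeling without a reportable percept"
        if seen:
            return "reportable percept without a feeling match"
        return "non-conscious"

    # Masked spider picture: weak visual evidence, strong affective charge.
    print(classify(cognitive_signal=0.3, feeling_signal=0.7))
    # -> conscious feeling without a reportable percept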

10.7 Concluding remarks


Consciousness is here conceived as a fundamental aspect of reality, not
separable from and not reducible to other aspects (namely, the physical-
non-mental and the mental-unconscious). Although fundamental, consciousness is considered not to be primitive: it primarily exists as a potentiality to be actualized in the evolution of the universe, depending on the operation of mechanisms such as those found in the human brain. TAM assumes the existence of mental forms embedded in a potential state in nature, focusing on the actualization processes by which these forms compose organized systems such as living beings, and manifest them-
selves in conscious episodes experienced by self-organizing individuals.
The necessity of a new, pluralistic framework for the understanding of
brain and mind has been stressed by philosophers of science (Horst 2007)
and consciousness theorists (Nunn 2007). In this context, TAM provides
an acceptable view of the complexity of material, informational, and
conscious aspects. They are considered to be at the same time different
and irreducible aspects of the same underlying reality.
TAM has epistemological implications, leading to a Post-Kantian con-
cept of the relation between the natural world and the world as we know
it (phenomenal). The phenomenal world is conceived as made of the same stuff as the natural one, but framed from a particular perspective. TAM arguably solves many of the troublesome epistemological and ontological problems that bedevil the study of the brain, the mind, and the world,
and also opens a new avenue of dialogue with philosophical and religious
traditions.
TAM's framework is close to the philosophy of Hegel, the first philosopher to elaborate on the concept of consciousness (according to Dove 2006). Hegel's system, described in detail in his Encyclopedia of the Philosophical Sciences, is composed of three aspects of reality: Idea, Nature, and Spirit. The world of Ideas contains all possibilities of effective development of reality. The Ideas express themselves in Nature, but in a very limited way. In his philosophy of nature (a piece of German romanticism that shares similar views with other contemporary authors, such as Goethe, Fichte, and Schelling), Nature is pregnant with Ideas. However, full expression of these Ideas cannot occur in Nature itself; their development requires Nature to be negated as a finite reality and resumed as human culture (the Spirit). Only for human consciousness (for Hegel, particularly the German culture of his time) would the initial Ideas reveal their full meaning. In spite of the explicit ethnocentrism of this concep-
tion, Fleischmann (1968) convincingly argued that it is compatible with
current interdisciplinary scientific worldviews.
Charles Sanders Peirce's semeiotic triad of Firstness, Secondness, and Thirdness can be interpreted as a version of Hegel's Idealism. In the
TAM approach, Firstness is the domain of potentiality, Secondness is the
domain of individual, discrete determinations of being, and Thirdness is
the domain of habits, conceived as projections of continuity in time and
space, obeying the laws of nature, and moving towards self-established
goals. Instead of a dialectical process mediated by the logical operation of
negation, in Peirce we find a semeiotic process of potentialities actualized
as signs possibly converging to the truth in the long run.
TAM gives a twist to the Hegelian major triad inherited by Peirce:
an inversion of order regarding the first two aspects of the triad, from
Mind-Nature to Nature-Mind, as originally proposed by Karl Marx. In
this sense, it points towards a new, fallible Dialectics of Nature based on
scientific developments and emphasizing dynamic interactions instead
of contradictions as the modus operandi of natural processes. The Proto-
Panpsychism of TAM suggests a re-conceptualization of natural sciences,
pointing to the existence of potentialities present in Nature. These poten-
tialities are like seeds that in adequate conditions develop into form, infor-
mation, and consciousness. If assumed by scientists, this ontology would
contribute to the "re-enchantment of the world" proposed by Prigogine
and Stengers (1979).
In the brain sciences, TAM leads to a richer view of the ionic, molec-
ular, intra-, and inter-cellular processes such as electrical currents and
waves, binding of transmitters and receptors, depolarization of membranes, and axon potentials (for a recent review of intercellular calcium waves in the brain and other parts of the body, see Leybaert and Sanderson 2012). These processes are conceptualized as being physical and
mental at the same time; for instance, discrete neuron firings can be
related to cognitive computations forming representations, while contin-
uous ionic waves can be related to the experience of affective states and
processes.
Mentality and consciousness are conceived as potentialities embed-
ded in natural processes that become actualized (or emerge; see
Vimal, this volume, Chapter 5) in the same space and time of physical
processes observed in the body. In current brain science, there is little
belief in a single locus of brain-mind communication (like the Carte-
sian pineal gland) or instantiation of consciousness (the homunculus
metaphor; Crick and Koch 2003), but there is still a strong belief that
some parts of the brain could turn physical signals into conscious experi-
ences, like for instance the prefrontal cortex generating conscious thought
(as apparently implied by Del Cul et al. 2007, 2009) and the insula or
the amygdala generating conscious feelings. This belief falls into what Dennett called "Cartesian Materialism", the view that "there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of presentation" (Dennett 1991, p. 107). However, contrary to the localizationist (Bechtel, in press) assumptions of Cartesian Materialists, the evidence is that each part of the
brain may serve multiple cognitive and affective conscious functions, and
conscious functions may be carried by different brain structures. Large
frontal damage, even when active language is destroyed, does not leave the patient unconscious, and feelings of, for example, empathy or worrying may be abolished by right frontal damage.
There is a large database of localization of cognitive and affective func-
tions in the brain, beginning with motor acts and language, showing that
the location of a lesion suggests the function that is impaired or modified: for example, right frontal lesions make people unconcerned; occipital lesions make them blind; left temporo-parietal lesions destroy speech understanding; and finger counting and number handling are lost after a small left parietal lesion (see Mayer et al. 1999). Knowledge of
the location of a given activity is a good heuristic tool for medical inter-
ventions and electromagnetic stimulation, but does not afford an iden-
tification of the mechanisms of consciousness. Beyond this mapping,
conscious functions can be identified by the kind of brain activity, for
example, distinct patterns of frequency, amplitude and phase modula-
tion. For instance, the difference between feelings of pain and pleasure
could be found by means of an analysis of brain electrical waveforms


and microstates (see Lehmann, this volume, Chapter 6), not by the brain
locations of the activities alone, since these can be the same (see Leknes
and Tracey 2008).
Consciousness mechanisms have recently been related to active net-
works (He and Raichle 2009), general brain connectivity (Rosanova et
al. 2012) and specific combinations of brain rhythms (Boly et al. 2012).
TAM points to a similar theoretical framework: the brain correspon-
dents of conscious processes are conceived as activities involving large
neuronal and glial networks, their interactions with the whole living body
by means of neural connections, blood flow, and cerebrospinal fluid, and, last but not least, the interactions of the living individual with the environment. The latter activates and modulates brain circuits according
to the features of the experience; then the brain evaluates the content
of received information and coordinates actions of the individual in the
environment. There are operational difficulties for the inclusion of such
a broad variety of explanatory factors, co-existing spatiotemporal scales
and stages of processing in the scope of brain sciences, but TAM can be
of help for the interested scientist, pointing to a theoretical framework
to be used to interpret experimental data, relating them to the context
where and when they were obtained.
With respect to philosophical psychology, although not a phenomenological approach, TAM is compatible with the existential interpre-
tation of Husserlian Phenomenology by Martin Heidegger and Maurice
Merleau-Ponty, and current views of embedded and embodied mentality
in the Cognitive Sciences. It also suggests new interpretations of depth
psychologies (some of these were discussed by Nunn 2007).
On the religious side, TAM's world picture is close to the symbolic Tree of Life from the Kabbalah, some branches of Indian philosophy, as well as the Christian concept of the Holy Trinity, among other possibilities. There is also a relation with Spinoza's philosophy, and Damasio's approximation of this philosophy to Cognitive and Affective Neuroscience. TAM would entail a Protopantheist conception, in the sense that God is not the Creator, but the destiny of the universe (i.e., the potential perfection of forms and the most eminent object of desire, like the First Mover in Aristotle's Metaphysics, according to the interpretation of Aubenque
1962). In the context of the Christian doctrine of the Holy Trinity, the
following approximations would be compatible with TAM:
1. The Father is the symbol of a potential unity of all that exists;
2. The Son is any symbolic carrier of the "word" (mental form) about
the Father, making it accessible to all individuals to know about that
potentiality;
3. The Holy Spirit is the actual God, the actualization of unity for
a community of individuals who have faith (strong feelings) about it.
When individuals pray, they act towards the unity. This move can have
effects on their brains/bodies, for example, helping to heal a disease.
TAM provides an adequate framework for ethical discussions of recent
advances in the fields of biotechnology, synthetic biology, personalized
medicine, multi-scale self-organizing systems, and machine conscious-
ness. Scientific and technological progress in these fields raises concerns
about the possibility and limits of human control of the combination of
natural forms. For TAM, these possibilities are conceived as an evolu-
tionary step in the process of actualization of forms that characterize the
evolution of the universe. Therefore, there would be no sufficient a priori
reason to veto these scientific/technological projects; on the contrary, an
adequate focus would be to discuss benefits and risks.
Considering all the above possibilities, TAM is likely to be of interest
for a wide range of theoreticians who look for an integrative view of
consciousness as the unity of mind, brain, and the world.

REFERENCES
Aleksander I. (2005). The World in My Mind, My Mind in the World: Five Steps to Consciousness. Exeter: Imprint Academic.
Aleksander I. (2007). Machine Consciousness. In Velmans M. and Schneider S. (eds.) The Blackwell Companion to Consciousness. Malden, MA: Blackwell, pp. 87–98.
Aleksander I. and Morton H. B. (2011). Informational minds: From Aristotle to laptops (book extract). Int J Mach Consciousness 3(2):383–397.
Almada L. F., Pereira Jr. A., and Carrara-Augustenborg C. (2013). What the affective neuroscience means for a science of consciousness. Mens Sana Monographs 11(1):253–273.
Aristotle (2012). The Complete Aristotle. Adelaide, Australia: Feedbooks. URL: www.feedbooks.com/book/4960/the-complete-aristotle.
Aristotle (1953). La Métaphysique. Trans. and comments Tricot J. Paris: Vrin.
Aubenque P. (1962). Le Problème de l'Être chez Aristote. Paris: PUF.
Baars B. (1988). A Cognitive Theory of Consciousness. New York: Cambridge University Press.
Baars B. (1997). In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press.
Baars B. and Franklin S. (2003). How conscious experience and working memory interact. Trends Cogn Sci 7(4):166–172.
Baldwin J. M. (1896). Consciousness and evolution. Psychol Rev 3:300–309. URL: www.brocku.ca/MeadProject/Baldwin/Baldwin_1896_b.html.
Bateson G. (1979). Mind and Nature: A Necessary Unity. New York: Dutton.
Bechtel W. P. (in press). The epistemology of evidence in cognitive neuroscience. In Skipper Jr. R., Allen C., Ankeny R. A., Craver C. F., Darden L., et al. (eds.) Philosophy and the Life Sciences: A Reader. Cambridge, MA: MIT Press.
Betti A. (2011). Kazimierz Twardowski. In Zalta E. N. (ed.) The Stanford Encyclopedia of Philosophy, Summer 2011 Edn. http://plato.stanford.edu/archives/sum2011/entries/twardowski.
Block N. (1997). On a confusion about a function of consciousness. In Block N., Flanagan O., and Güzeldere G. (eds.) The Nature of Consciousness. Cambridge, MA: MIT Press.
Block N. (2009). Comparing the major theories of consciousness. In Gazzaniga M. (ed.) The Cognitive Neurosciences IV. Cambridge, MA: MIT Press.
Boltzmann L. (1872/1965). Further studies in the thermal equilibrium of gas molecules. In Brush S. (ed.) Kinetic Theory, Vol. 1. Oxford/London: Pergamon Press, pp. 88–175.
Boltzmann L. (1896/1964). Lectures on Gas Theory. Trans. by Brush S. Berkeley: University of California Press.
Boly M., Moran R., Murphy M., Boveroux P., Bruno M. A., Noirhomme Q., et al. (2012). Connectivity changes underlying spectral EEG changes during propofol-induced loss of consciousness. J Neurosci 32(20):7082–7090.
Broadbent D. E. (1958). Perception and Communication. London: Pergamon.
Carrara-Augustenborg C. and Pereira Jr. A. (2012). Brain endogenous feedback and degrees of consciousness. In Cavanna A. E. and Nani A. (eds.) Consciousness: States, Mechanisms and Disorders. New York: Nova Science Publishers, pp. 33–53.
Chaitin G. J. (1966). On the length of programs for computing finite binary sequences. J ACM 13(4):547–569.
Chalmers D. (1996). The Conscious Mind. New York: Oxford University Press.
Churchland P. (2012). Plato's Camera: How the Physical Brain Captures a Landscape of Abstract Universals. Cambridge, MA: MIT Press.
Clark A. (1997). Being There: Putting Brain, Body, and World Together Again. Cambridge, MA: MIT Press.
Cowen L. and Lindquist S. (2005). Hsp90 potentiates the rapid evolution of new traits: Drug resistance in diverse fungi. Science 309(5744):2185–2189.
Crane T. (2000). The origins of qualia. In Crane T. and Patterson S. (eds.) The History of the Mind-Body Problem. London: Routledge, pp. 169–194.
Crick F. and Koch C. (2003). A framework for consciousness. Nat Neurosci 6(2):119–126.
Damasio A. R. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. New York: Grosset/Putnam.
Damasio A. (2000). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt.
Davidson D. (1980). Essays on Actions and Events. Oxford University Press.
Del Cul A., Baillet S., and Dehaene S. (2007). Brain dynamics underlying the nonlinear threshold for access to consciousness. PLoS Biol 5(10):e260.
Del Cul A., Dehaene S., Reyes P., Bravo E., and Slachevsky A. (2009). Causal role of prefrontal cortex in the threshold for access to consciousness. Brain 132(9):2531–2540.
Dennett D. (1991). Consciousness Explained. Boston, MA: Little, Brown and Company.
Desmurget M., Reilly K. T., Richard N., Szathmari A., Mottolese C., and Sirigu A. (2009). Movement intention after parietal cortex stimulation in humans. Science 324(5928):811–813.
Dityatev A. and Rusakov D. A. (2011). Molecular signals of plasticity at the tetrapartite synapse. Curr Opin Neurobiol 21:1–7.
Dove K. R. (2006). Logic and theory in Aristotle, Hegel, stoicism. Philos Forum 37(3):265–320.
Dretske F. (1981). Knowledge and the Flow of Information. Cambridge, MA: MIT Press.
Dulany D. E. (2011). What should be the role of conscious states and brain states in theories of mental activity? Mens Sana Monographs 9(1):93–112.
Edelman G. M. (1987). Neural Darwinism. New York: Basic Books.
Edelman G. M. (1989). The Remembered Present: A Biological Theory of Consciousness. New York: Basic Books.
Edelman G. M., Gally J. A., and Baars B. J. (2011). Biology of consciousness. Front Psychology 2:4.
Fleischmann E. (1968). La Science Universelle ou La Logique de Hegel. Paris: Plon.
Fehr E. and Rockenbach B. (2004). Human altruism: Economic, neural, and evolutionary perspectives. Curr Opin Neurobiol 14(6):784–790.
Fell J. (2004). Identifying neural correlates of consciousness: The state space approach. Consciousness Cogn 13:709–729.
Gross C. G. (1995). Aristotle on the brain. The Neuroscientist 1(4):245–250.
Harnad S. and Scherzer P. (2008). First, scale up to the robotic Turing test, then worry about feeling. Artif Intell Med 44(2):83–89.
He B. J. and Raichle M. E. (2009). The fMRI signal, slow cortical potential and consciousness. Trends Cogn Sci 13:302–309.
Heidegger M. (1926/1993). Grundbegriffe der Antiken Philosophie. In Blust F.-K. (ed.) Gesamtausgabe, Abteilung II, Vol. 22. Frankfurt a. M.: Klostermann.
Horst S. (2007). Beyond Reduction: Philosophy of Mind and Post-Reductionist Philosophy of Science. New York: Oxford University Press.
Houser N. (1983). Peirce's general taxonomy of consciousness. T C S Peirce Soc 19(4):331–359.
Husserl E. (1913). Ideas: General Introduction to Pure Phenomenology. Trans. Kersten F. Dordrecht: Kluwer Academic, 1983.
Iliff J. J., Wang M., Liao Y., Plogg B. A., Peng W., Gundersen G. A., et al. (2012). A paravascular pathway facilitates CSF flow through the brain parenchyma and the clearance of interstitial solutes, including amyloid β. Sci Transl Med 4(147):147ra111.
Kim J. (1998). Mind in a Physical World: An Essay on the Mind-Body Problem and Mental Causation. Cambridge, MA: MIT Press.
Köhler W. (1965). Unsolved problems in figural aftereffects. Psychological Rec 15:63–83.
Kotchoubey B. and Lang S. (2011). Intuitive versus theory-based assessment of consciousness: The problem of low-level consciousness. Clin Neurophysiol 122(3):430–432.
LeDoux J. (1996). The Emotional Brain. New York: Simon & Schuster.
Leybaert L. and Sanderson M. J. (2012). Intercellular Ca2+ waves: Mechanisms and function. Physiol Rev 92(3):1359–1392.
Leknes S. and Tracey I. (2008). A common neurobiology for pain and pleasure. Nat Rev Neurosci 9:314–320.
Mayer E., Martory M. D., Pegna A. J., Landis T., Delavelle J., and Annoni J. M. (1999). A pure case of Gerstmann syndrome with a subangular lesion. Brain 122(6):1107–1120.
McFadden J. (2002). The conscious electromagnetic information (CEMI) field theory: The hard problem made easy? J Consciousness Stud 9(4):23–50.
Millikan R. (1984). Language, Thought and Other Biological Categories. Cambridge, MA: MIT Press.
Nagel T. (1974). What is it like to be a bat? Philos Rev 83(4):435–450.
Newman J. B. and Baars B. J. (1993). A neural attentional model for access to consciousness: A global workspace perspective. Concept Neurosci 4(2):255–290.
Nixon G. (2010). Hollows of experience. Journal of Consciousness Exploration and Research 1(3):234–288.
Nunn C. (2007). From Neurons to Notions: Brains, Minds and Meaning. Edinburgh: Floris Books.
Nunn C. (2010). Who Was Mrs Willett? Landscapes and Dynamics of the Mind. London: Imprint Academic.
Panksepp J. (1998). Affective Neuroscience: The Foundations of Human and Animal Emotions. New York: Oxford University Press.
Panksepp J. (2005). Affective consciousness: Core emotional feelings in animals and humans. Consciousness Cogn 14:30–80.
Panksepp J. (2007). Affective consciousness. In Velmans M. and Schneider S. (eds.) The Blackwell Companion to Consciousness. Malden, MA: Blackwell.
Peirce C. S. (1931–1958). Collected Papers of Charles Sanders Peirce. Hartshorne C. and Weiss P. (eds.) Vols. 1–6; Burks A. (ed.) Vols. 7–8. Cambridge, MA: Belknap Press.
Pereira Jr. A. (1997). The concept of representation in cognitive neuroscience. In Riegler A. and Peschl M. (eds.) Does Representation Need Reality? Proceedings of the International Conference New Trends in Cognitive Science. Vienna: Austrian Society for Cognitive Science, pp. 49–56.
Pereira Jr. A. and Rocha A. (2000). Auto-organização físico-biológica e a origem da consciência. In Gonzales M. E. and D'Ottaviano I. (eds.) Auto-Organização: Estudos Interdisciplinares. Campinas, Brasil: CLE/UNICAMP, pp. 98–115.
Pereira Jr. A. (2003). The quantum mind/classical brain problem. Neuroquantology 1:94–118.
Pereira Jr. A. and Ricke H. (2009). What is consciousness? Towards a preliminary definition. J Consciousness Stud 16:28–45.
Pereira Jr. A. and Furlan F. (2010). Astrocytes and human cognition: Modeling information integration and modulation of neuronal activity. Prog Neurobiol 92(3):405–420.
Pereira Jr. A. and Almada L. (2011). Conceptual spaces and consciousness research: Integrating cognitive and affective processes. Int J Mach Consciousness 3(1):1–17.
Pereira Jr. A. (2012). Perceptual information integration: Hypothetical role of astrocytes. Cogn Computation 4(1):51–62.
Pockett S. (2000). The Nature of Consciousness: A Hypothesis. San Jose, CA: Writers Club Press.
Prigogine I. and Stengers I. (1979). La Nouvelle Alliance: Métamorphose de la Science. Paris: Gallimard.
Robertson J. M. (2002). The astrocentric hypothesis: Proposed role of astrocytes in consciousness and memory formation. J Physiology-Paris 96:251–255.
Rosenthal D. (2002). Explaining consciousness. In Chalmers D. (ed.) Philosophy of Mind: Classical and Contemporary Readings. Oxford University Press, pp. 406–421.
Rosanova M., Gosseries O., Casarotto S., Boly M., Casali A. G., Bruno M. A., et al. (2012). Recovery of cortical effective connectivity and recovery of consciousness in vegetative patients. Brain 135(4):1308–1320.
Rosenthal D. (2008). Consciousness and its function. Neuropsychologia 46:829–840.
Schutter D. J. and van Honk J. (2004). Extending the global workspace theory to emotion: Phenomenality without access. Conscious Cogn 13(3):539–549.
Siegel P. and Weinberger J. (2009). Very brief exposure: The effects of unreportable stimuli on fearful behavior. Conscious Cogn 18(4):939–951.
Skyrms B. (2008). Signals: Evolution, Learning & Information. URL: www.lps.uci.edu/home/fac-staff/faculty/skyrms/signals.pdf (accessed March 22, 2013).
Skyrms B. (2010). Signals: Evolution, Learning, and Information. New York: Oxford University Press.
Spinoza B. (1677). Ethics. Trans. Elwes R. H. M. MTSU Philosophy WebWorks Hypertext Edition. URL: http://frank.mtsu.edu/~rbombard/RB/Spinoza/ethica-front.html (accessed March 1, 2013).
Thagard P. (2006). Hot Thought: Mechanisms and Applications of Emotional Cognition. Cambridge, MA: MIT Press.
Tononi G. (2004). An information integration theory of consciousness. BMC Neurosci 5:42.
Tononi G. (2005). Consciousness, information integration, and the brain. Prog Brain Res 150:109–126.
Tsakiris M., Hesse M. D., Boy C., Haggard P., and Fink G. R. (2007). Neural signatures of body ownership: A sensory network for bodily self-consciousness. Cereb Cortex 17(10):2235–2244.
Velmans M. (1990). Consciousness, brain, and the physical world. Philos Psychol 3:77–99.
Velmans M. (2008). Reflexive monism. J Consciousness Stud 15(2):5–50.
Velmans M. (2009). Understanding Consciousness, 2nd Edn. London: Routledge.
Vimal R. L. P. (2008). Proto-experiences and subjective experiences: Classical and quantum concepts. J Integr Neurosci 7(1):49–73.
Vimal R. L. P. (2010). Matching and selection of a specific subjective experience: Conjugate matching and subjective experience. J Integr Neurosci 9(2):193–251.
Volkow N. D., Wang G. J., Ma Y., Fowler J. S., Wong C., Ding Y. S., et al. (2005). Activation of orbital and medial prefrontal cortex by methylphenidate in cocaine-addicted subjects but not in controls: Relevance to addiction. J Neurosci 25(15):3932–3939.
Von Neumann J. and Morgenstern O. (1944). Theory of Games and Economic Behavior. Princeton University Press.
Walach H., Gander Ferrari M.-L., Sauer S., and Kohls N. (2012). Mind-body practices in integrative medicine. Religions 3:50–81.
Weaver W. and Shannon C. E. (1949). The Mathematical Theory of Communication. Urbana: University of Illinois Press.
Wicken J. S. (1987). Entropy and information: Suggestions for common language. Philos Sci 54(2):176–193.
Yang J., Xu X., Du X., Shi C., and Fang F. (2011). Effects of unconscious processing on implicit memory for fearful faces. PLoS One 6(2):e14641.
Zurek W. H. (2003). Decoherence and the transition from quantum to classical – revisited. Los Alamos Science 27. URL: arXiv:quant-ph/0306072v1.