
The dynamic coordination of cortical activities

Presented to the Tamagawa-Riken Dynamic Brain Forum, Japan, March 2007

W. A. Phillips
University of Stirling, Scotland
and
Frankfurt Institute of Advanced Studies, Germany

Neural systems must be reliable but flexible. The contrast between these two
requirements is reflected by two frequently opposed perspectives that have arisen from
the neuroscience of the last century. First, there is the Hubel and Wiesel tradition that
sees sensory features and semantic attributes as being signalled by single cells, or
small local populations of cells. These codes are highly reliable. They do not change
from moment to moment, and do not depend upon what is going on elsewhere. Within
this conception feature detection and object recognition are achieved through a fixed
or slowly adapting feedforward projection through a hierarchy of cortical areas, and
this provides the basis for many higher cognitive capabilities.
In contrast, the second perspective emphasizes flexibility. From the early 1980s
onwards, there has been plenty of evidence that, even in sensory systems, activity is
influenced by high-level cognitive state variables such as attention, and by an ever-
changing stimulus context that reaches far beyond the classical receptive field. This
has led many to conclude that the simple Hubel and Wiesel tradition is no longer
viable, and that information is conveyed only by the rich non-linear dynamics of very
large and ever-changing populations of cells.
Our work on cognitive coordination combines these two views. It emphasizes
dynamic contextual interactions, but claims that, instead of robbing the local signals of
their meanings, they make those local signals even more reliable and relevant. Its
central hypothesis is that there are two classes of synaptic interaction: those that
specify the semantics of the signals transmitted, and those that coordinate these
computations so as to achieve current goals in current circumstances. They do this
through the two fundamental processes of contextual disambiguation and dynamic
grouping. These two processes amplify activity relevant to the current task and
stimulus context, group activity into coherent subsets, and combat noise by context-
sensitive redundancy. They are crucial to Gestalt perception, selective attention,
working memory, and strategic coordination.
Contextual disambiguation and dynamic grouping require many locally specific
coordinating interactions between all the detailed processes that compute the cognitive
contents. This implies that coordinating interactions must occur within and between
cortical regions, because it is only there that the detailed cognitive contents are available. Our
working assumption is that there is a special class of synaptic interactions that
selectively amplifies and synchronises relevant activities. They are predominantly
mediated by long-range lateral and descending connections and influence post-
synaptic activity via a combination of NMDA receptors and GABA-ergic
interneurons. They do not themselves provide primary drive to post-synaptic cells, but
modulate the effects of those that do. We call them coordinating interactions to
distinguish them from the diffuse effects of the classical neuromodulators.
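To make the distinction between driving and coordinating inputs concrete, here is a minimal sketch in Python of one way a modulatory interaction could be expressed. The activation function below is an illustrative assumption of my own, not the specific function used in our models; it simply captures the two properties stated above, that context rescales the gain on the driving input but cannot by itself activate the unit.

import numpy as np

def coordinated_activation(r, c, k=1.0):
    """Illustrative drive-with-modulation rule (an assumption, not the
    exact function used in the Coherent Infomax work): the contextual
    input c multiplicatively scales the gain on the driving input r,
    but when r == 0 the output is 0, so context alone cannot drive
    the unit."""
    gain = 1.0 + np.tanh(k * r * c)   # >1 when r and c agree in sign, <1 otherwise
    return r * gain

print(coordinated_activation(r=1.0, c=0.0))   # baseline response to the driving input alone
print(coordinated_activation(r=1.0, c=2.0))   # amplified by a consistent context
print(coordinated_activation(r=0.0, c=2.0))   # 0.0: no primary drive, so no output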

These broad claims of close relations between particular synaptic interactions and
particular cognitive functions are based upon many studies from many labs. They
show detailed relations within and between psychophysical functions, neurobiology,
and psychopathology. There is no time to review that evidence here, but much of it is
reviewed in Phillips and Singer, Behavioral and Brain Sciences, 1997, 20:657-722, and
Phillips and Silverstein, Behavioral and Brain Sciences, 2003, 26:65-137. Here I
outline some recent studies from Stirling and Frankfurt.
This work builds upon Steve Silverstein’s original insight of a close relation
between cognitive coordination and cognitive disorganization in schizophrenia. His
argument for this leapt to my attention because, in addition to noting that the cognitive
functions that are impaired are exactly those for which coordination is most crucial, he
also noted that a new view of the pathophysiology of schizophrenia implicated exactly
those mechanisms postulated to be crucial to coordination, i.e. synaptic interactions
mediated by NMDA receptors. Evidence for that view continues to get even stronger,
and I expect a lot more to come, including a better understanding of the consequences
for neuronal dynamics and cognition of the various receptor sub-types that are now
known to occur in cerebral cortex, such as the 2A and 2B sub-types of NMDA receptor.
The first study that I will report here uses a new version of the visual contour
integration paradigm to measure sensitivity of Gestalt perception to temporal cues.
Several earlier studies suggest that temporal resolution is reduced in schizophrenia.
This may reflect a core underlying deficit with various cognitive consequences,
including impairments to Gestalt perception, attention, and working memory. There is
now growing evidence that dynamic aspects of early vision may provide an
endophenotype for cognitive disorganisation in disorders such as schizophrenia. If so,
a better understanding of those dynamic visual processes and their pathologies will
greatly facilitate the search for the distal genetic origins of cognitive disorganisation,
and aid the design of better pharmacotherapies. At Stirling we have therefore
developed a simple, sensitive, and specific psychophysical test of sensitivity to
temporal cues. It has been used extensively to study individual differences within and
between groups of students and schizophrenia patients. EEG and MEG studies using
this paradigm are now just beginning in Frankfurt.
In this paradigm two pseudo-random arrays of Gabor patches are displayed, one
to the left and one to the right of fixation. Within one array a sub-set of the elements
forms a figure, such as a continuous contour, which can be reliably detected only when
the onset of those elements is not synchronized with that of the background elements. For earlier work of
this sort see Hancock and Phillips, Vision Research 2004:2285-99. Using our new
version of the paradigm we found clear evidence on several issues. First, for most
subjects, segregation required an onset asynchrony of 20 – 40 ms. Second, detection
was no better when the figure was presented first, and thus by itself, than when the
background elements were presented first, even though in the latter case the figure
could not be detected in either of the two successive displays alone. This finding is
counter-intuitive. It is evidence against the hypothesis that salience is signalled by the
latency of cortical responses, and contrasts with inferences drawn from studies of
phase relations in rapidly cycling displays. Third, asynchrony segregated subsets of
randomly oriented elements as effectively as those aligned with the underlying
contour. Fourth, asynchronous onsets aligned with the contour could be discriminated
from those lying on the contour but not aligned with it. Fifth, though figure-ground
segregation depended upon asynchrony of the transient neural responses to abrupt
onsets, transient and sustained responses were not processed independently. Finally,
there were wide individual differences in sensitivity to these temporal cues, with
schizophrenia being associated with greatly reduced sensitivity. In the Frankfurt
studies we are now looking for electrophysiological correlates of these psychophysical
effects and their impairment in psychosis.
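To make the structure of the paradigm concrete, the following sketch parameterises one such display. The element counts, spatial range, and the particular asynchrony value are illustrative assumptions, not the values used in the actual experiments; they only show how a shared onset offset separates the figure sub-set from the background.

import numpy as np

rng = np.random.default_rng(0)

N_BACKGROUND = 60        # pseudo-random background Gabor elements (assumed count)
N_FIGURE = 8             # elements forming the embedded contour (assumed count)
ASYNCHRONY_MS = 30.0     # figure/background onset asynchrony, within the 20-40 ms range reported

def make_array(figure_first=False):
    """Return one array of Gabor-element parameters: position, orientation,
    and onset time. All figure elements share a common onset that is offset
    from the background onset by ASYNCHRONY_MS."""
    n = N_BACKGROUND + N_FIGURE
    pos = rng.uniform(-5.0, 5.0, size=(n, 2))      # positions (deg) relative to array centre
    ori = rng.uniform(0.0, 180.0, size=n)          # orientations (deg); figure elements may
                                                   # later be aligned to the contour or left random
    onset = np.zeros(n)
    if figure_first:
        onset[:N_BACKGROUND] = ASYNCHRONY_MS       # background follows the figure
    else:
        onset[N_BACKGROUND:] = ASYNCHRONY_MS       # figure follows the background
    return pos, ori, onset

positions, orientations, onsets = make_array(figure_first=False)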
I now move on to studies of contextual modulation, using target-surround contrast
effects. One reason for doing so is to report data where impaired context-sensitivity in
schizophrenia leads to better performance, as this could not be explained by any notion
of a ‘general’ cognitive deficit. If patients have low context-sensitivity then they
should have better than normal performance in conditions where context is misleading.
We have tested this prediction using the effects of centre-surround contrast on size
perception. It has been known for more than 100 years that being surrounded by bigger
things reduces perceived size, and vice versa. In vision research this is known as the
Ebbinghaus or Titchener illusion. We therefore predicted that patients with reduced
sensitivity to context would perform better than controls when the surrounding context
is misleading, and that is what we found. Similar supra-normal
performance by schizophrenia patients has also been reported by Steve Dakin and
others (Current Biology, 2005, Vol. 15, No 20, R822-824) using a different centre-
surround contrast illusion.
EEG studies in Frankfurt have also found relevant relations between Gestalt
perception, schizophrenia, and EEG rhythms. One study by Uhlhaas et al. (Journal of
Neuroscience, 2006, 26:8168-8175) used Mooney faces, which are binary black-white
images of faces that were designed initially to study the development of higher-level
face recognition capabilities, and which have since been much used to study
neuropsychological deficits in face perception. They found that long-range phase
synchrony in the Beta-band was associated with successful face recognition by normal
subjects. Schizophrenia patients were less able to categorize these face images
correctly, and had significantly less phase-synchrony in the Beta-band. This was
therefore interpreted as evidence that long-range coordination is dysfunctional in
schizophrenia.
Though there is not time to discuss it in depth here, the concept of cognitive
coordination has been formalised in precise neurocomputational terms using the theory
of Coherent Infomax, as described by Phillips et al, Network: Computation in Neural
Systems, 1995, 6:225-246, and by Kay et al., Neural Networks, 1998, 11:117-140. That
theory uses concepts of three-way mutual information and conditional mutual
information to show how it is possible in principle for contextual inputs to have large
effects on the transmission of information about the primary driving inputs, while
transmitting little or no information about themselves, thus influencing the
transmission of cognitive content, but without becoming confounded with it. That
formalisation includes the specification of an objective function, which describes the
signal processing work to be done. To meet that objective, a learning rule for
modifying the synaptic weights in a neural network was derived analytically by Jim
Kay. What most impressed us about the consequent learning rule is that, although it
was derived without any knowledge of the physiology of synaptic plasticity, it fits that
physiology well. The theory of Coherent Infomax thus shows how the contextual
guidance of local neuronal processing can give neural systems the flexibility that they
need, but without compromising their reliability.
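In outline, and hedging on the exact notation and weightings, which are given in the papers cited above, the decomposition can be written as follows, where Y is the output of a local processor, R its receptive-field (driving) input, and C its contextual-field input:

\[
H(Y) \;=\; I(Y;R;C) \;+\; I(Y;R\mid C) \;+\; I(Y;C\mid R) \;+\; H(Y\mid R,C)
\]
\[
F \;=\; \phi_{0}\, I(Y;R;C) \;+\; \phi_{1}\, I(Y;R\mid C) \;+\; \phi_{2}\, I(Y;C\mid R),
\qquad \phi_{0} \ge \phi_{1} \gg \phi_{2} \approx 0
\]

Here the three-way term I(Y;R;C) is the information shared by output, driving input, and context, and the condition on the weights is my shorthand for the structure described above: keeping the weight on I(Y;C|R) at or near zero ensures that context can strongly shape what is transmitted about R while little or no information about the context itself is transmitted.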
In summary, the research reported here, together with that from many other labs,
suggests the following. 1. There is a family of dynamic coordinating interactions
whose computational role, neural bases, and cognitive consequences can be
distinguished from those of the interactions that compute the cognitive contents
conveyed by neural signals. 2. They require interactions within and between cortical
regions that modulate spike rate and timing. Local circuits using NMDA receptors and
GABA-ergic interneurons do this. 3. Their consequences include processes of dynamic
grouping and contextual disambiguation that are relevant to all domains and levels of
cognition. 4. Their malfunctions can produce underlying impairments in both temporal
resolution and context-sensitivity which can lead to cognitive disorganization as seen
in several psychopathologies.

Acknowledgements: Many people have contributed to the development of the
conceptual framework on which this work is based. Those contributing to the new
results reported here include Peter Hancock, Gordon Mitchell, Laura Walton, Yvonne
Plenderleith, and Sivakumar Anandaciva in Stirling, and Peter Uhlhaas, Eugenio
Rodriguez, and Wolf Singer in Frankfurt.
