ABSTRACT

Connectivity patterns in biological brains exhibit many repeating motifs. This repetition mirrors inherent geometric regularities in the physical world. For example, stimuli that excite adjacent locations on the retina map to neurons that are similarly adjacent in the visual cortex. That way, neural connectivity can exploit geometric locality in the outside world by employing local connections in the brain. If such regularities could be discovered by methods that evolve artificial neural networks (ANNs), then they could be similarly exploited to solve problems that would otherwise require optimizing too many dimensions. This paper introduces such a method, called Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT), which evolves a novel generative encoding called connective Compositional Pattern Producing Networks (connective CPPNs) to discover geometric regularities in the task domain. Connective CPPNs encode connectivity patterns as concepts that are independent of the number of inputs or outputs, allowing functional large-scale neural networks to be evolved. In this paper, this approach is tested in a simple visual task for which it effectively discovers the correct underlying regularity, allowing the solution to both generalize and scale without loss of function to an ANN of over eight million connections.

Categories and Subject Descriptors: I.2.6 [Artificial Intelligence]: Learning—Concept Learning, Connectionism and neural nets; C.2.1 [Computer-Communication Networks]: Network Architecture and Design—Network Topology

General Terms: Experimentation, Algorithms

Keywords: Compositional Pattern Producing Networks, NEAT, HyperNEAT, large-scale artificial neural networks

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. GECCO'07, July 7–11, 2007, London, England, United Kingdom. Copyright 2007 ACM 978-1-59593-697-4/07/0007 ...$5.00.

1. INTRODUCTION

How did evolution, an unguided search process, discover structures as astronomically complex and sophisticated as the human brain? An intriguing possibility is that physical properties of the universe constrain the space of physical cognitive structures and the problems they encounter to make such discovery likely. In particular, the physical geometry of space provides a bias toward brain-like solutions.

The geometry of space is effectively three-dimensional Cartesian. A ubiquitous useful property that results from Cartesian geometry is the principle of locality. That is, what happens at one coordinate in space is often related to what happens in adjacent locations. This fact is exploited effectively in biological brains, wherein the physical world is often projected onto Cartesian arrays of neurons [5]. Just by virtue of the neurons that are related to nearby events in space also being near each other, it becomes likely that many relevant operations can be performed through local connectivity. Such connectivity is natural in a physical substrate in which longer distance requires more resources, greater accuracy, and better organization. That is, the properties of physical space inherently bias cognitive organization towards local connectivity, which happens to be useful for solving problems that are projected from the physical world. Thus it may not be simply through brute-force search that the human brain employs significant local connectivity [14, 24].

This hypothesis suggests that techniques for evolving artificial neural network (ANN) structure should implement a means for evolution to exploit the task geometry. Without such a capability, even if useful regularities are inherent in the task, they offer no advantage or accessible bias to the learning method. Thus the ability to exploit such structure may be critical for genuinely sophisticated cognitive apparatus to be discovered.

If large-scale ANNs that exploit geometric motifs are to be evolved with computers, a powerful and efficient encoding will be necessary. This paper introduces a novel generative encoding, Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT), for representing and evolving large-scale ANNs that exploit geometric regularities in the task domain. HyperNEAT employs the NEAT method [19] to evolve a novel abstraction of developmental encoding called connective Compositional Pattern Producing Networks (connective CPPNs), which compose simple canonical functions into networks that can efficiently represent complex connectivity patterns with regularities and symmetries.
This approach is tested in a variable-resolution visual task that requires discovering a key underlying regularity across the visual field. HyperNEAT not only discovers the regularity, but is able to use it to generalize and apply its solution at higher resolutions without losing performance. The conclusion is that connective CPPNs are a powerful new generative encoding for evolving large-scale ANNs.

2. BACKGROUND

This section provides an overview of CPPNs, which are capable of generating complex spatial patterns in Cartesian space, and then describes the NEAT method that is used to evolve them. These two topics are foundational to the novel approach introduced in this paper, in which CPPNs evolve patterns that are interpreted as ANN connectivities.

Figure 1: CPPN Encoding. (a) Mapping: the function f takes arguments x and y, which are coordinates in a two-dimensional space. When all the coordinates are drawn with an intensity corresponding to the output of f, the result is a spatial pattern, which can be viewed as a phenotype whose genotype is f. (b) Composition: the CPPN is a graph that determines which functions are connected. The connections are weighted such that the output of a function is multiplied by the weight of its outgoing connection.
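The genotype-to-phenotype mapping in Figure 1 can be sketched in a few lines of code. The particular composition here (a Gaussian of x multiplied by a sine of y) and all constants are illustrative choices, not taken from the paper; a real CPPN's composition is evolved rather than hand-written:

```python
import math

def gaussian(t):
    return math.exp(-t * t)

# A hand-built stand-in for an evolved CPPN: composing a symmetric
# function of x with a periodic function of y yields a spatial pattern
# that is bilaterally symmetric and vertically repeating.
def cppn(x, y):
    return gaussian(x) * math.sin(3.0 * y)

# "Draw" the phenotype by sampling f at every coordinate of a grid
# spanning [-1, 1] x [-1, 1].
resolution = 9
coords = [-1.0 + 2.0 * i / (resolution - 1) for i in range(resolution)]
pattern = [[cppn(x, y) for x in coords] for y in coords]

# Bilateral symmetry falls out of the Gaussian: f(-x, y) == f(x, y).
for row in pattern:
    assert row == row[::-1]
```

Evolving the composition itself, rather than hand-writing it as here, is what CPPN-NEAT (Section 2.2) automates.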
2.1 Compositional Pattern Producing Networks

In biological genetic encoding the mapping between genotype and phenotype is indirect. The phenotype typically contains orders of magnitude more structural components than the genotype contains genes. Thus, the only way to discover such high complexity may be through a mapping between genotype and phenotype that translates few dimensions into many, i.e. through an indirect encoding. One of the most promising areas of research in indirect encoding is developmental and generative encoding, which is motivated by biology [2, 3, 9]. In biological development, DNA maps to a mature phenotype through a process of growth that builds the phenotype over time. Development facilitates the reuse of genes because the same gene can be activated at any location and any time during the development process.

This observation has inspired an active field of research in generative and developmental systems [3, 4, 7, 9, 10, 12, 13, 20, 22]. The aim is to find an abstraction of natural development for a computer running an evolutionary algorithm, so that evolutionary computation can begin to discover complexity on a natural scale. Prior abstractions range from low-level cell chemistry simulations to high-level grammatical rewrite systems [20].

Compositional Pattern Producing Networks (CPPNs) are a novel abstraction of development that can represent sophisticated repeating patterns in Cartesian space [16, 17]. Unlike most generative and developmental encodings, CPPNs do not require an explicit simulation of growth or local interaction, yet still exhibit their essential features. The remainder of this section reviews CPPNs, which will be augmented in this paper to represent connectivity patterns and ANNs.

Consider the phenotype as a function of n dimensions, where n is the number of dimensions in physical space. For each coordinate in that space, its level of expression is an output of the function that encodes the phenotype. Figure 1a shows how a two-dimensional phenotype can be generated by a function of two parameters. A mathematical abstraction of each stage of development can then be represented explicitly inside this function.

Stanley [16, 17] showed how simple canonical functions can be composed to create networks that produce complex regularities and symmetries. Each component function creates a novel geometric coordinate frame within which other functions can reside. The main idea is that these simple canonical functions are abstractions of specific events in development, such as establishing bilateral symmetry (e.g. with a symmetric function such as a Gaussian) or the division of the body into discrete segments (e.g. with a periodic function such as a sine). Figure 1b shows how such a composition is represented as a network.

Such networks are called Compositional Pattern Producing Networks because they produce spatial patterns by composing basic functions. While CPPNs are similar to ANNs, they differ in their set of activation functions and how they are applied. Furthermore, they are an abstraction of development rather than of biological brains.

Through interactive evolution, Stanley [16, 17] showed that CPPNs can produce spatial patterns with important geometric motifs that are expected from generative and developmental encodings and seen in nature. Among the most important such motifs are symmetry (e.g. left-right symmetries in vertebrates), imperfect symmetry (e.g. right-handedness), repetition (e.g. receptive fields in the cortex [24]), and repetition with variation (e.g. cortical columns [8]). Figure 2 shows examples of several such important motifs produced through interactive evolution of CPPNs.

Figure 2: CPPN-generated Regularities. Spatial patterns exhibiting (a) bilateral symmetry, (b) imperfect symmetry, and (c) repetition with variation are depicted. These patterns demonstrate that CPPNs effectively encode fundamental regularities of several different types.

These patterns are generated by applying the right activation functions (e.g. symmetric functions for symmetry; periodic functions for repetition) in the right order in the network. The order of activations is an abstraction of the unfolding process of development.

2.2 CPPN-NEAT

Because NEAT was originally designed to evolve increasingly complex ANNs, it is naturally suited to doing the same with CPPNs, which are also represented as graphs. The NEAT method begins evolution with a population of small, simple networks and complexifies them over generations, leading to increasingly sophisticated solutions. While the NEAT method was originally developed to solve difficult control and sequential decision tasks, yielding strong performance in a number of domains [18, 19, 21], in this paper it is used to evolve CPPNs. This section briefly reviews the NEAT method; see Stanley and Miikkulainen [19, 21] for detailed descriptions of original NEAT.

NEAT is based on three key ideas. First, in order to allow network structures to increase in complexity over generations, a method is needed to keep track of which gene is which. Otherwise, it is not clear in later generations which individual is compatible with which, or how their genes should be combined to produce offspring. NEAT solves this problem by assigning a unique historical marking to every new piece of network structure that appears through a structural mutation. The historical marking is a number assigned to each gene corresponding to its order of appearance over the course of evolution. The numbers are inherited unchanged during crossover, allowing NEAT to perform crossover without the need for expensive topological analysis. That way, genomes of different organizations and sizes stay compatible throughout evolution.

Second, NEAT speciates the population, so that individuals compete primarily within their own niches instead of with the population at large. This way, topological innovations are protected and have time to optimize their structure before competing with other niches in the population. NEAT uses the historical markings on genes to determine to which species different individuals belong.

Third, NEAT begins with a uniform population of simple networks with no hidden nodes, differing only in their initial random weights. Speciation protects new innovations, allowing diverse topologies to gradually complexify over evolution. Thus, NEAT can start minimally and grow the necessary structure over generations. A similar process of gradually adding new genes has been confirmed in natural evolution [11, 23] and shown to improve adaptation [1]. Through complexification, high-level features can be established early in evolution and then elaborated and refined as new genes are added [11].

For these reasons, in this paper the NEAT method is used to evolve increasingly complex CPPNs. CPPN-generated patterns evolved with NEAT exhibit several essential motifs and properties of natural phenotypes [15]. If such properties could be transferred to evolved connectivity patterns, the representational power of CPPNs could potentially evolve large-scale ANNs, as explained in the next section.

3. HYPERNEAT

The spatial patterns in Section 2.1 present a challenge: how can such spatial patterns describe connectivity? This section explains how CPPN output can be effectively interpreted as a connectivity pattern rather than a spatial pattern. Furthermore, this novel interpretation allows neurons, sensors, and effectors to exploit meaningful geometric relationships. The next section introduces the key insight, which is to assign connectivity a geometric interpretation.

3.1 Geometric Connectivity Patterns

The main idea is to input into the CPPN the coordinates of the two points that define a connection rather than inputting only the position of a single point as in Section 2.1. The output is interpreted as the weight of the connection rather than the intensity of a point. This way, connections can be defined in terms of the locations that they connect, thereby taking into account the network's geometry.

Figure 3: Hypercube-based Geometric Connectivity Pattern Interpretation. A grid of nodes, called the substrate, is assigned coordinates such that the center node is at the origin. (1) Every potential connection in the substrate is queried to determine its presence and weight; the dark directed lines shown in the substrate represent a sample of connections that are queried. (2) For each query, the CPPN takes as input the positions of the two endpoints and (3) outputs the weight of the connection between them. In this way, connective CPPNs produce regular patterns of connections in space.

For example, consider a 5×5 grid of nodes. The nodes are assigned coordinates corresponding to their positions within the grid (labeled substrate in figure 3), where (0, 0) is the center of the grid. Assuming that these nodes and their positions are given a priori, a geometric connectivity pattern is produced by a CPPN that takes any two coordinates (source and target) as input and outputs the weight of their connection. The CPPN is queried in this way for every potential connection on the grid. Because the connection weights are thereby a function of the positions of their source and target nodes, the distribution of weights on connections throughout the grid will exhibit a pattern that is a function of the geometry of the coordinate system.

A CPPN in effect computes a four-dimensional function CPPN(x1, y1, x2, y2) = w, where the first node is at (x1, y1) and the second node is at (x2, y2). This formalism returns a weight for every connection between every pair of nodes in the grid, including recurrent connections. By convention, a connection is not expressed if the magnitude of its weight, which may be positive or negative, is below a minimal threshold w_min. The magnitudes of weights above this threshold are scaled to be between zero and a maximum magnitude in the substrate. That way, the pattern produced by the CPPN can represent any network topology (figure 3).

The connectivity pattern produced by a CPPN in this way is called the substrate so that it can be verbally distinguished from the CPPN itself, which has its own internal topology. Furthermore, in the remainder of this paper, CPPNs that are interpreted to produce connectivity patterns are called connective CPPNs, while CPPNs that generate spatial patterns are called spatial CPPNs. This paper focuses on neural substrates produced by connective CPPNs.

Because the CPPN is a function of four dimensions, the two-dimensional connectivity pattern expressed by the CPPN is isomorphic to a spatial pattern embedded in a four-dimensional hypercube. Thus, because CPPNs generate regular spatial patterns (Section 2.1), by extension they can be expected to produce geometric connectivity patterns with corresponding regularities. The next section demonstrates this capability.
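The query procedure of figure 3 can be made concrete with a small sketch. The CPPN here is a fixed, hand-written distance function standing in for an evolved network, and the threshold and scaling constants are invented for illustration:

```python
import math

# Illustrative constants (not taken from the paper).
W_MIN = 0.2   # minimal expression threshold w_min
W_MAX = 3.0   # maximum weight magnitude in the substrate

# A toy connective "CPPN": a fixed function of the two endpoint
# coordinates standing in for an evolved network. This one depends
# only on distance, so it expresses a locality pattern.
def cppn(x1, y1, x2, y2):
    d2 = (x2 - x1) ** 2 + (y2 - y1) ** 2
    return math.exp(-d2) - 0.3          # output in (-0.3, 0.7]

def build_substrate(n):
    """Query every potential connection of an n-by-n grid spanning [-1, 1]^2."""
    step = 2.0 / (n - 1)
    nodes = [(-1.0 + i * step, -1.0 + j * step)
             for j in range(n) for i in range(n)]
    weights = {}
    for src in nodes:
        for tgt in nodes:                # includes self and recurrent pairs
            w = cppn(*src, *tgt)
            if abs(w) > W_MIN:           # below-threshold connections not expressed
                # scale magnitudes in (W_MIN, 1] into (0, W_MAX]
                mag = (abs(w) - W_MIN) / (1.0 - W_MIN) * W_MAX
                weights[(src, tgt)] = math.copysign(mag, w)
    return weights

substrate = build_substrate(5)           # 25 nodes, 625 potential connections
assert len(substrate) < 625              # only some connections are expressed
assert substrate[((0.0, 0.0), (0.0, 0.0))] > 0.0   # self-connection at origin
```

Because every weight is computed from endpoint coordinates, the same function can be queried on a denser grid to produce a higher-resolution substrate expressing the same connectivity concept.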
Figure 7: Generalization and Scaling. The graphs show performance curves over 300 generations averaged over 20 runs each. (a) P-NEAT and HyperNEAT Generalization: P-NEAT is compared to HyperNEAT on both evaluation and generalization. (b) HyperNEAT Generation Champion Scaling: HyperNEAT generation champions with and without delta inputs are evaluated for their performance at 11 × 11, 33 × 33, and 55 × 55 substrate resolutions. The results show that HyperNEAT generalizes significantly better than P-NEAT (p < 0.01) and scales almost perfectly.
While the deltas did perform significantly better on average between generations 38 and 70 (p < 0.05), the CPPNs without delta inputs were able to catch up and reach the same level of performance after generation 70. Thus, although applicable geometric coordinate frames provide a boost to evolution, HyperNEAT is ultimately powerful enough to discover the concept of distance on its own.

Most importantly, both variants were able to scale almost perfectly from an 11 × 11 substrate with up to 14,641 connections to a 55 × 55 substrate with up to 9,150,625 connections, with no significant difference in performance after the second generation. This result is also significant because the higher-resolution substrates were tested on all valid object placements, which include many positions that did not even exist on the lower-resolution substrate. Thus, remarkably, CPPNs found solutions that lose no abilities at higher resolution!

High-quality CPPNs at the 55 × 55 resolution contained on average 8.3 million connections in their substrate and performed as well as their 11 × 11 counterparts. These substrates are the largest functional structures produced by evolutionary computation of which the authors are aware.

CPPN encoding is highly compact. If good solutions are those that achieve an average distance under 0.25, the average complexity of a good solution CPPN was only 24 connections. In comparison, at 11 × 11 resolution the average number of connections in the substrate was 12,827 out of the possible 14,641. Thus the genotype is on average smaller than the evaluated phenotype by a factor of 534!

5.2 Typical Solutions

Typical HyperNEAT solutions developed several different geometric strategies to solve the domain. These strategies are represented in the connectivity between the visual field and the target field. For example, one common strategy produces a soft halo around each active locus, creating the largest overlap at the center of the large object, causing it to activate highest. HyperNEAT discovered several other repeating motifs that are equally effective; thus, HyperNEAT is not confined to a single geometric strategy. Because HyperNEAT discovers how to repeat its strategy across the substrate, solutions were able to generalize by leveraging substrate geometry.

Examples of the halo activation pattern are shown in figure 8, which also shows how the activation pattern looks on substrates of different resolutions generated by the same CPPN. This motif is sufficiently robust that it works at variable resolutions, i.e. the encoding is scalable.

Although HyperNEAT generates ANNs with millions of connections, such ANNs can run in real time on most modern hardware. Using a 2.0GHz processor, an eight-million-connection network takes on average 3.2 minutes to create, but only 0.09 seconds to process a single trial.

6. DISCUSSION AND FUTURE WORK

Like other direct encodings, P-NEAT can only improve by discovering each connection strength individually. Furthermore, P-NEAT solutions cannot generalize to locations outside their training corpus because they have no means to represent the general pattern of the solution. In contrast, HyperNEAT discovers a general connectivity concept that naturally covers locations outside its training set. Thus, HyperNEAT creates a novel means of generalization, through substrate geometry. That is, HyperNEAT exploits the geometry of the problem by discovering its underlying regularities.

Because the solution is represented conceptually, the same solution effectively scales to higher resolutions, which is a new capability for ANNs. Thus it was possible to produce a working solution with over eight million connections, among the largest working ANNs ever produced through evolutionary computation. The real benefit of this capability, however, is that in the future, further evolution can be performed at the higher resolution.

For example, one potentially promising application is machine vision, where general regularities in the structure of images can be learned at low resolution. The resolution can then be increased, still preserving the learned concepts, and further evolved with higher granularity and more detailed images. For instance, a system that identifies the location of a face in a visual map might initially learn to spot a circle on a small grid and then continue to evolve at increasingly high resolutions with increasing face-related detail.
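The connection counts quoted in Section 5 follow from simple arithmetic: an n × n substrate has n² nodes and therefore (n²)² potential connections, and the genotype-to-phenotype compression factor is the ratio of expressed substrate connections to CPPN connections. A quick check:

```python
# An n x n substrate has n*n nodes; querying every ordered pair of
# nodes (including self- and recurrent connections) gives (n*n)**2
# potential connections.
def potential_connections(n):
    return (n * n) ** 2

assert potential_connections(11) == 14_641
assert potential_connections(55) == 9_150_625

# Genotype vs. phenotype: a good CPPN averaged 24 connections, while
# its 11 x 11 substrate averaged 12,827 expressed connections.
compression = 12_827 / 24
assert round(compression) == 534
```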
Figure 8: Activation Patterns of the Same CPPN at Different Resolutions. (a) Sensor Field (object placement); (b) 11×11 Target Field (12,827 connections); (c) 33×33 Target Field (1,033,603 connections); (d) 55×55 Target Field (7,970,395 connections). The object placements for this example are shown in (a). For the same CPPN, the activation levels of the substrate are shown at (b) 11 × 11, (c) 33 × 33, and (d) 55 × 55 resolutions. Each square represents a neuron in the target field. Darker color signifies higher activation, and a white X denotes the point of highest activation, which is correctly at the center of the large object in (a).
It is important to note that HyperNEAT is not restricted to state-space sandwich substrates. The method is sufficiently general to generate arbitrary two-dimensional or three-dimensional connectivity patterns, including hidden nodes and recurrence. Thus it applies to a broad range of high-resolution problems with potential geometric regularities. In addition to vision, these include controlling artificial life forms, playing board games, and developing new kinds of self-organizing maps.

7. CONCLUSIONS

This paper presented a novel approach to encoding and evolving large-scale ANNs that exploits task geometry to discover regularities in the solution. Connective Compositional Pattern Producing Networks (connective CPPNs) encode large-scale connectivity patterns by interpreting their outputs as the weights of connections in a network. To demonstrate this encoding, Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) evolved CPPNs that solve a simple visual discrimination task at varying resolutions. The solution required HyperNEAT to discover a repeating motif in neural connectivity across the visual field. It was able to generalize significantly better than a direct encoding and could scale to a network of over eight million connections. The main conclusion is that connective CPPNs are a powerful new generative encoding that can compactly represent large-scale ANNs with very few genes.

8. REFERENCES

[1] L. Altenberg. Evolving better representations through selective genome growth. In Proceedings of the IEEE World Congress on Computational Intelligence, pages 182–187, Piscataway, NJ, 1994. IEEE Press.
[2] P. J. Angeline. Morphogenic evolutionary computations: Introduction, issues and examples. In J. R. McDonnell, R. G. Reynolds, and D. B. Fogel, editors, Evolutionary Programming IV: The Fourth Annual Conference on Evolutionary Programming, pages 387–401. MIT Press, 1995.
[3] P. J. Bentley and S. Kumar. The ways to grow designs: A comparison of embryogenies for an evolutionary design problem. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-1999), pages 35–43, San Francisco, 1999. Kaufmann.
[4] J. C. Bongard. Evolving modular genetic regulatory networks. In Proceedings of the 2002 Congress on Evolutionary Computation, 2002.
[5] D. B. Chklovskii and A. A. Koulakov. Maps in the brain: What can we learn from them? Annual Review of Neuroscience, 27:369–392, 2004.
[6] P. M. Churchland. Some reductive strategies in cognitive neurobiology. Mind, 95:279–309, 1986.
[7] D. Federici. Evolving a neurocontroller through a process of embryogeny. In S. Schaal, A. J. Ijspeert, A. Billard, S. Vijayakumar, J. Hallam, and J.-A. Meyer, editors, Proceedings of the Eighth International Conference on Simulation and Adaptive Behavior (SAB-2004), pages 373–384, Cambridge, MA, 2004. MIT Press.
[8] G. J. Goodhill and M. A. Carreira-Perpiñán. Cortical columns. In L. Nadel, editor, Encyclopedia of Cognitive Science, volume 1, pages 845–851. MacMillan Publishers Ltd., London, 2002.
[9] G. S. Hornby and J. B. Pollack. Creating high-level components with a generative representation for body-brain evolution. Artificial Life, 8(3), 2002.
[10] A. Lindenmayer. Adding continuous components to L-systems. In G. Rozenberg and A. Salomaa, editors, L Systems, Lecture Notes in Computer Science 15, pages 53–68. Springer-Verlag, Heidelberg, Germany, 1974.
[11] A. P. Martin. Increasing genomic complexity by gene duplication and the origin of vertebrates. The American Naturalist, 154(2):111–128, 1999.
[12] J. F. Miller. Evolving a self-repairing, self-regulating, French flag organism. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2004), Berlin, 2004. Springer Verlag.
[13] E. Mjolsness, D. H. Sharp, and J. Reinitz. A connectionist model of development. Journal of Theoretical Biology, 152:429–453, 1991.
[14] O. Sporns. Network analysis, complexity, and brain function. Complexity, 8(1):56–60, 2002.
[15] K. O. Stanley. Comparing artificial phenotypes with natural biological patterns. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO) Workshop Program, New York, NY, 2006. ACM Press.
[16] K. O. Stanley. Exploiting regularity without development. In Proceedings of the AAAI Fall Symposium on Developmental Systems, Menlo Park, CA, 2006. AAAI Press.
[17] K. O. Stanley. Compositional pattern producing networks: A novel abstraction of development. Genetic Programming and Evolvable Machines Special Issue on Developmental Systems, 2007. To appear.
[18] K. O. Stanley, B. D. Bryant, and R. Miikkulainen. Real-time neuroevolution in the NERO video game. IEEE Transactions on Evolutionary Computation Special Issue on Evolutionary Computation and Games, 9(6):653–668, 2005.
[19] K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10:99–127, 2002.
[20] K. O. Stanley and R. Miikkulainen. A taxonomy for artificial embryogeny. Artificial Life, 9(2):93–130, 2003.
[21] K. O. Stanley and R. Miikkulainen. Competitive coevolution through evolutionary complexification. Journal of Artificial Intelligence Research, 21:63–100, 2004.
[22] A. Turing. The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society B, 237:37–72, 1952.
[23] J. D. Watson, N. H. Hopkins, J. W. Roberts, J. A. Steitz, and A. M. Weiner. Molecular Biology of the Gene, Fourth Edition. The Benjamin Cummings Publishing Company, Inc., Menlo Park, CA, 1987.
[24] M. J. Zigmond, F. E. Bloom, S. C. Landis, J. L. Roberts, and L. R. Squire, editors. Fundamental Neuroscience. Academic Press, London, 1999.