
EMERGENCE IN HISTORY

The origin of the modern concept of emergence can be traced to the middle
of the nineteenth century when realist philosophers first began pondering the
deep dissimilarities between causality in the fields of physics and chemistry.
The classical example of causality in physics is a collision between two
molecules or other rigid objects. Even in the case of several colliding
molecules the overall effect is a simple addition. If, for example, one
molecule is hit by a second one in one direction and by a third one in a
different direction the composite effect will be the same as the sum of the
two separate effects: the first molecule will end up in the same final position
if the other two hit it simultaneously or if one collision happens before the
other. In short, in these causal interactions there are no surprises, nothing is
produced over and above what is already there. But when two molecules
interact chemically an entirely new entity may emerge, as when hydrogen
and oxygen interact to form water. Water has properties that are not
possessed by its component parts: oxygen and hydrogen are gases at room
temperature while water is liquid. And water has capacities distinct from
those of its parts: adding oxygen or hydrogen to a fire fuels it while adding
water extinguishes it.
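To make the additivity claim concrete, here is a minimal sketch in Python (the numbers and the bare impulse model are my own illustrative assumptions, not part of the text): each impact simply adds a displacement, so the molecule's final position is the same whether the pushes arrive together or one after the other.

    # Illustrative sketch only: each impact is reduced to a displacement vector.
    def apply_impacts(position, impacts):
        """Return the final position after adding each impact's displacement."""
        x, y = position
        for dx, dy in impacts:
            x, y = x + dx, y + dy
        return (x, y)

    start = (0.0, 0.0)
    push_from_second_molecule = (2.0, 0.0)   # a push along x (made-up numbers)
    push_from_third_molecule = (0.0, 3.0)    # a push along y (made-up numbers)

    # Applied one after the other, in either order, or as a single summed push,
    # the molecule ends up in the same place: nothing new is produced.
    a = apply_impacts(start, [push_from_second_molecule, push_from_third_molecule])
    b = apply_impacts(start, [push_from_third_molecule, push_from_second_molecule])
    c = apply_impacts(start, [(2.0, 3.0)])
    assert a == b == c == (2.0, 3.0)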
The fact that novel properties and capacities emerge from a causal
interaction was believed to have important philosophical implications for the
nature of scientific explanation. In particular, the absence of novelty in
physical interactions meant that explaining their effects could be reduced to
deduction from general principles or laws. Because deductive logic simply
transfers truth from general sentences to particular ones without adding
anything new it seemed like an ideal way of modeling the explanation of
situations like those involving rigid collisions. But the synthesis of water does
produce something new, not new in the absolute sense of something that has
never existed before but only in the relative sense that something emerges
that was not in the interacting entities acting as causes. This led some
philosophers to the erroneous conclusion that emergent effects could not be
explained, or what amounts to the same thing, that an effect is emergent
only for as long as a law from which it can be deduced has not yet been
found. This line of thought went on to become a full-fledged philosophy in
the early twentieth century, a philosophy based on the idea that emergence
was intrinsically unexplainable. This first wave of emergentist philosophers
were not mystical thinkers but quite the opposite: they wanted to use the
concept of emergence to eliminate from biology mystifying entities like a life
force or the élan vital. But their position towards explanation gave their
views an inevitable mystical tone: emergent properties, they said, must be
accepted with an attitude of intellectual resignation, that is, they must be
treated as brute facts towards which the only honest stance is one of natural
piety.
Expressions like these were bound to make the concept of emergence
suspect to future generations of philosophers. It was only the passage of time
and the fact that mathematical laws like those of classical physics were not
found in chemistry or biology, or, for that matter, in the more historical
fields of physics, like geology or climatology, that would rescue the concept
from intellectual oblivion. Without simple laws acting as self-evident truths
(axioms) from which all causal effects could be deduced as theorems, the
axiomatic dream eventually withered away. Today a scientific explanation is
identified not with some logical operation but with the more creative
endeavor of elucidating the mechanisms that produce a given effect. The
early emergentists dismissed this idea because they could not imagine
anything more complex than a linear clockwork mechanism. But there are
many other physical mechanisms that are nonlinear. Even in the realm of
human technology we have a plurality of exemplars to guide our imagination:
steam engines, thermostats, transistors. And outside technology the diversity
is even greater as illustrated by all the different mechanisms that have been
discovered in chemistry and biology. Armed with a richer concept of
mechanism the emergent properties of a whole can now be explained as an
effect of the causal interactions between its component parts. A large portion
of this book will be dedicated to describing the wide variety of mechanisms of
emergence that have been elucidated in the decades since the original
emergentists first wrote.
Thus, what is different today from the early twentieth century views is the
epistemological status of emergence: it does not have to be accepted as a
brute fact but can be explained without fearing that it will be explained away.
What has remained the same is the ontological status of emergence: it still
refers to something that is objectively irreducible. But what kinds of entities
display this ontological irreducibility? The original examples of irreducible
wholes were entities like Life, Mind, or even Deity. But these entities
cannot be considered legitimate inhabitants of objective reality because they
are nothing but reified generalities. And even if one does not have a problem
with an ontological commitment to entities like these it is hard to see how we
could specify mechanisms of emergence for life or mind in general, as
opposed to accounting for the emergent properties and capacities of concrete
wholes like a metabolic circuit or an assembly of neurons. The only problem
with focusing on concrete wholes is that this would seem to make
philosophers redundant since they do not play any role in the elucidation of
the series of events that produce emergent effects. This fear of redundancy
may explain the attachment of philosophers to vague entities as a way of
carving out a niche for themselves in this enterprise. But realist philosophers
need not fear irrelevance because they have plenty of work creating an
ontology free of reified generalities within which the concept of emergence
can be correctly deployed.
What kinds of concrete emergent wholes can we legitimately believe in?
Wholes the identity of which is determined historically by the processes that
initiated and sustain the interactions between their parts. The historically
contingent identity of these wholes is defined by their emergent properties,
capacities, and tendencies. Let's illustrate the distinction between properties
and capacities with a simple example. A kitchen knife may be either sharp or
not, sharpness being an actual property of the knife. We can identify this
property with the shape of the cross section of the knife's blade: if this cross
section has a triangular shape then the knife is sharp; otherwise it is blunt. This
shape is emergent because the metallic atoms making up the knife must be
arranged in a very particular way for it to be triangular. There is, on the other
hand, the capacity of the knife to cut things. This is a very different thing
because unlike the property of sharpness which is always actual the capacity
to cut may never be actual if the knife is never used. In other words, a
capacity may remain only potential if it is never actually exercised. This
already points to a very different ontological status between properties and
capacities. In addition, when the capacity does become actual it is not as a
state, like the state of being sharp, but as an event, an event that is always
double: to cut-to be cut. The reason for this is that the knife's capacity to
affect is contingent on the existence of other things, cuttable things, that
have the capacity to be affected by it. Thus, while properties can be specified
without reference to anything else capacities to affect must always be
thought in relation to capacities to be affected. Finally, the ontological
relation between properties and capacities displays a complex symmetry. On
one hand, capacities depend on properties: a knife must be sharp to be able
to cut. On the other, the properties of a whole emerge from interactions
between its component parts, interactions in which the parts must exercise
their own capacities: without metallic atoms exercising their capacity to bond
with one another the knife's sharpness would not exist.
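The distinction can be restated in a small, purely illustrative Python sketch (the Knife and Material classes and their attributes are hypothetical, invented for this example): sharpness is an actual state of the blade, while cutting only happens as a double event that also requires something with the capacity to be cut.

    # Hypothetical classes chosen to mirror the property/capacity distinction.
    class Material:
        def __init__(self, name, cuttable):
            self.name = name
            self.cuttable = cuttable          # a capacity to be affected

    class Knife:
        def __init__(self, cross_section):
            self.cross_section = cross_section

        @property
        def sharp(self):
            # An actual property: a state of the blade itself.
            return self.cross_section == "triangular"

        def cut(self, material):
            # A capacity: only actual as a double event, to cut / to be cut,
            # which requires a partner with the capacity to be affected.
            return self.sharp and material.cuttable

    knife = Knife(cross_section="triangular")
    bread = Material("bread", cuttable=True)
    granite = Material("granite", cuttable=False)

    print(knife.sharp)         # True even if the knife is never used
    print(knife.cut(bread))    # True: the capacity is exercised as an event
    print(knife.cut(granite))  # False: no cuttable partner, no event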
A similar distinction can be made between emergent properties and
tendencies. To stick to the same example: a knife has the property of solidity,
a property that is stable within a wide range of temperatures. Nevertheless,
there are always environments that exceed that range, environments in
which the temperature becomes so intense that the knife is forced to
manifest the tendency to liquefy. At even greater intensities the molten metal
may gasify. These tendencies are as emergent as the shape of a knife's
blade: a single metallic atom cannot be said to be solid, liquid, or gas; we
need a large enough population of interacting atoms for the tendency to be in
any of these states to emerge. Tendencies are similar to capacities in their
ontological status, that is, they need not be actual to be real, and when they
do become actual it is as events: to melt or to solidify. The main difference
between tendencies and capacities is that while the former are typically finite
the latter need not be. We can enumerate, for example, the possible states in
which a material entity will tend to be (solid, liquid, gas, plasma) or the
possible ways in which it may tend to flow (uniformly, periodically,
turbulently). But capacities to affect need not be finite because they depend
on the capacities to be affected of innumerable other entities: a knife has the
capacity to cut when it interacts with something that has the capacity to be
cut; but it also has the capacity to kill if it interacts with large organisms with
differentiated organs, that is, with entities that have the capacity to be killed.
Since neither tendencies nor capacities must be actual to be real it would be
tempting to give them the status of possibilities. But the concept of a
possible event is philosophically suspect because it is almost
indistinguishable from that of a real event, the only difference being the
former's lack of reality. Rather, what is needed is a way of specifying the
structure of the space of possibilities that is defined by an entity's tendencies
and capacities. A philosopher's ontological commitment should be to the
objective existence of this structure and not to the possibilities themselves
since the latter exist only when entertained by a mind. Some possibility
spaces are continuous having a well defined spatial structure that can be
investigated mathematically, while others are discrete, possessing no
inherent spatial order but being nevertheless capable of being studied
through the imposition of a certain arrangement. The space of possible
regimes of flow (uniform, periodic, turbulent) is an example of a continuous
possibility space in which the only discontinuities are the critical points
separating the different tendencies. The space of possible genes, on the
other hand, is an example of a discrete space that must be studied by
imposing an order on it, such as an arrangement in which every gene has as
neighbors other genes differing from it by a single mutation. As we will see in
the different chapters of this book the structure of possibility spaces plays as
great a role in the explanation of emergence as do mechanisms.
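As a concrete illustration of the discrete case, the following short Python sketch (a toy example of my own, using the four-letter DNA alphabet) imposes the arrangement just mentioned: every sequence is assigned as neighbors all the sequences that differ from it by a single point mutation.

    # Toy model of a discrete possibility space: genes as DNA strings, with
    # neighbors defined as all sequences one point mutation away.
    ALPHABET = "ACGT"

    def single_mutation_neighbors(gene):
        """All sequences differing from `gene` at exactly one position."""
        neighbors = []
        for i, base in enumerate(gene):
            for other in ALPHABET:
                if other != base:
                    neighbors.append(gene[:i] + other + gene[i + 1:])
        return neighbors

    gene = "ACG"
    neighbors = single_mutation_neighbors(gene)
    print(len(neighbors))  # 3 positions x 3 alternative bases = 9 neighbors
    print(neighbors[:3])   # ['CCG', 'GCG', 'TCG']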
The chapters are deliberately arranged in a way that departs from the ideas
of the original emergentists. These philosophers believed that entities like
Space-Time, Life, Mind, and Deity (not god but the sense of the
sacred that emerges in some minds) formed a pyramid of progressively
ascending grades. Although the levels of this pyramid were not supposed to
imply any teleology it is hard not to view each level as leading to the next
following a necessary sequence. To eliminate this possible interpretation an
entirely different image is used here, that of a contingent accumulation of
layers or strata that may differ in complexity but that coexist and interact
with each other in no particular order: a biological entity may interact with a
subatomic one, as when neurons manipulate concentrations of metallic ions,
or a psychological entity interact with a chemical one, as when subjective
experience is modified by a drug. The book begins with purely physical
entities, thunderstorms, that are already complex enough to avoid the idea
that their behavior can be deduced from a general law. It then moves on to
explore the prebiotic soup, bacterial ecosystems, insect intelligence,
mammalian memory, primate social strategies, and the emergence of trade,
language, and institutional organizations in human communities. Each of
these layers will be discussed in terms of the mechanisms of emergence
involved, drawing ideas and insights from the relevant fields of science, as
well as in terms of the structure of their possibility spaces, using the results
of both mathematical analysis and computer simulations.
Simulations are partly responsible for the restoration of the legitimacy of the
concept of emergence because they can stage interactions between virtual
entities from which properties, tendencies, and capacities actually emerge.
Since this emergence is reproducible in many computers it can be probed and
studied by different scientists as if it were a laboratory phenomenon. In other
words, simulations can play the role of laboratory experiments in the study of
emergence complementing the role of mathematics in deciphering the
structure of possibility spaces. And philosophy can be the mechanism
through which these insights can be synthesized into an emergent materialist
world view that finally does justice to the creative powers of matter and
energy.
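As a minimal illustration of the kind of simulation invoked above (not one of the simulations discussed in the book), the following Python sketch runs an elementary cellular automaton: each cell interacts only with its two neighbors, yet a global pattern reproducibly emerges that is nowhere written into the local rule.

    # Elementary cellular automaton, rule 90: purely local interactions from
    # which a global, reproducible pattern emerges. Width and run length are
    # arbitrary choices for display.
    def step(cells):
        """Rule 90: a cell's next state is the XOR of its two neighbors."""
        n = len(cells)
        return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

    width, generations = 63, 32
    cells = [0] * width
    cells[width // 2] = 1                  # a single "on" cell in the middle

    for _ in range(generations):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)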
MANUEL DELANDA
[Note: The following essay is the Introduction to a new book by Manuel
DeLanda, easily the most important philosopher of the present day,
concerning topics and concepts of particular relevance, I believe, for
contemporary and future architects. It is considerably longer than my usual
posts, but the clarity of DeLanda's writing makes it a compelling read. It is
published here under rights of Fair Use in international copyright law,
meaning for educational and research purposes only. The book, entitled
Philosophy and Simulation: The Emergence of Synthetic Reason, will
be published by Continuum, London, in January 2011. LW]
