
Onaral, B., Cammarota, J. P. "Complexity, Scaling, and Fractals in Biomedical Signals." The Biomedical
Engineering Handbook: Second Edition. Ed. Joseph D. Bronzino. Boca Raton: CRC Press LLC, 2000.

59
Complexity, Scaling, and Fractals
in Biomedical Signals

Banu Onaral
Drexel University

Joseph P. Cammarota
Naval Air Warfare Center, Aircraft Division

59.1 Complex Dynamics
     Overcoming the Limits of Newtonian Mathematics • Critical Phenomena: Phase Transitions •
     An Illustration of Critical Phenomena: Magnetism • A Model for Phase Transitions: Percolation •
     Self-Organized Criticality • Dynamics at the Edge of Chaos

59.2 Introduction to Scaling Theories
     Fractal Preliminaries • Mathematical and Natural Fractals • Fractal Measures •
     Power Law and 1/f Processes • Distributed Relaxation Processes • Multifractals

59.3 An Example of the Use of Complexity Theory in the Development of a Model
     of the Central Nervous System

Complexity, a contemporary theme embraced by physical as well as social sciences, is concerned with the
collective behavior observed in composite systems in which long-range order is induced by short-range
interactions of the constituent parts. Complex forms and functions abound in nature. Particularly in
biology and physiology, branched, nested, granular, or otherwise richly packed, irregular, or disordered
objects are the rule rather than the exception. Similarly ubiquitous are distributed, broad-band phenomena that appear to fluctuate randomly. The rising science of complexity holds the promise to lead to
powerful tools to analyze, model, process, and control the global behavior of complex biomedical systems.
The basic tenets of the complexity theory rest on the revelation that large classes of complex systems
(composed of a multitude of richly interacting components) are reducible to simple rules. In particular,
the structure and dynamics of complex systems invariably exist or evolve over a multitude of spatial and
temporal scales. Moreover, they exhibit a systematic relationship between scales. From the biomedical
engineering standpoint, the worthwhile outcome is the ability to characterize these intricate objects and
processes in terms of straightforward scaling and fractal concepts and measures that often can be
translated into simple iterative rules. In this sense, the set of concepts and tools, emerging under the
rubric of complexity, complements the prediction made by the chaos theory that simple (low-order
deterministic) systems may generate complex behavior.
In their many incarnations, the concepts of complexity and scaling are playing a refreshingly unifying
role among diverse scientific pursuits; therein lie compelling opportunities for scientific discoveries and
technical innovations. Since these advances span a host of disciplines, hence different scientific languages,
cultures, and dissemination media, finding one's path has become confusing. One of the aims of this
presentation is to serve as a resource for key literature. We hope to guide the reader toward substantial
contributions and away from figments of fascination in the popular press that have tended to stretch
emerging concepts ahead of the rigorous examination of evidence and the scientific verification of facts.
This chapter is organized in three main parts. The first part is intended to serve as a primer for the
fundamental aspects of the complexity theory. An overview of the attendant notions of scaling theories
constitutes the core of the second part. In the third part, we illustrate the potential of the complexity
approach by presenting an application to predict acceleration-induced loss of consciousness in pilots.

59.1 Complex Dynamics


There exists a class of systems in which very complex spatial and temporal behavior is produced through
the rich interactions among a large number of local subsystems. Complexity theory is concerned with
systems that have many degrees of freedom (composite systems), are spatially extended (systems with
both spatial and temporal degrees of freedom), and are dissipative as well as nonlinear due to the interplay
among local components (agents). In general, such systems exhibit emergent global behavior. This means
that macroscopic characteristics cannot be deduced from the microscopic characteristics of the elementary components considered in isolation. The global behavior emerges from the interactions of the local
dynamics.
Complexity theories draw their power from recognition that the behavior of a complex dynamic system
does not, in general, depend on the physical particulars of the local elements but rather on how they
interact to collectively (cooperatively or competitively) produce the globally observable behavior. The
local agents of a complex dynamic system interact with their neighbors through a set of usually (very)
simple rules.
The emergent global organization that occurs through the interplay of the local agents arises without
the intervention of a central controller. That is, there is self-organization, a spontaneous emergence of
global order. Long-range correlations between local elements are not explicitly defined in such models,
but they are induced through local interactions. The global organization also may exert a top-down
influence on the local elements, providing feedback between the macroscopic and microscopic structures
[Forrest, 1990] (Fig. 59.1).

Overcoming the Limits of Newtonian Mathematics


Linearity, as well as the inherent predictive ability, was an important factor in the success of Newtonian
mechanics. If a linear system is perturbed by a small amount, then the system response will change by
a proportionally small amount. In nonlinear systems, however, if the system is perturbed by a small
amount, the response could be no change, a small change, a large change, oscillations (limit cycle), or
chaotic behavior. The response depends on the state of the system at the time it was perturbed.

FIGURE 59.1 A complex dynamic system.

Since
most of nature is nonlinear, the key to success in understanding nature lies in embracing this nonlinearity.
Another feature found in linear systems is the property of superposition. Superposition means that the
whole is equal to the sum of the parts. All the properties of a linear system can be understood through
the analysis of each of its parts. This is not the case for complex systems, where the interaction among
simple local elements can produce complex emergent global behavior.
Complexity theory stands in stark contrast to a purely reductionist approach that would seek to explain
global behavior by breaking down the system into its most elementary components. The reductionist
approach is not guaranteed to generate knowledge about the behavior of a complex system, since it is
likely that the information about the local interactions (which determine the global behavior) will not
be revealed in such an analysis. For example, knowing everything there is to know about a single ant will
reveal nothing about why an ant colony is capable of such complex behaviors as waging war, farming,
husbandry, and the ability to quickly adapt to changing environmental conditions. The approach that
complexity theory proposes is to look at the system as a whole and not merely as a collection of irreducible
parts.
Complexity research depends on digital computers for simulation of interactions. Cellular automata
(one of the principal tools of complexity) have been constructed to model sand piles, earthquakes, traffic
patterns, satellite communication networks, evolution, molecular autocatalysis, forest fires, and species
interactions (among others) [Toffoli & Margolus, 1987]. We note here that complexity is building on,
and in some cases unifying, developments made in the fields of chaotic dynamics [Devaney, 1992], critical
phenomena, phase transitions, renormalization [Wilson, 1983], percolation [Stauffer & Aharony, 1992],
neural networks [Harvey, 1994; Simpson, 1990], genetic algorithms [Goldberg, 1989] and artificial life
[Langton, 1989; Langton et al., 1992].

Critical Phenomena: Phase Transitions


For the purpose of this discussion, a phase transition can be defined as any abrupt change between the
physical and/or dynamic states of a system. The most familiar examples of phase transitions are between
the fundamental stages of matter: solid, liquid, gas, and plasma. Phase transitions are also used to define
other changes in matter, such as changes in the crystalline structure or state of magnetism. There are
also phase transitions in the dynamics of systems from ordered (fixed-point and limit-cycle stability) to
disordered (chaos). Determining the state of matter is not always straightforward. Sometimes the apparent
state of matter changes when the scale of the observation (macroscopic versus microscopic) is changed.
A critical point is a special case of phase transitions where order and disorder are intermixed at all scales
[Wilson, 1983]. At criticality, all spatial and temporal features become scale invariant or self-similar.
Magnetism is a good example of this phenomenon.

An Illustration of Critical Phenomena: Magnetism


The atoms of a ferromagnetic substance have more electrons with spins in one direction than in the
other, resulting in a net magnetic field for the atom as a whole. The individual magnetic fields of the
atoms tend to line up in one direction, with the result that there is a measurable level of magnetism in
the material. At a temperature of absolute zero, all the atomic dipoles are perfectly aligned. At normal
room temperature, however, some of the atoms are not aligned to the global magnetic field due to thermal
fluctuations. This creates small regions that are nonmagnetic, although the substance is still magnetic.
Spatial renormalization, or coarse graining, is the process of averaging the microscopic properties of the
substance over a specified range in order to replace the multiple elements with a single equivalent element.
If measurements of the magnetic property were taken at a very fine resolution (without renormalization), there would be some measurements that detect small pockets of nonmagnetism, although most
measurements would indicate that the substance was magnetic. As the scale of the measurements is
increased, i.e., spatially renormalized, the small pockets of nonmagnetism would be averaged out and
would not be measurable. Therefore, measurements at the larger scale would indicate that the substance
is magnetic, thereby decreasing its apparent temperature and making the apparent magnetic state dependent on the resolution of the measurements. The situation is similar (but reversed) at high temperatures.
That is, spatial renormalization results in apparently higher temperatures, since microscopic islands of
magnetism are missed because of the large areas of disorder in the material.
At the Curie temperature there is long-range correlation in both the magnetic and nonmagnetic
regions. The distribution of magnetic and nonmagnetic regions is invariant under the spatial renormalization transform. These results are independent of the scale at which the measure is taken, and the
apparent temperature does not change under the renormalization transform. This scale invariance (self-similarity) occurs at only three temperatures: absolute zero, infinity, and the Curie temperature. The
Curie temperature represents a critical point (criticality) in the tuning parameter (temperature) that
governs the phase transition from a magnetic to a nonmagnetic state [Peitgen & Richter, 1986].

A Model for Phase Transitions: Percolation


A percolation model is created by using a simple regular geometric framework and by establishing simple
interaction rules among the elements on the grid. Yet these models give rise to very complex structures
and relationships that can be described by using scaling concepts such as fractals and power laws. A
percolation model can be constructed on any regular infinite n-dimensional lattice [Stauffer & Aharony,
1992]. For simplicity, the example discussed here will use a two-dimensional finite square grid. In site
percolation, each node in the grid has only two states, occupied or vacant. The nodes in the lattice are
populated based on a uniform probability distribution, independent of the state of any other node. The
probability of a node being occupied is p (and thus the probability of a node being vacant is 1 − p).
Nodes that are neighbors on the grid link together to form clusters (Fig. 59.2).

FIGURE 59.2 A percolation network.

Clusters represent connections between nodes in the lattice. Anything associated with the cluster can
therefore travel (flow) to any node that belongs to the cluster. Percolation can describe the ability of
water to flow through a porous medium such as igneous rock, oil fields, or finely ground Colombian
coffee. As the occupation probability increases, the clusters of the percolation network grow from local
connectedness to global connectedness [Feder, 1988]. At the critical occupation probability, a cluster that
spans the entire lattice emerges. It is easy to see how percolation could be used to describe such phenomena
as phase transitions by viewing occupied nodes as ordered matter, with vacant nodes representing
disordered matter. Percolation networks have been used to model magnetism, forest fires, and the
permeability of ion channels in cell membranes.
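The site-percolation construction lends itself to a short simulation. The sketch below (lattice size, trial counts, and function names are our own illustrative choices, not from the references cited above) estimates how often a spanning cluster appears just below and just above the two-dimensional site-percolation threshold p_c ≈ 0.5927:

```python
import random
from collections import deque

def spans(L, p, seed=0):
    """Site percolation on an L x L grid: occupy each node with
    probability p and report whether an occupied cluster connects
    the top row to the bottom row."""
    rng = random.Random(seed)
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    seen = [[False] * L for _ in range(L)]
    q = deque()
    for j in range(L):
        if occ[0][j]:
            seen[0][j] = True
            q.append((0, j))
    while q:
        i, j = q.popleft()
        if i == L - 1:
            return True          # reached the bottom row: spanning cluster
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < L and 0 <= nj < L and occ[ni][nj] and not seen[ni][nj]:
                seen[ni][nj] = True
                q.append((ni, nj))
    return False

# Fraction of random lattices that percolate, just below and just above
# the 2-D site-percolation threshold p_c ~ 0.5927:
for p in (0.45, 0.65):
    hits = sum(spans(60, p, seed=s) for s in range(50))
    print("p = %.2f -> spanning fraction %.2f" % (p, hits / 50))
```

Below p_c almost no lattice develops a spanning cluster; slightly above p_c almost every one does, which is the abrupt onset of global connectedness described above.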

Self-Organized Criticality
The concept of self-organized criticality has been introduced as a possible underlying principle of
complexity [Bak et al., 1988; Bak & Chen, 1991]. The class of self-organized critical systems is spatially
extended, composite, and dissipative with many locally interacting degrees of freedom. These systems
have the capability to naturally evolve (i.e., there is no explicit tuning parameter such as temperature or
pressure) toward a critical state.
Self-organized criticality is best illustrated by a sand pile. Start with a flat plate. Begin to add sand one
grain at a time. The mound will continue to grow until criticality is reached. This criticality is dependent
only on the local interactions among the grains of sand. The local slope determines what will happen if
another grain of sand is added. If the local slope is below the criticality (i.e., flat) the new grain of sand
will stay put and increase the local slope. If the local slope is at the criticality, then adding the new grain
of sand will increase the slope beyond the criticality, causing it to collapse. The collapsing grains of sand
spread to adjoining areas. If those areas are at the criticality, then the avalanche will continue until local
areas with slopes below the criticality are reached. Long-range correlations (up to the length of the sand
pile) may emerge from the interactions of the local elements. Small avalanches are very common, while
large avalanches are rare. The size (and duration) of the avalanche plotted against the frequency of
occurrence of the avalanche can be described by a power law [Bak et al., 1988]. The sand pile seeks the
criticality on its own. The slope in the sand pile will remain constant regardless of even the largest
avalanches. These same power laws are observed in traffic patterns, earthquakes, and many other complex
phenomena.
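The sand-pile dynamics described above can be sketched with the Bak-Tang-Wiesenfeld cellular automaton [Bak et al., 1988]. The topple-at-4 rule is the standard idealization; the grid size, grain count, and function names below are arbitrary choices of ours:

```python
import random

def sandpile_avalanches(L=20, grains=10000, seed=1):
    """Bak-Tang-Wiesenfeld sandpile on an L x L grid: drop grains one at a
    time; any site holding 4 or more grains topples, sending one grain to
    each of its 4 neighbors (grains topple off the edge and are lost).
    Returns the size (number of topplings) of each grain's avalanche."""
    rng = random.Random(seed)
    z = [[0] * L for _ in range(L)]
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(L), rng.randrange(L)
        z[i][j] += 1
        size = 0
        stack = [(i, j)]
        while stack:
            a, b = stack.pop()
            if z[a][b] < 4:
                continue
            z[a][b] -= 4           # topple: shed one grain to each neighbor
            size += 1
            for na, nb in ((a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)):
                if 0 <= na < L and 0 <= nb < L:
                    z[na][nb] += 1
                    if z[na][nb] >= 4:
                        stack.append((na, nb))
        sizes.append(size)
    return sizes

sizes = sandpile_avalanches()
big = [s for s in sizes if s > 0]
print("avalanches:", len(big), " largest:", max(big))
print("size <= 10:", sum(s <= 10 for s in big),
      " size > 100:", sum(s > 100 for s in big))
```

With no tuning, the pile settles into a regime where small avalanches vastly outnumber large ones, the hallmark of the power-law statistics noted above.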

Dynamics at the Edge of Chaos


The dynamics of systems can be divided into several categories. Dynamic systems that exhibit a fixed-point stability will return to their initial state after being perturbed. A periodic evolution of states will
result from a system that exhibits a limit-cycle stability. Either of these systems may display a transient
evolution of states before the stable regions are reached. Dynamic systems also may exhibit chaotic
behavior. The evolution of states associated with chaotic behavior is aperiodic, well-bounded, and very
sensitive to initial conditions and resembles noise but is completely deterministic [Tsonis & Tsonis, 1989].
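These regimes can be made concrete with the logistic map x_{n+1} = r·x_n·(1 − x_n), a standard one-dimensional example (not taken from this chapter) whose long-run behavior is fixed-point, periodic, or chaotic depending on r:

```python
def logistic_orbit(r, x0=0.2, skip=1000, keep=8):
    """Iterate the logistic map x -> r*x*(1-x), discard the transient,
    and return the next few states rounded for readability."""
    x = x0
    for _ in range(skip):
        x = r * x * (1.0 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        out.append(round(x, 4))
    return out

print("r=2.8 (fixed point):", logistic_orbit(2.8))
print("r=3.2 (limit cycle):", logistic_orbit(3.2))
print("r=4.0 (chaotic):   ", logistic_orbit(4.0))
```

At r = 2.8 every sample is the same fixed point, at r = 3.2 the states alternate between two values, and at r = 4.0 the aperiodic, noise-like yet fully deterministic evolution appears.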
The criticality that lies between highly ordered and highly disordered dynamics has been referred to
as the edge of chaos [Langton, 1990] and is analogous to a phase transition between states of matter,
where the highly ordered system can be thought of as a solid and the highly disordered system a liquid.
The edge of chaos is the critical boundary between order and chaos. If the system dynamics are stagnant
(fixed-point stability, highly ordered system), then there is no mechanism for change. The system cannot
adapt and evolve because new states cannot be encoded into the system. If the system dynamics are
chaotic (highly disordered), then the system is in a constant state of flux, and there is no memory, no
learning, and no adaptation (some of the main qualities associated with life). Systems may exhibit
transients in the evolution of states before settling down into either fixed-point or limit-cycle behavior.
As the dynamics of a complex system enter the edge of chaos region, the length of these transients quickly
grows. The chaotic region is where the length of the transient is infinite. At the edge of chaos (the
dynamic phase transition) there is no characteristic scale due to the emergence of arbitrarily long
correlation lengths in space and time [Langton, 1990]. The self-organized criticality in the sand piles of
Per Bak is an example of a system that exists at the edge of chaos. It is in this region that there is no
characteristic space or time scale. A single grain of sand added to the pile could cause an avalanche that
consists of two grains of sand, or it could cause an avalanche that spreads over the entire surface of the
sand pile.

59.2 Introduction to Scaling Theories


Prior to the rise of complexity theories, the existence of a systematic relationship between scales eluded
the mainstream sciences. As a consequence, natural structures and dynamics have been commonly
dismissed as too irregular and complex and often rejected as monstrous formations, intractable noise,
or artifacts. The advent of scaling concepts [Mandelbrot, 1983] has uncovered a remarkable hierarchical
order that persists over a significant number of spatial or temporal scales.
Scaling theories capitalize on scale-invariant symmetries exhibited by many natural broadband (i.e.,
multiscale) phenomena. According to the theory of self-organized criticality (see Section 59.1), this scaling
order is a manifestation of dilation (compression) symmetries that define the organization inherent to
complex systems which naturally evolve toward a critical state while dissipating energies on broad ranges
of space and time scales. Long overlooked, this symmetry is now added to the repertoire of mathematical
modeling concepts, which had included approaches based largely on displacement invariances under
translation and/or rotation.
Many natural forms and functions maintain some form of exact or statistical invariance under transformations of scale and thus belong in the scaling category. Objects and processes that remain invariant
under ordinary geometric similarity constitute the self-similar subset in this class.
Methods to capture scaling information in the form of simple rules that relate features on different
scales are actively developed in many scientific fields [Barnsley, 1993]. Engineers are coping with the
scaling nature of forms and functions by investigating multiscale system theory [Basseville et al., 1992],
multiresolution and multirate signal processing [Akansu & Haddad, 1992; Vaidyanathan, 1993], subband
coding, wavelets, and filter banks [Meyer, 1993], and fractal compression [Barnsley & Hurd, 1993].
These emerging tools empower engineers to reexamine old data and to reformulate the question at
the root of many unresolved inverse problems: What can small patterns say about large patterns, and
vice versa? They also offer the possibility to establish cause-effect relationships between a given physical
(spatial) medium and the monitored dynamic (temporal) behavior that constitutes the primary preoccupation of diagnostic scientists.

Fractal Preliminaries
In the broadest sense, the noun or adjective fractal refers to physical objects or dynamic processes that
reveal new details on space or time magnification. A staple of a truly fractal object or process is therefore
the lack of characteristic scale in time or space. Most structures in nature are broadband over a finite
range, covering at least a number of frequency decades in space or time. Scaling fractals often consist of
a hierarchy or heterarchy of spatial or temporal structures in cascade and are often accomplished through
recursive replication of patterns at finer scales. If the replication rule preserves scale invariance throughout
the entity, such fractals are recognized as self-similar in either an exact or a statistical sense.
A prominent feature of fractals is their ability to pack structure with economy of resources, whether
energy, space, or whatever other real estate. Fitting nearly infinite networks into finite spaces is just one
such achievement. These types of fractals are pervasive in physiology, i.e., the branching patterns of the
bronchi, the cardiovascular tree, and the nervous tissue [West and Goldberger, 1987], which have the
additional feature of being fault tolerant [West, 1990].
Despite expectations heightened by the colorful publicity campaign mounted by promoters of fractal
concepts, it is advisable to view fractals only as a starting approximation in analyzing scaling shapes and
fluctuations in nature. Fractal concepts are usually descriptive at a phenomenologic level without pretense
to reveal the exact nature of the underlying elementary processes. They do not offer, for that matter,
conclusive evidence of whatever particular collective, coupled, or isolated repetitive mechanism that
created the fractal object.
In many situations, the power of invoking fractal concepts resides in the fact that they bring the logic
of constraints, whether in the form of asymmetry of motion caused by defects, traps, energy barriers,
residual memories, irreversibility, or any other appropriate interaction or coupling mechanisms that
hinder free random behavior. As discussed earlier, the spontaneous or forced organization and the ensuing
divergence in correlations and coherences that emerge out of random behavior are presumably responsible for the irregular structures pervasive throughout the physical world.
More important, the versatility of fractal concepts as a magnifying tool is rooted in the facility to
account for scale hierarchies and/or scale invariances in an exact or statistical sense. In the role of a scale
microscope, they suggest a fresh look, with due respect to all scales of significance, at many structural
and dynamic problems deemed thus far anomalous or insoluble.

Mathematical and Natural Fractals


The history of mathematics is rife with pathologic constructions of the iterated kind that defy
Euclidean dimension concepts. The collection once included an assortment of anomalous dust sets, lines,
surfaces, volumes, and other mathematical miscellanea, mostly born out of the continuous yet nondifferentiable category of functions such as the Weierstrass series.
The feature unifying these mathematical creations with natural fractals is a fractional or integer
dimension distinct from the Euclidean definition. Simply stated, a fractional dimension positions an
object between two integer dimensions in the Euclidean sense, best articulated by the critical dimension
in the Hausdorff-Besicovitch derivation [Feder, 1988]. When this notion of dimension is pursued to the
extreme and the dimension reaches an integer value, one is confronted with the counterintuitive reality
of space-filling curves, volume-filling planes, etc. These objects can be seen readily to share intrinsic
scaling properties with the nearly infinite networks accomplished by the branching patterns of bronchi
and blood vessels and the intricate folding of the cortex.
A rewarding outcome afforded by the advent of scaling concepts is the ability to characterize such
structures in terms of straightforward scaling or dimension measures. From these, simple iterative
rules may be deduced to yield models with maximum economy (or minimum number) of parameters
[Barnsley, 1993]. This principle is suspected to underlie the succinct coding adopted by nature in order
to store extensive information needed to create complex shapes and forms.

Fractal Measures
The measure most often used in the diagnosis of a fractal is the basic fractal dimension, which, in the
true spirit of fractals, has eluded a rigorous definition embracing the entire family of fractal objects. The
guiding factor in the choice of the appropriate measures is the recognition that most fractal objects scale
self-similarly; in other words, they can be characterized by a measure expressed in the form of a power
factor, or scaling exponent α, that links the change in the observed dependent quantity V to the
independent variable x as V(x) ∝ x^α [Falconer, 1990, p. 36]. Clearly, α is proportional to the ratio of the
logarithms of V(x) and x, i.e., α = log V(x)/log x. In the case of fractal objects, α is the scaling exponent
in the fractal sense and may have a fractional value. In the final analysis, most scaling relationships can
be cast into some form of a logarithmic dependence on the independent variable with respect to which
a scaling property is analyzed, the latter also expressed on the logarithmic scale. A number of dimension
formulas have been developed based on this observation, and comprehensive compilations are now
available [Falconer, 1990; Feder, 1988].
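As a worked illustration (our own sketch, with arbitrary point counts and box sizes, rather than a procedure prescribed in the references), the dimension of a Sierpinski gasket generated by the "chaos game" can be estimated by box counting, from the least-squares slope of log N(k) versus log k:

```python
import math
import random

def box_counting_dimension(points, ks):
    """Estimate a fractal dimension by box counting: cover the unit square
    with a k x k grid of boxes, count occupied boxes N(k), and return the
    least-squares slope of log N(k) versus log k."""
    logk = [math.log(k) for k in ks]
    logN = [math.log(len({(int(px * k), int(py * k)) for px, py in points}))
            for k in ks]
    n = len(ks)
    mk, mN = sum(logk) / n, sum(logN) / n
    return (sum((a - mk) * (b - mN) for a, b in zip(logk, logN))
            / sum((a - mk) ** 2 for a in logk))

# Points on the Sierpinski gasket via the "chaos game": repeatedly jump
# halfway toward a randomly chosen vertex of a triangle.
rng = random.Random(0)
verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
x, y = 0.1, 0.1
pts = []
for i in range(100000):
    vx, vy = rng.choice(verts)
    x, y = (x + vx) / 2.0, (y + vy) / 2.0
    if i > 100:                      # discard the transient iterates
        pts.append((x, y))

d = box_counting_dimension(pts, [4, 8, 16, 32])
print("estimated dimension: %.2f (theory: log 3/log 2 = 1.585)" % d)
```

The estimate typically lands near the theoretical value log 3/log 2 ≈ 1.585; agreement improves with more points and a wider range of box sizes.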
One approach to formalize the concept of scale invariance utilizes the homogeneity or renormalization
principle given by f(x) = f(ax)/b, where a and b are constants and x is the independent variable
[West & Goldberger, 1987]. The function f that satisfies this relationship is referred to as a scaling function.
The power-law function f(x) = x^α is a prominent example in this category, provided α = log b/log a. The
usefulness of this particular scaling function has been proven many times over in many areas of science,
including the thermodynamics of phase transitions and the threshold behavior of percolation networks
[Schroeder, 1991; Stauffer & Aharony, 1992; Wilson, 1983].
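The homogeneity relation is easy to verify numerically for a power law; the constants a and b below are arbitrary illustrative choices:

```python
import math

# Verify the homogeneity (renormalization) relation f(x) = f(a*x)/b for
# the power law f(x) = x**alpha with alpha = log(b)/log(a).
a, b = 3.0, 9.0
alpha = math.log(b) / math.log(a)        # = 2.0 for this choice of a, b

def f(x):
    return x ** alpha

for x in (0.5, 1.0, 7.0, 42.0):
    assert abs(f(x) - f(a * x) / b) < 1e-9 * max(1.0, f(x))

print("alpha = log b / log a =", alpha)
```

The check works for any positive a, b because f(ax)/b = a^α·x^α/b and a^α = a^(log b/log a) = b, leaving exactly x^α.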

Power Law and 1/f Processes


The revived interest in power-law behavior largely stems from the recognition that a large class of noisy
signals exhibits spectra that attenuate with a fractional power dependence on frequency [West &
Shlesinger, 1989; Wornell, 1993]. Such behavior is often viewed as a manifestation of the interplay of a
multitude of local processes evolving on a spectrum of time scales that collectively give rise to the
so-called 1/f or, more generally, 1/f-type behavior. As in the case of spatial fractals that lack a
characteristic length scale, 1/f processes such as fractional Brownian motion cannot be described adequately
within the confines of a characteristic time scale and hence exhibit the fractal time property [Mandelbrot, 1967].
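A 1/f^β signal can be synthesized spectrally by giving frequency bin k the amplitude k^(−β/2) and a random phase, then transforming back to the time domain. This is a textbook construction, not a method from this chapter; the plain O(n²) inverse DFT keeps the sketch dependency-free:

```python
import cmath, math, random

def one_over_f_noise(n, beta, seed=0):
    """Spectral synthesis of a 1/f^beta signal: give frequency bin k the
    amplitude k**(-beta/2) and a random phase, enforce Hermitian symmetry
    so the time series is real, and inverse-DFT (plain O(n^2) transform)."""
    rng = random.Random(seed)
    spec = [0j] * n
    for k in range(1, n // 2):
        amp = k ** (-beta / 2.0)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        spec[k] = amp * cmath.exp(1j * phase)
        spec[n - k] = spec[k].conjugate()   # real signal in the time domain
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

x = one_over_f_noise(256, 1.0)
print("samples:", len(x), " zero mean:", abs(sum(x)) < 1e-9)
```

By construction the power spectrum of the result falls off as 1/f^β, so a log-log periodogram of the series is a straight line of slope −β.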

Distributed Relaxation Processes


Since the latter part of the nineteenth century, the fractional power function dependence of the frequency
spectrum also has been recognized as a macroscopic dynamic property manifested by strongly interacting
dielectric, viscoelastic, and magnetic materials and interfaces between different conducting materials
[Daniel, 1967]. More recently, the 1/f-type dynamic behavior has been observed in percolating networks
composed of random mixtures of conductors and insulators and layered wave propagation in heterogeneous media [Orbach, 1986]. In immittance (impedance or admittance) studies, this frequency dispersion
has been analyzed conventionally to distinguish a broad class of the so-called anomalous, i.e., nonexponential, relaxation/dispersion systems from those which can be described by the ideal single exponential
form due to Debye [Daniel, 1967].
The fractal time or the multiplicity of time scales prevalent in distributed relaxation systems necessarily
translates into fractional constitutive models amenable to analysis by fractional calculus [Ross, 1977] and
fractional state-space methods [Bagley & Calico, 1991]. This corresponds to logarithmic distribution
functions ranging in symmetry from the log-normal with even center symmetry at one extreme to single-sided hyperbolic distributions with diverging moments at the other. The realization that systems that do
not possess a characteristic time can be described in terms of distributions renewed the interest in the
field of dispersion/relaxation analysis. Logarithmic distribution functions have been used conventionally
as means to characterize such complexity [West, 1994].

Multifractals
Fractal objects and processes in nature are rarely strictly homogeneous in their scaling properties and
often display a distribution of scaling exponents that echoes the structural heterogeneities occurring at a
myriad of length or time scales. In systems with spectra that attenuate following a pure power law over
extended frequency scales, as in the case of Davidson-Cole dispersion [Daniel, 1967], the corresponding
distribution of relaxation times is logarithmic and single-tailed. In many natural relaxation systems,
however, the spectral dimension exhibits a gradual dependence on frequency, as in phenomena conventionally modeled by the Cole-Cole type dispersion. The equivalent distribution functions exhibit double-sided symmetries on the logarithmic relaxation time scale ranging from the even symmetry of the log-normal through intermediate symmetries down to strictly one-sided functions.
The concept that a fractal structure can be composed of fractal subsets with uniform scaling property
within the subset has gained popularity in recent years [Feder, 1988]. From this perspective, one may
view a complicated fractal object, say, the strange attractor of a chaotic process, as a superposition of
simple fractal subsystems. The idea has been formalized under the term multifractal. It follows that
each individual member contributes to the overall scaling behavior according to a spectrum of scaling
exponents or dimensions. The latter function is called the multifractal spectrum and summarizes the
global scaling information of the complete set.

59.3 An Example of the Use of Complexity Theory in the Development of a Model of the Central Nervous System

Consciousness can be viewed as an emergent behavior arising from the interactions among a very large
number of local agents, which, in this case, range from electrons through neurons and glial cells to
networks of neurons. The hierarchical organization of the brain [Churchland & Sejnowski, 1992; Newell,
1990], which exists and evolves on a multitude of spatial and temporal scales, is a good example of the
scaling characteristics found in many complex dynamic systems. There is no master controller for this
emergent behavior, which results from the intricate interactions among a very large number of local
agents.
A model that duplicates the global dynamics of the induction of unconsciousness in humans due to
cerebral ischemia produced by linear acceleration stress (G-LOC) was constructed using some of the
tenets of complexity [Cammarota, 1994]. It was an attempt to provide a theory that could both replicate
historical human acceleration tolerance data and present a possible underlying mechanism. The model
coupled the realization that an abrupt loss of consciousness could be thought of as a phase transition
from consciousness to unconsciousness with the proposed neurophysiologic theory of G-LOC [Whinnery,
1989]. This phase transition was modeled using a percolation network to evaluate the connectivity of
neural pathways within the central nervous system.
In order to construct the model, several hypotheses had to be formulated to account for the unobservable interplay among the local elements of the central nervous system. The inspiration for the
characteristics of the locally interacting elements (the nodes of the percolation lattice) was provided by
the physiologic mechanism of arousal (the all-or-nothing aspect of consciousness), the utilization of
oxygen in neural tissue during ischemia, and the response of neural cells to metabolic threats. The
neurophysiologic theory of acceleration tolerance views unconsciousness as an active protective mechanism that is triggered by a metabolic threat, which in this case is acceleration-induced ischemia. The
interplay among the local systems is determined by using a percolation network that models the connectivity of the arousal mechanism (the reticular activating system). When normal neuronal function is
suppressed due to local cerebral ischemia, the corresponding node is removed from the percolation
network. The configuration of the percolation network varies as a function of time. When the network
is no longer able to support arousal, unconsciousness results.
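The percolation step described above can be illustrated with a generic site-percolation sketch (an illustration only, not the actual lattice, neighborhood, or parameters of the G-LOC model): active sites stand in for functioning local neural elements, sites suppressed by ischemia are removed, and a spanning path across the lattice stands in for connectivity sufficient to support arousal.

```python
import random
from collections import deque

def percolates(grid):
    """Return True if a path of active sites connects the top row to the
    bottom row of a square lattice (4-neighbor site percolation)."""
    n = len(grid)
    seen = set((0, c) for c in range(n) if grid[0][c])
    queue = deque(seen)
    while queue:
        r, c = queue.popleft()
        if r == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def fraction_percolating(n, occupancy, trials=200, seed=1):
    """Estimate the probability that a lattice with the given fraction of
    active sites still supports a spanning (arousal-sustaining) path."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < occupancy for _ in range(n)] for _ in range(n)]
        hits += percolates(grid)
    return hits / trials

# Near the site-percolation threshold (~0.593 for a 2D square lattice) the
# spanning probability changes abruptly -- the phase-transition behavior
# that motivates modeling loss of consciousness this way.
for occ in (0.4, 0.6, 0.8):
    print(f"occupancy {occ:.1f}: spanning fraction {fraction_percolating(20, occ):.2f}")
```

Sweeping the occupancy downward over time, as ischemia removes nodes, turns this static picture into a dynamic one: consciousness persists while the network spans, and is lost abruptly when it no longer does.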
The model simulated a wide range of human data with a high degree of fidelity. It duplicated the
population response (measured as the time it took to lose consciousness) over a range of stresses that
varied from a simulation of the acute arrest of cerebral circulation to a gradual application of acceleration
stress. Moreover, the model was able to offer a possible unified explanation for apparently contradictory
historical data. An analysis of the parameters responsible for the determination of the time of LOC
indicated that there is a phase transition in the dynamics that was not explicitly incorporated into the
construction of the model. The model spontaneously captured an interplay of the cardiovascular and
neurologic systems that could not have been predicted based on existing data.
The keys to the model's success are the reasonable assumptions that were made about the characteristics
and interaction of the local dynamic subsystems through the integration of a wide range of human and
animal physiologic data in the design of the model. None of the local parameters was explicitly tuned to
produce the global (input-output) behavior. By successfully duplicating the observed global behavior of
humans under acceleration stress, however, this model provided insight into some (currently) unobservable inner dynamics of the central nervous system. Furthermore, the model suggests new experimental
protocols specifically aimed at exploring further the microscopic interplay responsible for the macroscopic
(observable) behavior.
Defining Terms
1/f process: Signals or systems that exhibit spectra that attenuate following a fractional power-law dependence on frequency.
Cellular automata: Composite discrete-time and discrete-space dynamic systems defined on a regular
lattice. Neighborhood rules determine the state transitions of the individual local elements (cells).
Chaos: A state that produces a signal that resembles noise and is aperiodic, well-bounded, and very
sensitive to initial conditions but is governed by a low-order deterministic differential or difference
equation.
Complexity: Complexity theory is concerned with systems that have many degrees of freedom (composite systems), are spatially extended (systems with both spatial and temporal degrees of freedom),
and are dissipative as well as nonlinear due to the rich interactions among the local components
(agents). Some of the terms associated with such systems are emergent global behavior, collective
behavior, cooperative behavior, self-organization, critical phenomena, and scale invariance.
Criticality: A state of a system where spatial and/or temporal characteristics are scale invariant.
Emergent global behavior: The observable behavior of a system that cannot be deduced from the
properties of constituent components considered in isolation and results from the collective (cooperative or competitive) evolution of local events.
Fractal: Refers to physical objects or dynamic processes that reveal new details on space or time
magnification. Fractals lack a characteristic scale.
Fractional Brownian motion: A generalization of the random function created by the record of the
motion of a Brownian particle executing a random walk. Brownian motion is commonly used to
model diffusion in constraint-free media. Fractional Brownian motion is often used to model
diffusion of particles in constrained environments or anomalous diffusion.
Percolation: A simple mathematical construct commonly used to measure the extent of connectedness
in a partially occupied (site percolation) or connected (bond percolation) lattice structure.
Phase transition: Any abrupt change between the physical and/or the dynamic states of a system,
usually between ordered and disordered organization or behavior.
Renormalization: Changing the characteristic scale of a measurement through a process of systematic
averaging applied to the microscopic elements of a system (also referred to as coarse graining).
Scaling: Structures or dynamics that maintain some form of exact or statistical invariance under
transformations of scale.
Self-organization: The spontaneous emergence of order. This occurs without the direction of a global
controller.
Self-similarity: A subset of objects and processes in the scaling category that remain invariant under
ordinary geometric similarity.

References
Akansu AN, Haddad RA. 1992. Multiresolution Signal Decomposition: Transforms, Subbands, and Wavelets. New York, Academic Press.
Bagley R, Calico R. 1991. Fractional order state equations for the control of viscoelastic damped structures.
J Guidance 14(2):304.
Bak P, Tang C, Wiesenfeld K. 1988. Self-organized criticality. Phys Rev A 38(1):364.
Bak P, Chen K. 1991. Self-organized criticality. Sci Am Jan:45.
Barnsley MF. 1993. Fractals Everywhere, 2d ed. New York, Academic Press.
Barnsley MF, Hurd LP. 1993. Fractal Image Compression. Wellesley, AK Peters.
Basseville M, Benveniste A, Chou KC, et al. 1992. Modeling and estimation of multiresolution stochastic
processes. IEEE Trans Information Theory 38(2):766.
Cammarota JP. 1994. A Dynamic Percolation Model of the Central Nervous System under Acceleration
(+Gz) Induced/Hypoxic Stress. Ph.D. thesis, Drexel University, Philadelphia.
Churchland PS, Sejnowski TJ. 1992. The Computational Brain. Cambridge, Mass, MIT Press.
Daniel V. 1967. Dielectric Relaxation. New York, Academic Press.
Devaney RL. 1992. A First Course in Chaotic Dynamical Systems: Theory and Experiment. Reading,
Mass, Addison-Wesley.
Falconer K. 1990. Fractal Geometry: Mathematical Foundations and Applications. New York, Wiley.
Feder J. 1988. Fractals. New York, Plenum Press.
Forrest S. 1990. Emergent computation: Self-organization, collective, and cooperative phenomena in
natural and artificial computing networks. Physica D 42:1.
Goldberg DE. 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, Mass,
Addison-Wesley.
Harvey RL. 1994. Neural Network Principles. Englewood Cliffs, NJ, Prentice-Hall.
Langton CG. 1989. Artificial Life: Proceedings of an Interdisciplinary Workshop on the Synthesis and
Simulation of Living Systems, September 1987, Los Alamos, New Mexico, Redwood City, Calif,
Addison-Wesley.
Langton CG. 1990. Computation at the edge of the chaos: Phase transitions and emergent computation.
Physica D 42:12.
Langton CG, Taylor C, Farmer JD, Rasmussen S. 1992. Artificial Life II: Proceedings of the Workshop
on Artificial Life, February 1990, Sante Fe, New Mexico. Redwood City, Calif, Addison-Wesley.
Mandelbrot B. 1967. Some noises with 1/f spectrum, a bridge between direct current and white noise.
IEEE Trans Information Theory IT-13(2):289.
Mandelbrot B. 1983. The Fractal Geometry of Nature. New York, WH Freeman.
Meyer Y. 1993. Wavelets: Algorithms and Applications. Philadelphia, SIAM.
Newell A. 1990. Unified Theories of Cognition. Cambridge, Mass, Harvard University Press.
Orbach R. 1986. Dynamics of fractal networks. Science 231:814.
Peitgen HO, Richter PH. 1986. The Beauty of Fractals. New York, Springer-Verlag.
Ross B. 1977. Fractional calculus. Math Mag 50(3):115.
Schroeder M. 1990. Fractals, Chaos, Power Laws. New York, WH Freeman.
Simpson PK. 1990. Artificial Neural Systems: Foundations, Paradigms, Applications, and Implementations. New York, Pergamon Press.
Stauffer D, Aharony A. 1992. Introduction to Percolation Theory, 2d ed. London, Taylor & Francis.
Toffoli T, Margolus N. 1987. Cellular Automata Machines: A New Environment for Modeling. Cambridge,
Mass, MIT Press.
Tsonis PA, Tsonis AA. 1989. Chaos: Principles and implications in biology. Comput Appl Biosci 5(1):27.
Vaidyanathan PP. 1993. Multi-rate Systems and Filter Banks. Englewood Cliffs, NJ, Prentice-Hall.
West BJ. 1990. Physiology in fractal dimensions: Error tolerance. Ann Biomed Eng 18:135.
West BJ. 1994. Scaling statistics in biomedical phenomena. In Proceedings of the IFAC Conference on
Modeling and Control in Biomedical Systems, Galveston, Texas.
West BJ, Goldberger A. 1987. Physiology in fractal dimensions. Am Scientist 75:354.
West BJ, Shlesinger M. 1989. On the ubiquity of 1/f noise. Int J Mod Phys 3(6):795.
Whinnery JE. 1989. Observations on the neurophysiologic theory of acceleration (+Gz) induced loss of
consciousness. Aviat Space Environ Med 60:589.
Wilson KG. 1983. The renormalization group and critical phenomena. Rev Mod Phys 55(3):583.
Wornell GW. 1993. Wavelet-based representations for the 1/f family of fractal processes. Proc IEEE
81(10):1428.

© 2000 by CRC Press LLC
