
Chaos: What Is It?

by Amara L. Graps

Table of Contents

Introduction
Dynamical Systems
What is an Attractor?
Strange Attractors and Chaotic Behavior
The Beauty of Fractals
Degree of Chaos
Information Theory
Emergent Order: A Philosophical Discussion
Sources

Written in 1988.


Introduction

A sector of the scientific community has reshaped the way that we view the world around us. The short-hand name for this sector is chaos. It is a name for both the discipline itself (also called nonlinear science, experimental mathematics, and the study of dynamical systems) and the physical behavior of a dynamical system when it shows a sensitive dependence on initial conditions. Chaos is a science of the global nature of systems. At the center of this set of ideas is the notion that simple, deterministic systems can breed complexity, and that systems too complex for traditional mathematics can yet obey simple laws. Nature is intrinsically nonlinear, and in the past nonlinear was nearly synonymous with nonsolvable.

An essential difference exists between linear and nonlinear systems. Mathematically, linear equations are special in that any two solutions can be added together to form a new solution. As a consequence, analytic methods have been established for solving any linear system: you simply break the complicated system into many simple pieces and patch together the separate solutions for each piece to form a solution to the full problem. In contrast, two solutions to a nonlinear system cannot be added together to form a new solution; the system must be treated in its full complexity. No general analytic approach exists for solving nonlinear systems. Nonlinear equations can generate either order or chaos, and for those that generate chaotic motion there are no useful analytic solutions. So what is chaotic motion? And if no useful analytic solutions exist, how can a nonlinear system be modeled? This paper will attempt to answer those two questions.
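The superposition claim above can be made concrete with a short numerical check. The sketch below is my own illustration (not from the original paper): it Euler-integrates a linear system dx/dt = -x and a nonlinear system dx/dt = -x^3, and tests whether the sum of two solutions is again a solution.

```python
# Superposition check: for the linear system dx/dt = -x the sum of two
# solutions is again a solution; for the nonlinear dx/dt = -x**3 it is not.

def integrate(f, x0, dt=0.01, steps=1000):
    """Forward-Euler trajectory of dx/dt = f(x) starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return xs

residuals = {}
for label, f in [("linear", lambda x: -x), ("nonlinear", lambda x: -x ** 3)]:
    a = integrate(f, 1.0)
    b = integrate(f, 2.0)
    c = integrate(f, 3.0)   # trajectory started from the summed initial condition
    # If superposition held, a + b would track c exactly.
    residuals[label] = max(abs(ai + bi - ci) for ai, bi, ci in zip(a, b, c))

print(residuals["linear"])     # essentially zero: the sum IS a solution
print(residuals["nonlinear"])  # order one: the sum is NOT a solution
```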

Dynamical Systems

Two separate problems occur in nonlinear science. One is purely mathematical: given an equation, what are its properties and how can it be solved? The second problem is physical: given a process in the real world, how can we write down the best mathematical description or model? We must be able to address both problems. In nonlinear science, modeling is performed on what are called dynamical systems. A dynamical system is a physical system that evolves in time according to some well-defined rule. It is characterized by the fact that the rate of change of its variables is given as a function of the values of the variables at that time. Examples are Maxwell's equations, the Navier-Stokes equations, and Newton's equations of motion for a particle with suitably specified forces.

Modeling of dynamical systems is done in what is called phase space. This is the space that has position as one axis and velocity as the other axis. If we assign n to be the number of degrees of freedom of the particle's motion, then the number of dimensions of the phase space is 2n. Each point in this space represents the complete behavior of the system. For example, the position and velocity of a pendulum with 1 degree of freedom at any instant in time is a point in a 2-dimensional phase space. Notice that for a more complicated system the number of dimensions of the phase space is enormous, because the space is constructed by assigning coordinates to every independent variable (every degree of freedom requires 2 more dimensions in phase space). In fact, the dimension is taken to be infinite in the general hydrodynamic description. Since all of the information about a system is stored in a point at one instant in time, the evolution of a system can be charted by the moving point tracing its path through phase space. The time evolution is often called a trajectory or an orbit of a point in phase space.
The set of orbits originating from all possible initial conditions generates a flow in this space, governed by a set of 2n first-order coupled differential equations:

dx_i/dt = F_i(x_1, x_2, ..., x_n; v_1, v_2, ..., v_n),   i = 1, ..., n

and:

dv_i/dt = G_i(x_1, x_2, ..., x_n; v_1, v_2, ..., v_n),   i = 1, ..., n

where n is the number of degrees of freedom. The change in position and velocity is added to the previous values of the position and velocity to get a new (x_i, v_i) point in phase space. In other words:

x_{t+1} = x_t + F_t(x_1, x_2, ..., x_n; v_1, v_2, ..., v_n) for position, and v_{t+1} = v_t + F_t(x_1, x_2, ..., x_n; v_1, v_2, ..., v_n) for velocity, where t represents a particular time. Therefore a natural sequence of discrete points (x_i, v_i) can be singled out, one per unit of time, to give us an n-dimensional recurrence relation in which output values of the t-th generation (x_1, x_2, ..., x_n; v_1, v_2, ..., v_n) are fed right back into F_t to produce the (t+1)-st generation. So a multidimensional point (x_1, x_2, ..., x_n; v_1, v_2, ..., v_n) jumps from one discrete location in phase space to another as time is incremented. For example, let Φ_0 be the initial multidimensional point (x_1, x_2, ..., x_n; v_1, v_2, ..., v_n), and suppose F is linear with F(Φ_0) = kΦ_0. Then in the above notation:

Φ_1 = F(Φ_0) + Φ_0 = kΦ_0 + Φ_0 = Φ_0(k + 1),
Φ_2 = F(Φ_1) + Φ_1 = F(Φ_0(k + 1)) + Φ_0(k + 1) = kΦ_0(k + 1) + Φ_0(k + 1) = Φ_0(k + 1)^2,
Φ_3 = F(Φ_2) + Φ_2 = Φ_0(k + 1)^3,
Φ_4 = F(Φ_3) + Φ_3 = Φ_0(k + 1)^4, etc.

In relating the coupled recursion relation, which assumes discrete time steps, to the continuous time variable in the differential equation, dynamicists assume that a natural period is used in the recursion. It is not always possible to find a natural period. Fortunately, in dissipative systems driven by a periodic force (which are the systems that are usually the most interesting to scientists), there is a natural frequency defined. The system will head for some kind of steady state. Such a steady state is a stable orbit. So continuous orbits can be replaced by discrete orbits, which allows us to use the recursion relation.

One advantage of thinking of states as points in space is that it makes change easier to watch. If some combination of variables never occurs, then a scientist can simply imagine that part of the space is out of bounds. The system will never evolve there. If a system behaves periodically, then the point will move around in a loop, passing through the same position in phase space again and again. The motion of a point in phase space must always be nonself-intersecting. This arises from the fact that a point in phase space representing the state of a system encodes all the information

about the system, including its future history, so that there cannot be two different pathways leading out of one and the same point.

Scientists are usually interested in the long-term behavior of dynamical systems. So if an initial condition is picked and allowed to evolve for a long time, what will be the nature of the motion after all of the short-lived motions have died out? For dynamical systems with friction or some other form of dissipation (i.e. a nonconservative system), the system will eventually approach a restricted region of the phase space called an attractor. This is the solution set of the dynamical system. If we know the structure of the attractor, then we can sensibly claim that we know all the important things about the solution of our differential equation.
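The closed form Φ_t = Φ_0(k + 1)^t derived earlier for a linear F is easy to confirm numerically. The following sketch is my own illustration: it picks a dissipative k between -1 and 0, so that the orbit contracts toward the origin, and compares the iterated recurrence against the closed form.

```python
def iterate_phase_point(phi0, k, steps):
    """Iterate phi_{t+1} = phi_t + F(phi_t) with the linear rule F(phi) = k * phi."""
    phi = phi0
    history = [phi]
    for _ in range(steps):
        phi = phi + k * phi          # add the change to the previous value
        history.append(phi)
    return history

phi0, k = 1.0, -0.5                  # k in (-1, 0): dissipative, orbit contracts
orbit = iterate_phase_point(phi0, k, 10)
closed_form = [phi0 * (k + 1) ** t for t in range(11)]

print(orbit[-1])                     # 0.0009765625  (= 0.5**10)
print(closed_form[-1])               # 0.0009765625  (the two agree exactly)
```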

What is an Attractor?

As the name implies, nearby initial conditions are attracted; the set of points that are attracted forms the basin of attraction. A dynamical system can have more than one attractor, each with its own basin, in which case different initial conditions lead to different types of long-term behavior. (Rigorously speaking, the mathematical solutions never actually arrive at their attractor, but only approach it exponentially.) The region (volume) of the 2n-dimensional phase space occupied by the attractor is in general very small relative to the whole of phase space. A property specific to dissipative systems is that the volume of any set of initial conditions in phase space diminishes on the average in time. Chaoticists express this property by saying that the flow contracts volumes in phase space, or that the dimensionality of the phase space is reduced. This property is in contrast to classical conservative systems, where the dimensionality of the solutions of the phase space always stays fixed. The following figure shows one example of a contraction of a volume in phase space:

Figure 1a. Contraction of volume in phase space.

The dimension of an attractor is the first level of knowledge necessary to characterize an attractor's properties. The dimension gives the amount of information necessary to specify the position of a point on the attractor to within a given accuracy. The dimension is also a lower bound on the number of essential variables needed to model the dynamics. Even if the dimensionality of a phase space is reduced for the attractor, that does not mean that the flow contracts lengths in all directions. Some directions may be stretched, provided some others are so much contracted that the final volume is smaller than the initial volume. The next two examples illustrate this:

Figure 1b. Contraction of volume in phase space, with stretching of length.


Figure 1c. Contraction of volume, stretching of length, and folding in phase space.

The simplest attractor in phase space is a fixed point. An example is the damped harmonic oscillator:

Figure 2. Damped Harmonic Oscillator. Equations: dx/dt = y, dy/dt = -x - y.

With fixed points, motion in phase space eventually stops; the system is attracted toward one point and stays there. Regardless of its initial position, the pendulum will eventually come to rest in a vertical position. Similarly, if a glass of water is shaken and then placed on a table, the water eventually approaches a state of uniform rest as its solution set. This is true despite the fact that the water's phase space is initially effectively infinite in dimension. The dimensionality of the solution of this system is reduced to zero.

Another example of an attractor is a periodic cycle called a limit cycle. Limit cycles represent a spontaneous sustained motion that is not necessarily explicitly present in the equations describing the dynamical system. One case where this happens is Van der Pol's dynamical model for a radio transmitter (where the dimensionality of the solution set is reduced from two to one):

Figure 3. Van der Pol's Transmitter. Equations: dx/dt = y, dy/dt = -x + y(1 - x^2).
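Both kinds of attractor can be observed with a few lines of numerical integration. The sketch below is my own illustration (plain Euler stepping with a small step size, not code from the paper): the damped oscillator collapses onto its fixed point at the origin, while the Van der Pol oscillator settles onto its limit cycle.

```python
def euler_orbit(deriv, x, y, dt=0.001, steps=50000):
    """Euler-integrate (dx/dt, dy/dt) = deriv(x, y); return the final 10% of points."""
    pts = []
    for _ in range(steps):
        dx, dy = deriv(x, y)
        x, y = x + dt * dx, y + dt * dy
        pts.append((x, y))
    return pts[int(0.9 * steps):]          # keep only the post-transient tail

damped = lambda x, y: (y, -x - y)                      # fixed-point attractor at (0, 0)
vanderpol = lambda x, y: (y, -x + y * (1 - x * x))     # limit-cycle attractor

tail1 = euler_orbit(damped, 2.0, 0.0)
tail2 = euler_orbit(vanderpol, 0.1, 0.0)

print(max(abs(x) + abs(y) for x, y in tail1))   # ~0: collapsed onto the fixed point
print(max(abs(x) for x, y in tail2))            # ~2: the Van der Pol cycle amplitude
```

Note that the Van der Pol orbit is started from a tiny displacement and grows outward onto the cycle; starting far outside the cycle would spiral inward onto the same curve, which is what makes it an attractor.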


Strange Attractors and Chaotic Behavior


In dissipative systems, attractors such as the two just mentioned (stationary and periodic) often exist, and so do strange attractors. If a strange attractor is found in your dynamical system, then you know it is chaotic. The two examples just cited are oscillators without any forcing. When a forcing term is added to the system, you have more dimensions in your phase space, and the orbits can converge onto an object that is neither a fixed point nor a limit cycle. The following figure is an example. This strange attractor depicts the chaotic behavior of a rotor: a pendulum swinging through a full circle, driven by an energetic kick at regular intervals.

Figure 4. Different Orbits of the Rotor Strange Attractor.

To see the structure within a strange attractor, chaoticists use a technique to reduce a three-dimensional structure to a two-dimensional structure. The technique is called taking a Poincaré section. It involves taking a slice perpendicularly through the middle of the attractor. Each time the trajectory passes through the plane, it marks a point, and a pattern emerges. The following two illustrations show this for the rotor:

Figure 5. 1000 Orbits of the Rotor Strange Attractor with Slice.

So the perpendicular slice looks like:

Figure 6. Poincaré Section of the Rotor Strange Attractor.

Another type of Poincaré section is shown in the following figure. The dynamics of this system is also a periodically driven and damped pendulum. The motion of the point in phase space is plotted once every cycle of the driving force. The variable labeled Position plots the angle of the pendulum in units of 2π. The multiple images result from motions in which the pendulum swings over the top.

Figure 7. Driven Damped Harmonic Oscillator.

The system is chaotic, then, if the motion of a point in phase space (the trajectory) of the dynamical system follows a strange attractor. Strange attractors arise from the fact that some directions in phase space volumes can be stretched. The long-term motion of the system is unstable within the attractor, and the instability manifests itself in the following properties:

1) A small difference in the current position leads to an enormous difference in position later on.

2) The trajectory will eventually come arbitrarily close to every point of the attractor.

Property 1) is called sensitive dependence on initial conditions. Property 2) means that a single trajectory pierces every little region in the basin of attraction. A trajectory on a strange attractor exhibits most of the properties intuitively associated with random functions, although no randomness is ever explicitly added. The equations of motion are purely deterministic; the random behavior emerges spontaneously from the nonlinear system. This is often referred to as deterministic chaos. Over short times it is possible to follow the trajectory of each point, but over longer periods small differences in position are greatly amplified, so that predictions of long-term behavior are impossible.

One of the most famous strange attractors arises from an innocent-looking set of nonlinear differential equations that model convection in the atmosphere. The meteorologist E.N. Lorenz discovered these in his studies of the weather in the 1960s:

dx/dt = -10x + 10y
dy/dt = -xz + 28x - y
dz/dt = xy - (8/3)z

In these equations x measures the rate of convective overturning, y measures the horizontal temperature variation, and z measures the vertical temperature variation. The attractor is roughly a two-dimensional sheet. The following two figures show this attractor:

Figure 8. Lorenz Attractor.

And from a different view:

Figure 9. Lorenz Attractor from another perspective.

The Lorenz equations can be viewed as stretching this sheet out and then folding it over onto itself, the way a baker would fold bread dough. The process repeats itself over and over, and the attractor develops an infinitely folded structure. Objects of this type are called fractals. The folding process, in some sense, thickens the sheet, giving the attractor a dimension that is between two and three.

The Beauty of Fractals

All of the experimentally known chaotic attractors are characterized by fractal microstructure. Geometrically this makes sense, because the orbit of the point must be drawn to a limited space yet never repeat itself and never cross itself. The orbit would have to be an infinitely long line in a finite area: it would have to be a fractal. The most beautiful aspect of fractals is a quality of self-similarity. This is symmetry across scale. Any section of a strange attractor, when blown up, reveals itself to be just as exquisitely detailed as the larger picture from which it was taken. There is infinite regress of detail in fractals.

The following set of illustrations shows this. These orbits are Hénon's attractor, which was one of the earliest strange attractors to be found. It is generated by the sequence of points x_{n+1} = y_n + 1 - 1.4x_n^2 and y_{n+1} = 0.3x_n. The small square in each illustration is blown up in the next illustration, to reveal yet finer detail. Note that the detail has the same form as the picture preceding it. This is the self-similar quality of fractals.

Figure 10. Hénon's Attractor: an example of fractal structure.

The lower the dimension of the fractal, the simpler the structure. The dimension can give you some knowledge of the distribution or density of points on the attractor. To clarify the notion of fractal dimension, imagine a box which contains a small region of an attractor. If this box is subdivided into smaller boxes, some fraction of the smaller boxes will contain pieces of the attractor, while the rest will not. The number of piece-containing boxes will scale as L^d, where L is the factor by which each dimension of the box is divided. This construction defines the fractal dimension:

d = lim_{ε→0} log N(ε) / log(1/ε)

where N(ε) is the number of boxes, with sides of length ε, necessary to cover the attractor. For a plane, d equals two by this construction, the same as the topological dimension. The simplest chaotic attractors (those lying in a three-dimensional phase space) have fractal dimension between two and three; d then measures how closely packed the sheets of an attractor are.

As a final note on fractals I would like to mention that there is a branch of dynamical systems that is not concerned with the final long-term chaotic behavior of a system, which has been our concern throughout this paper. This branch's main focus is how a system chooses between competing options when it has more than one nonchaotic steady state. With more than one attractor, there is more than one basin of attraction, and this new field of mathematics and physics is the study of fractal basin boundaries. It turns out that fractals appear at the boundary between one kind

of steady behavior and another for a point in phase space. Near the boundary, it becomes impossible to predict which steady state the system will ultimately choose. The pictures of these boundaries are fantastic swirls, with different colors representing different steady states. Often the fractals are Mandelbrot and Julia sets. It is these pictures that are usually shown when fractals are discussed. (The picture on the cover of this paper is part of the Mandelbrot set.)
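A crude numerical version of the box-counting construction can be run on the Hénon attractor from the previous section. The sketch below is my own illustration; estimating the dimension from only two box sizes is rough, so the result only approximates the accepted value of about 1.26.

```python
import math

def henon_points(n, skip=100):
    """Iterate the Henon map, discard a transient, and return n attractor points."""
    x, y = 0.0, 0.0
    pts = []
    for i in range(n + skip):
        x, y = y + 1 - 1.4 * x * x, 0.3 * x
        if i >= skip:
            pts.append((x, y))
    return pts

def occupied_boxes(pts, eps):
    """Number of eps-sized boxes containing at least one point of the orbit."""
    return len({(math.floor(x / eps), math.floor(y / eps)) for x, y in pts})

pts = henon_points(200000)
n1 = occupied_boxes(pts, 1 / 64)
n2 = occupied_boxes(pts, 1 / 128)
d = math.log(n2 / n1) / math.log(2)   # slope of log N(eps) versus log(1/eps)
print(d)                              # crude estimate, near the accepted ~1.26
```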

Degree of Chaos

After the dimension, the next most important property of a dynamical system's attractor is its Lyapunov Characteristic Exponents (LCEs). The LCEs provide the connection between the average stability qualities of an attractor and its dimension. There are as many LCEs as there are dimensions in the phase space of the dynamical system. The spectrum of LCEs:

- when negative, measures the average rate of exponential convergence of trajectories onto the attractor, and
- when positive, measures the average rate of exponential divergence of nearby trajectories within the attractor.

The magnitude of an attractor's positive exponents is a measure of its degree of chaos. The positive characteristic exponent λ is found by computing:

λ = lim_{n→∞} (1/n) Σ_{i=1}^{n} ln |dF/dx|_{x=x_i}

where ln |dF/dx|_{x=x_i} is the log of the slope of the map F at x_i, and n is the number of iterations. Chaoticists, then, often take λ > 0 as their definition of chaos. These numbers generally depend on the choice of initial conditions, and they make quantitative the notion of sensitive dependence on initial conditions. Experimentally, however, the LCEs have been found to take the same value for trajectories on the same chaotic attractor. Some examples of LCEs: in three dimensions, a dynamical system with a fixed-point attractor has all negative exponents as its LCE spectrum, denoted (- - -). A limit cycle attractor has an LCE spectrum of (0 - -). Chaotic attractors in three dimensions have an LCE spectrum of (+ 0 -). In this case the positive exponent indicates exponential divergence of nearby trajectories in the direction transverse to the flow, and the negative exponent indicates exponential contraction onto the attractor. So there is a stretching of the attractor as the trajectories evolve, but since the attractor lies within a bounded region of phase space, the attractor must also exhibit folding. As implied in the fractals section, stretching and folding is the hallmark of strange attractors. The topological dimension of the attractor is directly related to the number of nonnegative characteristic exponents. Since the dimension and the value of the positive LCEs can be computed, a fair amount of knowledge can be gained from a nonlinear system.
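This average-of-log-slopes prescription can be tried on a one-dimensional example. The sketch below is my own illustration using the logistic map F(x) = 4x(1 - x), a standard chaotic map whose positive exponent is known to be ln 2.

```python
import math

def lyapunov_logistic(x0, n=100000):
    """Average ln|dF/dx| along an orbit of the chaotic logistic map F(x) = 4x(1-x)."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(4 - 8 * x))   # |dF/dx| = |4 - 8x|
        x = 4 * x * (1 - x)
    return total / n

lam = lyapunov_logistic(0.3)
print(lam)            # converges toward ln 2 ~ 0.693: a positive LCE, hence chaos
```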

Information Theory

It is often said that chaos theory brings together scientific disciplines that are normally very widely separated. One discipline called information theory brings surprising insights into how dynamical systems work. It provides a very interesting link between macroscopic and microscopic physical studies.

Information theory was developed after World War II, largely by Claude Shannon at Bell Laboratories. He was concerned with such problems as the transmission of television images over telephone lines, and those problems got him thinking about the nature of information. He renounced all interpretations of the conventional meaning of the word, defined it operationally, and gave it a measure. He defined the information associated with some measurement to be:

H = - Σ_i P_i log_2 P_i

where the P_i are the probabilities assigned to each possible outcome in the measurement of some message. This definition has the property that if we have a completely determined outcome with probability unity, then the information content is zero. We learn nothing new from the message. So in essence, H is a measure of the surprise of an occurrence; the less a priori knowledge we have, the more information it contains. The units of information are bits, the information contained in the outcome of an even binary experiment, such as an unbiased coin flip.

To measure dynamical systems, Robert Shaw at UC Santa Cruz used information theory to provide the concept of information creation and destruction. By creation he meant that some previously inaccessible information in an expanding flow in phase space, such as a random fluctuation of the heat bath, has been brought up to macroscopic scales. By destruction he meant that some accessible information has been destroyed by the contracting flow in phase space. Chaotic systems act as an information source, bringing into the macroscopic variables information not implicit in the initial conditions. For example, two initial conditions that are different but indistinguishable at a certain experimental precision will evolve into distinguishable states after a finite time.

To explain this idea we need a link to microscopic scales. The link to microscopic scales is the Uncertainty Principle. Increasing the accuracy of a measurement increases the information available. However, there is a limit to the resolution; the physical nature of reality limits the information we can learn about a given system to a particular number. In phase space there is a finite minimum size that it can be divided into:

Δx Δp ≥ h

In statistical mechanics these blocks are referred to as states. So a system which has access to a finite volume of phase space can be found in one of only a finite number of states. The Uncertainty Principle guarantees an error in the observation of the initial state of some orbit, and given this error, the position of an orbit will be causally disconnected from its initial condition in a finite time. Thus any prediction as to its position after that time is in principle impossible. The system is then chaotic. It generates a steady stream of information. These concepts can easily be quantified. The rate of information creation or destruction is:

dH/dt = d(log η)/dt = (1/V)(dV/dt)

where η is the number of distinguishable states arising from some block value and V is the volume in phase space per state. The last term, the volume derivative, is called the Lie derivative. In the cases where the number of distinguishable states is simply proportional to the volume, the Lie derivative gives directly the information creation or destruction rate. However, the number of distinguishable states η(t) arising from some initial block volume need not be directly proportional to the volume change under the flow. I've mentioned previously how some of the axes may be stretched and others compressed. In the context of information theory this means that there

will be some minimum uncertainty along each dimension (according to the best resolution of some measuring instrument), which influences how stretched or compressed the phase space is in that direction. If η(t) is a polynomial, then the information obtained from repeated observations of the system saturates with time: the system is predictable. If η(t) is an exponential, then dH/dt remains positive, and the system continues to be an information source and is unpredictable.

Lyapunov Characteristic Exponents were discussed in the last section, and they have a very direct relevance in information theory. The LCEs govern the rate at which information is lost to the macroscopic world, and they compute the running average of the size of the uncertainty interval. The sign of the LCEs answers the question: does an initial uncertainty interval get mapped into a larger or smaller interval after some large number of iterations? The map refers to the recursive procedure of applying the function F to a phase space volume a certain number of times. If the sign of the LCE is negative, then a small interval taken around a point is being mapped to a smaller and smaller size, and the map will eventually fall within the original interval. This is the criterion for a stable periodic orbit. For a chaotic orbit, a small interval taken around a point is being mapped to a larger and larger size (in one or several of the dimensions), which folds over on itself. This gives the fractal microstructure. Another way to view the larger mapping is that any uncertainty interval will be amplified by a factor dy/dx (the slope of the map at a particular point), and this will happen at every point of the map. Nearby trajectories (wherever they exist) will be further separated with each iteration. In the Degree of Chaos section a mathematical definition was given to compute the positive LCE.
If you know the probability density for finding the trajectory at a particular point, then you can compute the LCE with the following formula:

λ = H_avg = ∫ P(x) log |dy/dx| dx

where P(x) is the probability density and log |dy/dx| is the log of the slope of the map at a particular point. This links the amount of information H to the LCE: λ is the average information change over the entire interval. If you don't know the probability density, you can compute λ with the definition given in the previous section; that is, by iterating the map and keeping track of the average log of the slope.

What else can the LCEs tell you? Since they govern the rate at which (initial) information is lost to the macroscopic world, you can calculate how many iterations are required in your recursive system before your initial data is erased. All you need for this is the information known at some initial time, and the amount of this information lost per iteration of the map:

n=

Hinitial ||
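As a concrete illustration of this horizon (my own sketch; the 40-bit initial precision, and the use of the logistic map with its λ = ln 2 nats, i.e. one bit, lost per iteration, are my choices, not the paper's): two orbits that agree to 40 bits should become macroscopically different after roughly 40 iterations.

```python
import math

def logistic_orbit(x0, n):
    """Orbit of the chaotic logistic map F(x) = 4x(1-x)."""
    xs = [x0]
    for _ in range(n):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return xs

h_initial = 40 * math.log(2)     # initial knowledge: 40 bits, expressed in nats
lam = math.log(2)                # information lost per iteration of this map
horizon = h_initial / abs(lam)   # n = H_initial / |lambda| = 40 iterations

a = logistic_orbit(0.3, 80)
b = logistic_orbit(0.3 + 2 ** -40, 80)    # same state to ~40 bits of precision
gaps = [abs(p - q) for p, q in zip(a, b)]

print(horizon)                   # ~40 iterations until the initial data is erased
print(max(gaps[:10]))            # still microscopic, well before the horizon
print(max(gaps[50:]))            # macroscopic, well past the horizon
```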

You can also put a steepness parameter into your mapping function and calculate the LCE as the steepness is increased. Physically, the steepness parameter could be the flow speed of a fluid or the driving speed of a pendulum. At some critical value of the steepness parameter, the LCE becomes positive, the influence of any initial data is lost, and we have chaotic motion. The division between chaotic and non-chaotic motion, or between strange and non-strange attractors, is usually sharp. This corresponds qualitatively to our informational use of the LCE, because as λ changes from negative to positive, the system goes from destroying information to creating it.

Is this what we see in the real world? Usually yes. A good example is the division between laminar and turbulent fluid flow. Laminar flow means that 1) each molecule in the fluid follows the same path as its predecessor (streamlines) and 2) two nearby molecules will, as time passes, slowly separate from each other in proportion to the difference in their velocities, i.e. linearly. Turbulence in a fluid is

the state where there is disorder on all scales: small eddies within large ones. It is unstable and highly dissipative. Disturbances grow catastrophically. Since this system is chaotic, turbulent flows are an informational source. In fact, an estimate of the information production rate dH/dt can be made (which R. Shaw has done), and when the strange attractor is found in the Navier-Stokes equations, this value can be tested. The chief qualitative difference between laminar and turbulent flow is in the direction of information flow between the macroscopic and microscopic length scales. In laminar flow, motion is governed by boundary and initial conditions; no new information is generated by the flow, hence the motion is in principle predictable. On the other hand, turbulent motion is governed by information generated continuously by the flow itself and is unpredictable. R. Shaw illustrates this point:

Figure 11. Direction of Information Flow between Macroscopic and Microscopic Length Scales.

The Second Law of Thermodynamics says that there is a universal increase in entropy; thus energy in both the turbulent and laminar cases moves from macroscopic to microscopic degrees of freedom. (This relationship between the Second Law and quantization in physics was remarked upon by Fermi and Von Neumann, among others, during the conceptual development of quantum mechanics, and the resolution of many paradoxes of the Maxwell's Demon variety usually requires the invocation of the quantum principle in some form.) However, causality in a dissipative system can act either way; thus the direction of information flow differs. To sum up this section I'll quote R. Shaw: "Thus the ceaseless and tumultuous flow of events in the world reflects in a very direct way the chaotic motion of the heat bath. The constant injection of new information into the macroscales may place severe limits on our predictive ability, but it as well insures the constant variety and richness of our experience."



Emergent Order: A Philosophical Discussion

The ideas of some of the previous sections lead to a concept which I'll call emergent order. Chaotic systems are completely unpredictable, but when an attractor is found, some stable (by stable I mean a perseverance of form through time) structure emerges from the combination of individual elements. Each element has its own goal due to some constraints, and through the elements' connections and the combination of their properties, an unplanned, unpredictable order arises. I like to think of this order as a type of attractor.

What systems exhibit emergent order? There are at least three: 1) Markets, 2) Ecologies/Evolution, and 3) Minds. In all three of these systems, millions of things are operating under similar constraints (while at the same time operating in unique local conditions). In Markets, people are constrained by rights. In Ecologies, living creatures are constrained by the environment and other creatures. In a Mind, the neurons are constrained by the physiology of the brain.

In Free Market Economics, people exchange goods in a mutually reinforcing process. The prices and wages form themselves into an overall pattern. Adam Smith called such patterns invisible-hand explanations: "Every individual intends only his own gain, and he is in this, and in so many other cases, led by an invisible hand to promote an end which was no part of his intention." Another characteristic of Free Market Economics is that it is very difficult to predict events; one can't possibly know all there is to know about the system. Effects such as weather, human beings' decisions, and technological innovations have widespread influence. So is the overall pattern an attractor? There were speculations in the early days of chaos research that cotton prices followed Lorenz's chaotic attractor. If such an attractor exists, then it represents a stable long-term behavior. It seems likely that any interference would induce an instability.
The interference would entail someone or something (Kings, Governments...) trying to impose a single goal, or a small number of goals, upon the system. Since each element already has its own goals, this higher-order goal wreaks havoc. Friedrich Hayek and Milton Friedman make this point with regard to government attempts to stabilize the economy and the money supply: attempting to stabilize a complex system artificially often increases the instability rather than decreases it.

An ecology is also a complex system. I'm lumping Evolution into this category because the two are very interrelated; the time scales are the main difference. (Evolutionary time scales are considerably longer.) As in Free Market Economics, there is no central controller, i.e. the control is distributed. The order that emerges is unplanned and unpredictable. As in all three of these systems, there is no analytic solution; the only way to calculate the system is to run it out. A system with this property can be called computationally incompressible.

What sort of stable structures occur in an Ecological/Evolutionary system? One can conjecture that, given a large enough lattice, self-reproducing and self-organizing structures would probably appear. These could be the attractors. It has also been noted by Paul Ehrlich that attempts by Man to artificially stabilize an ecosystem often increase its instability. The maximum stability occurs when no attempt is made to simplify the system by imposing a single goal, or a small number of goals, upon it.

The third system, a Mind, has analogous processes. Marvin Minsky conjectures that Mind is an emergent property of the interaction of many agents. The agents are the algorithms, while the neurons are the hardware. The agents do simple, unintelligent tasks and are not significant individually, but their interactions perform amazingly complex processes. One can speculate that the attractor here is simply the Mind itself. More similarities can be found in these three systems.
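The notion of computational incompressibility can be made concrete with a minimal sketch. This is my own illustration, not an example from the original discussion: I use the logistic map, a standard toy chaotic system, as a stand-in for a complex system. There is no shortcut formula for the state after n steps; the only way to find it is to run all n steps.

```python
# A minimal sketch of computational incompressibility, assuming the
# logistic map x' = r*x*(1 - x) as a stand-in for a chaotic system.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

def run_out(x0, steps, r=3.9):
    # No closed-form shortcut here: the only way to learn the state
    # after `steps` iterations is to perform every iteration.
    x = x0
    for _ in range(steps):
        x = logistic(x, r)
    return x

state_a = run_out(0.400000, 100)
state_b = run_out(0.400001, 100)  # a nearby start wanders elsewhere
```

The two trajectories start almost identically yet end up in different states, which is the same unpredictability-under-determinism that the systems above exhibit.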
In all three of these systems recursion exists: make your output your next input and keep running the system. As described in the section on Information Theory, what you get over time is more a function of the process itself than of the initial conditions. Information is created and destroyed. This leads to the concept of irreversibility. I would speculate that these three systems exhibit irreversibility: you cannot look at their present information state and infer their history of information processing.
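The irreversibility claim can be illustrated with the same kind of toy system (again my own sketch, not an example from the paper). The logistic map is two-to-one, since x and (1 - x) produce the same output, so two different pasts collapse into one identical present:

```python
# Sketch of information destruction, assuming the logistic map
# x' = r*x*(1 - x): distinct states x and (1 - x) map to the same
# output, so the past cannot be inferred from the present.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

# Two different "pasts" with an identical "present":
same_present = logistic(0.25) == logistic(0.75)  # both equal r * 0.1875
```

Looking only at the present value, there is no way to tell which history produced it; the information distinguishing the two starting states has been destroyed.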

To conclude this article, I would like to point out that, while these three systems are among the most obvious and important examples of emergent order, emergent order operates on a wide range of levels: Atomic, Molecular, Chemical, Human, Markets. The levels of complexity are very different in these systems, yet the same processes are operating at all levels! So the processes of emergent order are complexity-level-independent. If you recall from the Fractal section, fractals have this property of similarity on all scales too. I tend to view the universe now through my fractal glasses.

Acknowledgments
The author acknowledges the invaluable discussions with F. Bennett on the contents of this paper, especially on the philosophical issues.

Sources

Abraham, R.H., Shaw, C.D., Dynamics: The Geometry of Behavior, Part One: Periodic Behavior, Aerial Press, Santa Cruz, 1984.

Abraham, R.H., Shaw, C.D., Dynamics: The Geometry of Behavior, Part Two: Chaotic Behavior, Aerial Press, Santa Cruz, 1984.

Barrow, J.D., Tipler, F.J., The Anthropic Cosmological Principle, Oxford University Press, Oxford, 1988.

Campbell, D., Crutchfield, J., Farmer, D., Jen, E., (1985), Experimental Mathematics: The Role of Computation in Nonlinear Science, Communications of the ACM, 28, p. 374-384.

Devaney, R.L., An Introduction to Chaotic Dynamical Systems, Addison-Wesley, 1987.

Eckmann, J.-P., Ruelle, D., (1985), Ergodic Theory of Chaos and Strange Attractors, Rev. Mod. Phys., 57, p. 617-655.

Ehrlich, P.R., Ehrlich, A.H., Population, Resources, and Environment, Freeman, San Francisco, 1970.

Farmer, J.D., Ott, E., Yorke, J.A., (1983), The Dimension of Chaotic Attractors, Physica, 7D, p. 153-180.

Friedman, M., Friedman, R., Free to Choose, Harcourt Brace Jovanovich, NY, 1980.

Froehling, H., Crutchfield, J.P., Farmer, D., Packard, N.H., Shaw, R., (1981), On Determining the Dimension of Chaotic Flows, Physica, 3D, p. 605-617.

Hayek, F., Unemployment and Monetary Policy, Cato Institute, San Francisco, 1979.

Hofstadter, D.R., Metamagical Themas: Strange Attractors: Mathematical Patterns Delicately Poised Between Order and Chaos, Scientific American, November 1981, p. 22-43.

Hogan, J., Voyage from Yesteryear, Del Rey, 1982.

Gleick, J., Chaos: Making A New Science, Viking Press, 1987.

Minsky, M., The Society of Mind, Simon and Schuster, 1985.

Nozick, R., Anarchy, State, and Utopia, Blackwell, 1974.

Packard, N.H., Crutchfield, J.P., Farmer, J.D., Shaw, R.S., (1980), Geometry from a Time Series, Physical Review Letters, 47, p. 712-716.

Poundstone, W., The Recursive Universe, Contemporary Books, Inc., 1985.

Roux, J.-C., Simoyi, R.H., Swinney, H.L., (1983), Observation of a Strange Attractor, Physica, 8D, p. 257-266.

Shaw, R., (1981), Strange Attractors, Chaotic Behavior, and Information Flow, Z. Naturforsch., 36a, p. 80-112.

Shaw, R., The Dripping Faucet as a Model Chaotic System, Aerial Press, Inc., Santa Cruz, 1984.

Smith, A., The Wealth of Nations, 1776.

Sparrow, C., The Lorenz Equations: Bifurcations, Chaos, and Strange Attractors, Springer-Verlag, 1982.

Wright, R., Did the Universe Just Happen?, Atlantic Monthly, April 1988, p. 29-44.

Copyright Notice:
I notice that I have the will and the ability to copy this paper. But what about the copy in your hands? If you have the will and the ability to copy it, that's great. Willful and able individuals are the most cost-effective way for this paper to be distributed.

