
Deterministic Prediction and Chaos

in Squid Axon Response


A. Mees(1), K. Aihara(2), M. Adachi(2), K. Judd(2), T. Ikeguchi(3)
and G. Matsumoto(4)
(1)Mathematics Department, The University of Western Australia, Nedlands,
Perth 6009
(2)Department of Electronic Engineering, Tokyo Denki University, 2-2 Kanda,
Chiyoda, Tokyo 101
(3)Department of Applied Electronics, Science University of Tokyo, Yamazaki,
Noda 278
(4)Electrotechnical Laboratory, 1-1-4 Umezono, Tsukuba 305, Japan

Abstract
We make deterministic predictive models of apparently complex squid axon
response to periodic stimuli. The result provides evidence that the response
is chaotic (and therefore partially predictable) and implies the possibility of
identifying deterministic chaos in other kinds of noisy data even when explicit
models are not available.

Deterministic chaos is often characterised by its long-term unpredictability. Short-term prediction may, however, be possible if a good enough model
is available, together with sufficiently accurate measurements. Recent work
on building mathematical models directly from data[1-7] has made it possible
to detect deterministic structure in some time series data which appear at
first sight to be relatively unpredictable.
Since biological systems are essentially nonlinear and nonequilibrium[8-10], it is natural to expect the existence of deterministic chaos in such systems.
Various studies have been directed toward discovering and understanding
chaos in biological systems such as brains and hearts[11-13]. For example, it
has been experimentally confirmed that when squid giant axons are forced periodically, the nerve membranes fire irregularly under some circumstances[14-17]. Fig.1 shows a time series in which the membrane potential is stroboscopically sampled at the leading edge of each stimulating pulse. No clear
regularity is apparent. The spectrum is shown in Fig.2: in spite of some
peaks, it is not indicative of a periodic signal, or of a simple superposition of
periodic signals. It does not exclude the possibility that this axon response
could be stochastic.
Because of non-stationarity due to axon fatigue, the data, which are in
effect from a Poincaré section[18-21], are not readily available in sufficient
quantity to allow reliable calculation of such statistics as fractal dimension.
Estimation of the latter is, in any case, a delicate procedure, and even a
reliable estimate with confidence intervals[22] will not necessarily distinguish
deterministic from stochastic response.
If an explanation for the system's behaviour is proposed (for example,
in the form of a dynamical system), it can be tested by requiring it to make
predictions and then validating or falsifying them by experiment. The present
investigation adopted this approach. The Hodgkin-Huxley system[23] is the
obvious candidate, and is an excellent mathematical model even for chaotic
response[14,18-21], although it does not fit squid axon response well under
certain conditions[24]. However, we wanted to have a method that would
also work in other cases where no generally accepted model is available.
We chose instead to create an approximate model system directly from
the data. The model is a function (usually defined via a computer algorithm)
which estimates the value of the membrane potential at time t + k, given its
values at times 0, 1, ... , t. Several methods are available[1-7]; we selected the
tesselation[5,7] and neural network[3,25] approximations.

Both methods assume that the data has been embedded[26]. Figure 3(a)
shows an embedding in two dimensions; that is, a plot of x_{t+1} against x_t. This
is strong evidence that the data is not stochastic; the points lie almost on a
one-dimensional curve, and certainly do not densely fill an area of the plane
as a simple random process would. As the time series data seem to be well
embedded in the two-dimensional space, we used an embedding dimension
of 2 and a lag of 1 in the subsequent analysis.
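The delay embedding used here is simple enough to sketch in code (a minimal illustration in Python, our choice of language; the function name and the stand-in test series are ours, not from the paper):

```python
import numpy as np

def embed(x, dim=2, lag=1):
    """Delay-embed a scalar time series into dim-dimensional vectors
    (x[t], x[t+lag], ..., x[t+(dim-1)*lag]), one vector per row."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

# With dim=2 and lag=1 each row is the pair (x_t, x_{t+1})
# plotted in Fig.3(a).
x = np.sin(0.5 * np.arange(100))   # stand-in for the sampled potentials
points = embed(x, dim=2, lag=1)    # shape (99, 2)
```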
The time series data of the squid axon response were divided into first
and second halves for learning and testing, respectively. In deterministic prediction via tesselation[5,7], the first 250 points x_1, x_2, ..., x_{250} from our 500
point data set were embedded in the two-dimensional space and tesselated
as the starting model for prediction. For each successive time t > 250, data
up to and including x_t were used to make predictions x̂_{t+k} of the response
x_{t+k} for k = 1, 2, 3, 4, where k is the prediction time.
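The tesselation predictor of refs [5,7] interpolates over a triangulation of the embedded points; we do not reproduce it here, but the growing-model protocol can be illustrated with a cruder nearest-neighbour stand-in (our sketch, not the authors' algorithm):

```python
import numpy as np

def nn_predict(x, t, k, dim=2, lag=1):
    """Predict x[t+k] from x[0..t] only: find the past embedded point
    closest to the current one and read off its k-step future.
    A nearest-neighbour stand-in for the tesselation predictor."""
    current = np.array([x[t - (dim - 1 - i) * lag] for i in range(dim)])
    best, pred = np.inf, None
    # Library: embedded points whose k-step future is already observed.
    for s in range((dim - 1) * lag, t - k + 1):
        past = np.array([x[s - (dim - 1 - i) * lag] for i in range(dim)])
        d = np.linalg.norm(past - current)
        if d < best:
            best, pred = d, x[s + k]
    return pred
```

As in the paper's protocol, the library grows with t, so each prediction uses all data up to and including x_t.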
In deterministic prediction via neural networks, the first half of the data
was used for learning with the back-propagation algorithm[25]. The structure
of the neural network was feedforward with three layers, having respectively
2 input, 9 hidden and 2 output neurons. The input signals to the first layer
and the teaching signals to the last layer were (x_t, x_{t+1}) and (x_{t+1}, x_{t+2}),
respectively. For t > 250, the output of the second neuron in the last layer
gives one-step predictions. By feeding back the output signals to the input
layer k times successively, we can also get (k+1)-step predictions. With either
prediction method we can allow free-running to see whether an apparent
strange attractor is produced. A possible strange attractor produced by the
neural network model is shown in Fig.3(b).
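Feeding outputs back to obtain multi-step predictions or a free run is model-independent; a generic sketch (hypothetical names; `one_step` stands for any fitted one-step model, such as the trained 2-9-2 network):

```python
def iterate_prediction(one_step, x0, x1, k):
    """k-step-ahead prediction obtained by feeding one-step outputs
    back as inputs k times, as with the back-propagation network."""
    a, b = x0, x1
    for _ in range(k):
        a, b = b, one_step(a, b)
    return b

def free_run(one_step, x0, x1, n):
    """Let the model run on its own output for n steps; plotting the
    orbit reveals any apparent strange attractor, as in Fig.3(b)."""
    a, b, orbit = x0, x1, []
    for _ in range(n):
        a, b = b, one_step(a, b)
        orbit.append(b)
    return orbit
```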
The result of the deterministic prediction is shown in Fig.4 where the
ordinate is the correlation coefficient between actual and predicted values
and the abscissa is the number of steps ahead being predicted. The solid
and dotted lines in Fig.4(a) correspond to the tesselation predictions and
the neural network predictions, respectively. The performance of the back-propagation network depends upon the initial choice of its connection weights
and thresholds as demonstrated in Fig.4(b); the dotted line in Fig.4(a) shows
the average performance over the five networks in Fig.4(b).
The one-step prediction x̂_{t+1} was excellent in both methods, with correlation better than 92% between actual and predicted values. Correlations
decreased as prediction time k increased, as shown in Figure 4. As argued by
Sugihara and May[6], this is evidence for chaotic dynamics: chaos is characterized by sensitivity to initial conditions, and the errors both from modelling
and from measurement will cause divergence between the predicted and the
true values as prediction time increases.
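The ordinate of Fig.4 is simply the sample correlation coefficient between actual and predicted values at each k; for completeness (a trivial sketch, our notation):

```python
import numpy as np

def prediction_correlation(actual, predicted):
    """Correlation coefficient between actual and predicted values,
    the quantity plotted on the ordinate of Fig.4."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return np.corrcoef(a, p)[0, 1]
```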
The Lyapunov spectrum[27] is a useful index of sensitive dependence on
initial conditions. The largest exponent λ_1 of the Lyapunov spectrum can be
estimated from relationships between the prediction time T and statistical
quantities such as the root-mean-square error E[2,28] and the correlation
coefficient r[29]. Using the relationship between T and E for iterated prediction given by Casdagli et al.[28], we obtain estimated values for λ_1 of
0.35 for the tesselation and 0.36 for the averaged neural network model of
Fig.4(a). Using Wales's result (equation (7) in Ref.[29]) relating r to λ_1 and
T, the corresponding values of λ_1 are 0.35 and 0.37 for the tesselation and
the neural network, respectively. Moreover, the value of 0.39 is obtained by
the Sano-Sawada algorithm[30]. These similar values give credence to the
existence of a positive Lyapunov exponent in the dynamics of the squid axon
response.
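The common idea behind these estimates is that prediction error grows roughly like E(T) ≈ E(0)exp(λ_1 T), so λ_1 appears as the slope of log E against T; the Casdagli et al.[28] and Wales[29] relationships refine this. A minimal version (our sketch, not either paper's exact formula):

```python
import numpy as np

def lyapunov_from_errors(T, E):
    """Estimate the largest Lyapunov exponent lambda_1 as the slope of
    log E(T) against prediction time T, assuming exponential error
    growth E(T) ~ E(0) * exp(lambda_1 * T)."""
    slope, _ = np.polyfit(np.asarray(T, float), np.log(np.asarray(E, float)), 1)
    return slope
```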
In all our tests there was good agreement between the two quite different
modelling methods, so it seems unlikely that the results are a lucky accident. An alternative approach, which constructs a simple explicit model
from understanding of refractoriness and other neural characteristics, also
gives similar results[21].

Acknowledgements
AIM thanks Tokyo Denki University for hospitality and JSPS and the Australian Academy of Sciences for a travel grant. This research was partially
funded by ARC grant A69031387 and by a Grant-in-Aid (02255107) for Scientific Research on Priority Areas from the Ministry of Education, Science
and Culture of Japan. A. I. Mees thanks the Santa Fe Institute and the Los Alamos National Laboratory.

References
[1] J.P. Crutchfield & B.S. McNamara, Complex Systems 1, 417-452 (1987).
[2] J.D. Farmer & J.J. Sidorowich, Phys. Rev. Letters 59,845-848 (1987).

[3] A. Lapedes & R. Farber, Nonlinear signal processing using neural networks: prediction and system modelling (Los Alamos National Laboratory, 1987).
[4] M. Casdagli, Physica D 35, 335-356 (1989).
[5] A.I. Mees, in Dynamics of Complex Intel'connected Biological Systems
(eds. T. Vincent, A.I. Mees & L.S. Jennings) 104-124 (Birkhauser,
Boston, 1990).
[6] G. Sugihara & R.M. May, Nature 344, 734-741 (1990).

[7] A.I. Mees, International Journal of Bifurcation and Chaos 1, in press
(1991).
[8] P. Glansdorff & I. Prigogine, Thermodynamic theory of structure, stability and fluctuations (Wiley-Interscience, London, 1971).

[9] G. Nicolis & I. Prigogine, Self-organization in nonequilibrium systems: from dissipative structures to order through fluctuations (Wiley-Interscience, London, 1977).


[10] H. Haken, Synergetics - an Introduction: Nonequilibrium Phase Transitions and Self-Organisation in Physics, Chemistry and Biology (Springer-Verlag, Berlin, 1977).


[11] A.V. Holden, Chaos (Manchester University Press, Manchester, 1986).
[12] H. Degn, A.V. Holden & L.F. Olsen, Chaos in Biological Systems
(Plenum Press, New York, 1987).
[13] B.J. West, Fractal Physiology and Chaos in Medicine (World Scientific,
Singapore, 1990).
[14] K. Aihara & G. Matsumoto, in Chaos (ed. A.V. Holden) 257-269
(Manchester University Press, Manchester, 1986).
[15] K. Aihara, T. Numajiri, G. Matsumoto & M. Kotani, Physics Letters A
116, 313-317 (1986).
[16] G. Matsumoto et al., Physics Letters A 123, 162-166 (1987).

[17] N. Takahashi et al., Physica D 43, (1990).


[18] K. Aihara, G. Matsumoto & Y. Ikegaya, J. Theor. Biol. 109, 249-269
(1984).
[19] K. Aihara & G. Matsumoto, in Chaos in Biological Systems (eds. H.
Degn, A.V. Holden & L.F. Olsen) 121-131 (Plenum Press, New York,
1987).
[20] K. Aihara, in Bifurcation phenomena in nonlinear systems and theory
of dynamical systems (ed. H. Kawakami) 143-161 (World Scientific,
Singapore, 1990).
[21] K. Judd, Y. Hanyu, N. Takahashi & G. Matsumoto, to be submitted to
J. Math. Biol. (1992).
[22] K. Judd & A.I. Mees, International Journal of Bifurcation and Chaos
1, in press (1991).
[23] A.L. Hodgkin & A.F. Huxley, J. Physiol. (London) 117, 500-544 (1952).
[24] Y. Hanyu & G. Matsumoto, Physica D , in press (1991).
[25] D.E. Rumelhart, G.E. Hinton & R.J. Williams, Nature 323, 533-536
(1986).
[26] F. Takens, in Dynamical Systems and Turbulence (eds. D.A. Rand &
L.S. Young) 365-381 (Springer, Berlin, 1981).
[27] I. Shimada & T. Nagashima, Progress in Theoretical Physics 61, 1605
(1979).
[28] M. Casdagli, D. Des Jardins, S. Eubank, J.D. Farmer, J. Gibson, N.
Hunter & J. Theiler, Los Alamos National Laboratory, Report LA-UR-91-1637 (1991).
[29] D.J. Wales, Nature 350, 485-488 (1991).
[30] M. Sano & Y. Sawada, Phys. Rev. Lett. 55, 1082-1085 (1985).

Figure Captions
Fig.1
Squid axon membrane potentials stroboscopically sampled at the leading
edge of each stimulating pulse. The squid axon in the resting state was stimulated by periodic pulses with amplitude 1.19 times the threshold current,
pulse width 0.3 msec and pulse interval 3.8 msec. The temperature was
14.0°C. The number of data points is 500. The data are normalized between
-0.5 and 0.5.

Fig.2
Power spectrum of the squid axon response time series in Fig.l, computed
with a Hanning window of width 64. The dashed lines are 95% confidence
intervals.

Fig.3
(a) An embedding in 2 dimensions of the first half of the squid data in Fig.1.
(b) A possible strange attractor obtained by allowing a back-propagation
network modelling the squid data to free-run (that is, to simulate the system
for many time steps).

Fig.4
Correlation coefficient between actual and predicted values as a function of
increasing prediction time. (a) The solid and dotted lines correspond to
predictions using tesselation and using a back-propagation neural network,
respectively. The dotted line is the average over five networks shown in (b).
The initial values of connection weights and thresholds in each network of
Fig.4(b) were determined randomly from a uniform distribution between -0.3
and 0.3. The neural nets were trained for a fixed time; on termination, the
root mean square error between output and teaching signals in the last layer
was in all cases less than 0.039.

[Fig.1: plot of the normalized sampled membrane potential (ordinate, -0.5 to 0.5) against Time (abscissa, with ticks at 200 and 400).]

[Fig.3(a),(b): plots of x(t+1) against x(t), both axes ranging from -0.500000 to 0.500000.]

[Fig.4(a),(b): correlation coefficient (percent, ordinate) against Prediction Time, 1 to 4 (abscissa); legend: Tesselation, BP.]
