
Neural Networks, Vol. 2, pp. 375-385, 1989. 0893-6080/89 $3.00 + .00
Printed in the USA. All rights reserved. Copyright (c) 1989 Pergamon Press plc
ORIGINAL CONTRIBUTION

Adaptive Neural Oscillator Using Continuous-Time Back-Propagation Learning

KENJI DOYA AND SHUJI YOSHIZAWA


Faculty of Engineering, University of Tokyo

(Received 5 September 1988; revised and accepted 18 January 1989)

Abstract-A neural network model of temporal pattern memory in animal motor systems is proposed. First,
the network receives an external oscillatory input with some desired wave form, then, after sufficient learning,
the network autonomously oscillates in the previously given wave form. The network has three layers. Each of
the units is a continuous-time continuous-output model neuron, which is not oscillatory by itself. The wave form
of the autonomous oscillation is memorized in the connecting weights between the units. The back-propagation
learning algorithm is modified and applied to the continuous-time recurrent networks. The abilities of the network
and the learning algorithm are examined by computer simulations. Studies on such artificial neural oscillators
will be helpful in understanding the roles of complex synaptic connections between the motorneurons and the
interneurons observed in the motor nervous systems of animals.

Keywords-Adaptive neural oscillator, Continuous-time back-propagation, Central pattern generator, Temporal pattern memory, Recurrent neural network, Periodic attractor.

1. INTRODUCTION

When we learn an unexperienced motion, we have to be conscious of the trajectory of the movement of each part of our bodies. However, after repeated learning, we can perform the motion without being conscious of the detailed motions of our bodies. This suggests that there are, in the hierarchy of our motor nervous systems, some neural networks which undertake to generate the motor command signals which were repeatedly given by the higher centers. But how are the various patterns of motion stored in our motor nervous systems?

Physiological studies of some vertebrates and invertebrates have revealed that their rhythmical patterns of motion are generated by neural oscillatory circuits, the central pattern generators, in their spinal cords or ganglia (Grillner, 1975; Friesen & Stent, 1977; Kristan et al., 1988; Miller & Scott, 1977; Selverston, 1988a, 1988b; Stent et al., 1978). For the higher animals, it is not well understood how the temporal patterns of their skilled motions are stored and regenerated in their nervous systems, but mechanisms similar to those of the lower animals are expected to exist in the higher animals.

In this paper we propose a neural network model of temporal pattern memory in animal motor systems, which is called the adaptive neural oscillator. The network has two operation modes. First, in the memorizing mode, the network receives an external periodic input with some desired wave form. Then, in the regenerating mode, the network cuts off the external input and begins autonomous oscillation with the previously given wave form (Figure 1). In the following sections we show the structure and the learning algorithm of the network model. Typical abilities of the network model related to temporal pattern memory are examined by computer simulations.

FIGURE 1. A model of the motor pattern learning system (forced oscillation in the memorizing mode, autonomous oscillation in the regenerating mode).

Requests for reprints should be sent to Kenji Doya, Department of Mathematical Engineering and Information Physics, Faculty of Engineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113, Japan.

2. DYNAMICS OF THE ADAPTIVE NEURAL OSCILLATOR

The structure of the network is shown in Figure 2. The network consists of one input unit (indexed by 0), n hidden intermediate units (indexed by 1, ..., n), and one output unit (indexed by n + 1). The input unit works as an input selector. Each of the hidden and the output units is a continuous-time continuous-output model neuron, which is a cascade of a weighted summator, a time lag of first order, and a sigmoid output function. The hidden units are laterally connected with random asymmetric connecting weights w_ij which can take either positive or negative values.

FIGURE 2. The structure of the adaptive neural oscillator.

When the external input signal is fed to the hidden units from the input unit, after some transient time, most of the hidden units fall into forced oscillation with the same period as that of the input signal, but they have various relative phases and wave forms by the effect of the randomness of the lateral connecting weights.

The network has two operating modes: the memorizing mode and the regenerating mode.

In the memorizing mode, each hidden unit is forced to oscillate by the external input, and learning is executed so as to make the wave form of the output as close as possible to that of the input.

In the regenerating mode, the output signal is fed back to the input. If the learning has been sufficiently accomplished, an oscillation in some wave form similar to the previously given one may arise.

The dynamics of the network is defined as follows:

y_0(t) = d(t)        (memorizing mode),
y_0(t) = y_{n+1}(t)  (regenerating mode),    (1)

τ dx_i(t)/dt = -x_i(t) + Σ_{j=0}^{n} w_ij y_j(t)    (i = 1, ..., n),    (2)

y_i(t) = g(x_i(t))    (i = 1, ..., n),    (3)

τ dx_{n+1}(t)/dt = -x_{n+1}(t) + Σ_{j=1}^{n} w_{n+1,j} y_j(t),    (4)

y_{n+1}(t) = g(x_{n+1}(t)),    (5)

g(x) = (1 - e^{-x}) / (1 + e^{-x}) = 2 / (1 + e^{-x}) - 1.    (6)

Here, x_i(t) and y_i(t) represent the inner state (membrane potential) and the output level (firing rate) of the ith unit, and d(t) is the external input. w_ij is the connecting weight from the jth unit to the ith unit. There are supposed to be no self-connections (w_ii = 0). τ is the decay time constant of the hidden and the output units. g(x) is a biased sigmoid function, whose value satisfies g(0) = 0 and g(±∞) = ±1.

If we assume that the learning has been completed ideally in the memorizing mode, that is, the wave form of the output unit y_{n+1}(t) has become exactly equal to that of the external input d(t), switching the input from d(t) to y_{n+1}(t) makes no difference in the wave forms of the hidden units, and consequently in that of the output unit. This means that the wave form of the external signal d(t) has become a periodic solution of the closed loop dynamics in the regenerating mode.

In general, however, when the learning has converged, the output y_{n+1}(t) may not be exactly equal to the input d(t). In this case, d(t) is not the periodic solution of the closed loop network. Moreover, even if the learning is complete and there exists a periodic solution, it may not be stable as a solution of the autonomous system, and the oscillation with that wave form may not be sustained.

To examine the existence and stability of the periodic oscillation in this network, we should investigate the functional mapping F from y_0(t) to y_{n+1}(t). The fixed points of the mapping F correspond to the solutions of the closed loop network. Since the mapping is very complicated, it seems to be almost impossible to elucidate its properties analytically. However, if y_{n+1}(t) is sufficiently close to the periodic input d(t) in the memorizing mode and if F is a contraction mapping, there is a stable fixed point of F near d(t). This corresponds to a periodic solution in the regenerating mode. Since each unit has a saturating output function, F is expected to be a contraction mapping under some conditions on the wave form d(t) and the connecting weights. Thus, at least in some favorable cases, the network can be organized to have a stable periodic solution which is not far from the external input d(t).

3. CONTINUOUS-TIME BACK-PROPAGATION LEARNING

In the primitive model of the adaptive neural oscillator, adaptive filter learning (Fujita, 1982; Widrow & Stearns, 1985; Widrow & Winter, 1988) was performed only in the output unit. The input-to-hidden and the hidden-to-hidden connecting weights were fixed with random values. These networks could memorize and regenerate various wave forms, but their abilities were greatly limited by the fixed random connecting weights.

To make the connections to the hidden units also adaptive, we modified the back-propagation learning algorithm (Rumelhart, Hinton, & Williams, 1986) and applied it to the continuous-time recurrent networks.

In the continuous-time neural network, the present output of a unit is affected by the preceding input wave forms, not only by the present values of the inputs. Therefore, we must deal with the derivative relations between functions of time instead of the relations between the states of the units at each time step.

In general, when a mapping F : x(t) → y(t) is given, the derivative of F at x(t) is defined as a linear operator DF(x(t)) such that, for any small function ε(t),

F(x(t) + ε(t)) = F(x(t)) + DF(x(t))ε(t) + o(ε(t)),    (7)

where o(ε) is a smooth function satisfying

lim_{||ε|| → 0} ||o(ε)|| / ||ε|| = 0.    (8)

For the composed map G ∘ F of

y(t) = F(x(t)) and z(t) = G(y(t)),    (9)

we have the chain rule

D(G ∘ F)(x(t)) = DG(y(t)) DF(x(t)).    (10)

By analogy with the formulation of the conventional back-propagation algorithm, we will denote it

∂z/∂x = (∂z/∂y)(∂y/∂x).    (11)

Now we rewrite the dynamical equations (2) to (5) by regarding them as a serial composition of mappings between functions. For the hidden units (i = 1, ..., n),

u_i(t) = Σ_{j=0}^{n} w_ij y_j(t),    (12)

x_i(t) = (1 + τ d/dt)^{-1} u_i(t),    (13)

y_i(t) = g(x_i(t)),    (14)

and for the output unit,

u_{n+1}(t) = Σ_{j=1}^{n} w_{n+1,j} y_j(t),    (15)

x_{n+1}(t) = (1 + τ d/dt)^{-1} u_{n+1}(t),    (16)

y_{n+1}(t) = g(x_{n+1}(t)),    (17)

where u_i(t) is the total input to the ith unit and (1 + τ d/dt)^{-1} is an operator of first order time lag. x(t) = (1 + τ d/dt)^{-1} u(t) is given by the solution of the equation

τ dx(t)/dt = -x(t) + u(t).    (18)

Its derivative is D((1 + τ d/dt)^{-1}) = (1 + τ d/dt)^{-1}, since it is a linear operator.

The goal of the learning is to make the output y_{n+1}(t) as close as possible to the external input d(t). Thus, we define the error function

E(t) = (1/2)(y_{n+1}(t) - d(t))².    (19)

Then, the partial derivatives of E(t) with respect to the weights w_{n+1,i} (i = 1, ..., n) are calculated as follows:

∂E/∂y_{n+1} = y_{n+1}(t) - d(t),    (20)

∂E/∂w_{n+1,i} = (∂E/∂y_{n+1})(∂y_{n+1}/∂x_{n+1})(∂x_{n+1}/∂u_{n+1})(∂u_{n+1}/∂w_{n+1,i})
            = (y_{n+1}(t) - d(t)) ((1 - y_{n+1}(t)²)/2) (1 + τ d/dt)^{-1} y_i(t).    (21)

To calculate the partial derivatives of E(t) with respect to the weights to the hidden units, we assumed that the contribution of y_i(t) to E(t) is made mainly through the direct connection w_{n+1,i}. Thus, considering only the direct error propagation from the output unit to the hidden units, we have the following partial derivatives (see Appendix A):

∂E/∂y_i = (∂E/∂y_{n+1})(∂y_{n+1}/∂x_{n+1})(∂x_{n+1}/∂u_{n+1})(∂u_{n+1}/∂y_i)
        = (y_{n+1}(t) - d(t)) ((1 - y_{n+1}(t)²)/2) (1 + τ d/dt)^{-1} w_{n+1,i},    (22)

∂E/∂w_ij = (∂E/∂y_i)(∂y_i/∂x_i)(∂x_i/∂u_i)(∂u_i/∂w_ij)
         = (∂E/∂y_i) ((1 - y_i(t)²)/2) (1 + τ d/dt)^{-1} y_j(t).    (23)
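As a concrete reading of equations (19)-(23), the sketch below, which is ours and not from the paper, computes the approximated gradient signals online; the lag operator (1 + τ d/dt)^{-1} applied to a signal is realized by integrating the first-order lag (18) alongside the network. The helper names (g_prime, lag_step, approx_gradients) and the Euler discretization are assumptions.

import numpy as np

def g_prime(y):
    # derivative of the biased sigmoid written through its output: g'(x) = (1 - y^2)/2
    return 0.5 * (1.0 - y * y)

def lag_step(x, u, dt, tau=1.0):
    # one Euler step of eq. (18), tau dx/dt = -x + u,
    # so that x tracks (1 + tau d/dt)^{-1} u
    return x + dt / tau * (-x + u)

def approx_gradients(y, y_lag, d_t, w, n):
    """Approximated partial derivatives (21)-(23) at one time instant.
    y: current outputs of units 0..n+1, y_lag: lag-filtered copy of y,
    d_t: desired output d(t), w: weight matrix as in the dynamics sketch above."""
    delta = (y[n + 1] - d_t) * g_prime(y[n + 1])        # eq. (20) times g'(x_{n+1})
    grad_out = delta * y_lag[1:n + 1]                   # eq. (21), i = 1..n
    dE_dy = delta * w[n + 1, 1:n + 1]                   # eq. (22)
    grad_hid = np.outer(dE_dy * g_prime(y[1:n + 1]),    # eq. (23), i = 1..n,
                        y_lag[0:n + 1])                 #            j = 0..n
    return grad_out, grad_hid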

From these partial derivatives, we can derive the following learning equations:

dw_{n+1,i}(t)/dt = -η1 ∂E/∂w_{n+1,i}    (i = 1, ..., n),    (24)

dw_ij(t)/dt = -η2 ∂E/∂w_ij    (i = 1, ..., n; j = 0, 1, ..., n),    (25)

where η1 and η2 are sufficiently small positive constants.

4. COMPUTER SIMULATIONS

We examined the abilities of the network by computer simulations. In the following simulations, n is the number of the hidden units, T is the period of the external input d(t), and t_m is the time of memorizing. The initial connecting weights were given randomly with a uniform distribution between -δ and +δ. The decay time τ = 1.0 for all units.

Since the output wave forms of this network are smooth and bounded between -1 and +1, we used the desired output (external input) given by the form d(t) = g(Σ_k (a_k sin 2πt/T_k + b_k cos 2πt/T_k)).

Responses of the Units in the Memorizing Mode

In the memorizing mode, the response of the output unit to the external input d(t) changes with the change of the hidden-to-output connecting weights. The responses of the hidden units also change gradually with the change of the weights of the input-to-hidden and the hidden-to-hidden connections by the back-propagation learning.

One example of the changing responses of the hidden and the output units is shown in Figure 3. There were four hidden units. y1, ..., y4 are the responses of the hidden units and y5 is the response of the output unit. The input signal d(t), which is also the desired output, is shown by the dotted curve. The wave form of y5 gradually went closer to d(t) with the change of the connecting weights. With insufficient memorizing time (t_m = 50), when the network was set to the regenerating mode, the oscillation could not be sustained. With memorizing time t_m = 100, y5(t) was sufficiently close to d(t) and the oscillation could be sustained in the regenerating mode. After additional memorizing (t_m = 150), the period of the autonomous oscillation went closer to that of the memorized input.

FIGURE 3. Responses of the hidden and the output units. The number of the hidden units n = 4; y5 is the response of the output unit, and the dotted curve is the external input d(t), which is also the desired output. The random initial connecting weights were uniform in [-1.0, +1.0], and the learning coefficients η1 = η2 = 2.0.

Learning in the Hidden Layer

We examined the significance of the learning in the hidden layer by comparing the wave forms of the oscillations with and without the back-propagation learning, starting from the same initial connecting weights. One of the results is shown in Figure 4. The wave forms in (a) to (d) were made by the back-propagation learning (η1 = η2 = 0.5), while those in (f) to (h) were made without the learning in the hidden layer (η1 = 0.5, η2 = 0).

By the effect of the continuous-time back-propagation learning, the wave forms of the hidden units were changed in the memorizing mode so as to make the output y9(t) close to the complex wave form of the input d(t) = g(2.0 sin 2πt/5.0 + 2.0 cos 2πt/2.5). On the other hand, without the learning in the hidden layer, the output could not be made close to d(t) and the regenerated wave form contained only the lower frequency component of d(t).

Number of the Hidden Units

In Figure 5, changes in time of the mean square error are shown in relation to the number of hidden units n and the period T of the external input d(t). Ten simulations were executed for each pair of n and T, starting from different random initial connections. In general, the error decreases faster with the increase in the number of the hidden units. It takes a longer time to memorize signals with short periods (T = 1.0, 2.0). In the networks with two or four hidden units, the errors did not converge to zero with the input period T = 50.

FIGURE 4. The effect of learning in the hidden units. (a) Memorizing mode, t_m = 0 to 10 with η1 = η2 = 0.5; (b) memorizing mode, t_m = 500 to 510 with η1 = η2 = 0.5; (c) memorizing mode, t_m = 1000 to 1010 with η1 = η2 = 0.5; (d) regenerating mode, t_m = 1000 with η1 = η2 = 0.5; (e) regenerating mode, t_m = 0; (f) memorizing mode, t_m = 500 with η1 = 0.5, η2 = 0; (g) memorizing mode, t_m = 1000 with η1 = 0.5, η2 = 0; (h) regenerating mode, t_m = 1000 with η1 = 0.5, η2 = 0. t_m is the time of memorizing. The number of the hidden units n = 8. The dotted curve is the external input d(t) = g(2.0 sin 2πt/5.0 + 2.0 cos 2πt/2.5), which is also the desired output. The decay time τ = 1.0 for all units. The random initial connecting weights w_ij were uniform in [-3.0, +3.0].

FIGURE 5. Changes in time of the mean square error for n = 2, 4, and 8 hidden units and input periods T = 1.0, 2.0, 5.0, 10.0, 20.0, and 50.0 (error plotted against time for each pair of n and T).

FIGURE 6. Wave forms regenerated with two hidden units. (a) T = 1.0, t_m = 2500; (b) T = 2.0, t_m = 1500; (c) T = 5.0, t_m = 1500; (d) T = 10.0, t_m = 400; (e) T = 20.0, t_m = 400. The external input d(t) = g(2.0 sin 2πt/T). The learning coefficients η1 = η2 = 1.0.

Examples of Regenerated Wave Forms

Variations of regenerated wave forms are shown in Figures 6, 7, and 8. Each of the wave forms shown in Figures 6, 7, and 8(a) is regenerated with the connecting weights which made the least mean square error in the ten trials of the simulations in Figure 5. Some of these weight matrices are shown in Appendix B. The wave forms with short periods (T = 1.0, 2.0) tend to be lengthened in regeneration, and the opposite holds for long periods (T = 20, 50).

FIGURE 7. Wave forms regenerated with four hidden units. (a) T = 1.0, t_m = 2500; (b) T = 2.0, t_m = 600; (c) T = 5.0, t_m = 600; (d) T = 10.0, t_m = 400; (e) T = 20.0, t_m = 400. The external input d(t) = g(2.0 sin 2πt/T). The learning coefficients η1 = η2 = 1.0.

FIGURE 8. Wave forms regenerated with eight hidden units. (a) d(t) = g(2.0 sin 2πt/T), η1 = η2 = 1.0, t_m = 400; (b), (c) d(t) = g(2.0 sin 2πt/5.0 + 2.0 cos 2πt/2.5), η1 = η2 = 0.5, t_m = 2000; (d) d(t) = g(2.0 sin 2πt/5.0 + 2.0 sin 2πt/2.5), η1 = η2 = 0.5, t_m = 2000.

In some cases, one network has two or more stable periodic solutions. In the case of Figures 8(b) and (c), two symmetric wave forms were regenerated with different initial states x_i(0) (i = 1, ..., n + 1). In this network, the positive feedback loop between the units 6 and 8 worked like a switching element, and the states of the two units (y6, y8 > 0 or y6, y8 < 0) determined the modes of the autonomous oscillation.

In the case of Figure 8(d), a quasi-periodic autonomous oscillation was observed. The output wave form wandered between the original wave form d(t) and the opposite wave form -d(t) with a period of about 56 unit time.

5. DISCUSSION

Recurrent Connections in the Hidden Layer

In deriving the learning rule for the connecting weights to the hidden units, we have neglected the effect of the indirect connections from a hidden unit to the output unit via other hidden units. Accordingly, the partial derivatives given by (22) are not a good approximation for a hidden unit which is not strongly connected to the output unit but to other hidden units. In other words, the learning algorithm used is not strictly a gradient descent; however, it has successfully decreased the error function, as is shown by the computer simulations.

Back-propagation learning can be applied to allocate the fixed points of recurrent neural networks (Pineda, 1987). A similar formulation can be made to derive the gradient descent learning algorithm for the adaptive neural oscillator network (Appendix A), but we have not tried to use it in simulations because of the complexity of the computation.

If the connections of the network are restricted to be one directional (w_ij = 0 for i ≤ j), we can have a simple form of the gradient descent. But the lack of recurrent inhibitory connections in the hidden layer may spoil the variety of the output wave forms of the hidden units.

Initial Condition Dependence

It is well known that the back-propagation learning process can often be trapped in some local minima, especially when the desired input-output relation is complicated. Whether it falls into a global minimum (the error goes to zero) or not depends on the initial connecting weights of the network, which are usually set to small random numbers.

The continuous-time back-propagation learning algorithm inherits the same problem of initial condition dependence. When the desired wave form d(t) is simple and there are enough hidden units, the error went to zero with any small random initial weights. But in the case of complex waves (e.g., Figure 8(b)), it was often trapped in some local minima and the error did not go to zero. The rate of successful learning depends on the wave form d(t), the number n of the hidden units, the learning parameters η1, η2, and the size δ of the initial random connection. At present, the choice of these parameters can be made only by trial and error.

Existence and Stability of the Periodic Solutions

In the construction of the adaptive neural oscillator network, we have made a hypothesis: if the output y_{n+1}(t) is close enough to the input d(t) in the memorizing mode, then the autonomous dynamical system in the regenerating mode has a stable periodic solution with some wave form similar to d(t). In other words, we have supposed that the functional mapping F : d(t) → y_{n+1}(t) is a contraction mapping, so that it has a stable fixed point near d(t).

This hypothesis was true, in the present simulations, when the wave forms of d(t) were simple. But in some simulations with complex wave forms, even when y_{n+1}(t) was very close to d(t) in the memorizing mode, the wave forms were deformed in the regenerating mode.

Neurobiological Aspects

In this paper, we employed the bipolar output neuron model (-1 < y(t) < +1). If we regard the output y(t) as the impulse frequency of the neuron, it cannot be less than zero. So we must regard y(t) as the deviation of the impulse frequency from the average of each neuron.

On the basis of experiments with monkeys and observations of humans (Luria, 1980), it is supposed that the premotor area of the cerebral cortex participates in the control of skilled motions. It is also proposed that the pathway from the association cortex to the lateral cerebellum is involved in the control of preprogrammed motions (Allen & Tsukahara, 1974). We can suppose that some networks similar to the adaptive neural oscillator are working in those areas, or in some other places in the brain.

It is not probable that back-propagation learning is performed in biological neural networks. We should find some other learning principles without the backward error propagation. It was shown that perceptron-like learning is actually performed in the neural networks of the cerebellum of the rabbit (Ito, Sakurai, & Tongroach, 1982). In the present model, if there are supposed to be sufficiently many hidden units with random lateral connections, the simple perceptron learning in the output unit is sufficient to memorize varied wave forms.

The structures of the central pattern generators of the lobster (Selverston, 1988a) and the leech (Kristan et al., 1988) have been identified through exhaustive experiments of intracellular recording with multiple electrodes. This technique requires a great deal of time and effort. Moreover, it is difficult to apply to the mammals, since their neurons are very small and the networks are much more complicated. But even when we can only know the output wave form of a central pattern generator, there is a possibility of predicting the structure of the network from its output wave form by the simulating experiments of the adaptive neural oscillator.

Application to Robotics

In the animal motor control system, it is known that the rhythm and the wave form of a central pattern generator are modified by the command signals from the higher centers, the synchronizing signals from other central pattern generators, and the sensory feedback signals. It is possible to modify the adaptive neural oscillator to work with such top down, lateral, and bottom up inputs, which would enable us to develop a parallel principle of robotic control systems.

In the adaptive neural oscillator network, continuous-time back-propagation learning is used to make the wave form of the output equal to that of the input. In general, however, it can be used to learn various temporal input-output relations, for example, the inverse dynamics model of robotic manipulators (Kawato, Furukawa, & Suzuki, 1987; Miyamoto, Kawato, Setoyama, & Suzuki, 1988).

6. CONCLUSION

We proposed a neural network model of temporal pattern memory, the adaptive neural oscillator, and a new learning algorithm, continuous-time back-propagation learning. It was shown by computer simulations that the adaptive neural oscillator network can memorize the external inputs with various wave
forms and regenerate them as the wave forms of the autonomous oscillation.

Most of the past studies on the dynamics of recurrently connected neural networks dealt with the steady state attractors (Amari, 1988; Hopfield, 1984). And most of the studies on the periodic attractors of the networks dealt with the problem: what wave form is generated when a connection is given (Matsuoka, 1987)? The learning of the adaptive neural oscillator gives one solution to the inverse problem: what connection is needed to generate a given wave form?

This computational approach will contribute to the understanding of the mechanisms of animal motor nervous systems and the development of parallel control mechanisms of robotic systems.

REFERENCES

Allen, G. I., & Tsukahara, N. (1974). Cerebrocerebellar communication systems. Physiological Reviews, 54, 957-1006.
Amari, S. (1988). Statistical neurodynamics of associative memory. Neural Networks, 1, 63-73.
Friesen, W. O., & Stent, G. S. (1977). Generation of locomotory rhythm by a neural network with recurrent cyclic inhibition. Biological Cybernetics, 28, 27-40.
Fujita, M. (1982). Adaptive filter model of the cerebellum. Biological Cybernetics, 45, 195-206.
Grillner, S. (1975). Locomotion in vertebrates: Central mechanisms and reflex interaction. Physiological Reviews, 55, 247-304.
Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences of the United States of America, 81, 3088-3092.
Ito, M., Sakurai, M., & Tongroach, P. (1982). Climbing fibre induced depression of both mossy fibre responsiveness and glutamate sensitivity of cerebellar Purkinje cells. Journal of Physiology, 324, 113-134.
Kawato, M., Furukawa, K., & Suzuki, R. (1987). A hierarchical neural-network model for control and learning of voluntary movement. Biological Cybernetics, 57, 169-185.
Kristan, W. B., Jr., Wittenberg, G., Nusbaum, M. P., & Stern-Tomlinson, W. (1988). Multifunctional interneurons in behavioral circuits of the medicinal leech. Experientia, 44, 383-389.
Luria, A. R. (1980). Higher cortical functions in man. New York: Basic Books.
Matsuoka, K. (1987). Mechanism of frequency and pattern control of the neural rhythm generator. Biological Cybernetics, 56, 345-353.
Miller, S., & Scott, P. D. (1977). The spinal locomotor generator. Experimental Brain Research, 30, 387-403.
Miyamoto, H., Kawato, M., Setoyama, T., & Suzuki, R. (1988). Feedback-error-learning neural network for trajectory control of a robotic manipulator. Neural Networks, 1, 251-265.
Pineda, F. J. (1987). Generalization of back-propagation to recurrent neural networks. Physical Review Letters, 59, 2229-2232.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323, 533-536.
Selverston, A. I. (1988a). Switching among functional states by means of neuromodulators in the lobster stomatogastric ganglion. Experientia, 44, 376-383.
Selverston, A. I. (1988b). A consideration of invertebrate central pattern generators as computational data bases. Neural Networks, 1, 109-117.
Stent, G. S., Kristan, W. B., Jr., Friesen, W. O., Ort, C. A., Poon, M., & Calabrese, R. L. (1978). Neuronal generation of the leech swimming movement. Science, 200, 1348-1356.
Widrow, B., & Stearns, S. D. (1985). Adaptive signal processing. Englewood Cliffs, NJ: Prentice Hall.
Widrow, B., & Winter, R. (1988). Neural nets for adaptive filtering and adaptive pattern recognition. Computer, 21, 25-39.

APPENDIX A

The back-propagation learning algorithm was generalized to recurrent neural networks by Pineda (1987). His formulation is based on the equilibrium equations at fixed points. A similar derivation can be applied for the mapping between the time functions.

Let g'_i(t) = (1 - y_i(t)²)/2 and let h denote the operator (1 + τ d/dt)^{-1}. From equations (12) to (19), the derivatives of the error function E(t) with respect to the weights to the hidden units are calculated as follows:

∂E/∂w_ij = (∂E/∂y_{n+1})(∂y_{n+1}/∂x_{n+1})(∂x_{n+1}/∂u_{n+1})(∂u_{n+1}/∂w_ij)
         = (y_{n+1}(t) - d(t)) g'_{n+1}(t) h Σ_k w_{n+1,k} (∂y_k/∂w_ij),    (A1)

∂y_k/∂w_ij = (∂y_k/∂x_k)(∂x_k/∂u_k)(∂u_k/∂w_ij)
           = g'_k(t) h (∂/∂w_ij) Σ_l w_kl y_l(t)
           = g'_k(t) h (δ_ik y_j(t) + Σ_l w_kl (∂y_l/∂w_ij)).    (A2)

Now let

z_k = h (∂y_k/∂w_ij),    (A3)

z = (z_1, ..., z_n).    (A4)

Then from equation (A2),

h^{-1} z_k = g'_k(t) (δ_ik h y_j(t) + Σ_l w_kl z_l),    (A5)

h^{-1} z = G(t)Wz + G(t) h y_j(t) e_i,    (A6)

where the matrices are G(t) = diag(g'_k(t)) and W = {w_kl}, and e_i is the ith unit vector. Thus we have a nonhomogeneous linear differential equation

τ dz/dt = (G(t)W - E)z + G(t) h y_j(t) e_i,    (A7)

where E is the identity matrix. We can calculate the numerical solution z. Then

∂y_k/∂w_ij = (1 + τ d/dt) z_k(t).    (A8)

Substituting (A8) into (A1), we have an improved form of the partial derivative ∂E/∂w_ij instead of equation (23), which we used in the computer simulations. If we neglect the second term in (A2), or G(t)W in (A7), (A1) reduces to the approximated equation (23).
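A numerical sketch of this exact gradient (ours, not the paper's; it assumes the same Euler scheme and the g_prime and lag helpers from the earlier sketches) is given below. One sensitivity vector z is integrated per hidden weight w_ij, which is why the paper notes that this exact form was too costly to use in the simulations.

import numpy as np

def sensitivity_step(z, G, W_hid, hy_j, i, dt, tau=1.0):
    """One Euler step of eq. (A7): tau dz/dt = (G W - E) z + G h y_j(t) e_i.
    z: sensitivity vector for one hidden weight w_ij (length n),
    G: diag(g'_k(t)) over the hidden units, W_hid: hidden-to-hidden weights,
    hy_j: the lag-filtered presynaptic signal (1 + tau d/dt)^{-1} y_j(t),
    E: the n x n identity matrix."""
    n = z.shape[0]
    e_i = np.zeros(n)
    e_i[i - 1] = 1.0                         # hidden units are indexed 1..n
    dz = (G @ W_hid - np.eye(n)) @ z + hy_j * (G @ e_i)
    return z + dt / tau * dz

def exact_grad_wij(y, d_t, w, z, n):
    # (A1) combined with (A8): because h is linear, h applied to (1 + tau d/dt) z_k
    # gives back z_k (up to transients), so
    # dE/dw_ij = (y_{n+1}(t) - d(t)) g'(x_{n+1}) sum_k w_{n+1,k} z_k(t)
    return (y[n + 1] - d_t) * g_prime(y[n + 1]) * (w[n + 1, 1:n + 1] @ z)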

APPENDIX B

Here we show some examples of the connecting weights which were obtained by the computer simulations.

In the regenerating mode, if we put w_{i,n+1} = w_{i0}, equations (1) to (5) can be summarized as follows:

τ dx_i(t)/dt = -x_i(t) + Σ_{j=1}^{n+1} w_ij g(x_j(t))    (i = 1, ..., n + 1).

Thus we can represent all the connecting weights of the network in an (n + 1)-dimensional matrix {w_ij}. The bottom row contains the hidden-to-output connecting weights, the rightmost column the input-to-hidden connections, and the rest the hidden-to-hidden connections.

Figures 4(a), (e): n = 8, δ = 3.0, t_m = 0 (initial random connection). (9 x 9 weight matrix)

Figures 4(c), (d): n = 8, δ = 3.0, d(t) = g(2.0 sin 2πt/5.0 + 2.0 cos 2πt/2.5), η1 = η2 = 0.5, t_m = 1000. (9 x 9 weight matrix)

Figure 6(a): n = 2, T = 1.0, δ = 1.0, η1 = η2 = 1.0, t_m = 2500. (3 x 3 weight matrix)

Figure 6(c): n = 2, T = 5.0, δ = 1.0, η1 = η2 = 1.0, t_m = 1500. (3 x 3 weight matrix)

Figure 6(e): n = 2, T = 20.0, δ = 1.0, η1 = η2 = 1.0, t_m = 400.

     0   -3.0  -2.5
   0.6    0    -2.0
  -3.7  -2.1    0

Figure 7(a): n = 4, T = 1.0, δ = 1.0, η1 = η2 = 1.0, t_m = 2500. (5 x 5 weight matrix)

Figure 7(c): n = 4, T = 5.0, δ = 1.0, η1 = η2 = 1.0, t_m = 800.

     0    1.2  -0.7  -1.7   0.1
   2.3    0     0.6  -1.6   0.2
   2.8    2.5   0    -0.9  -0.6
   1.0    1.6   0.1   0    -0.8
  -2.0   -3.0  -3.9  -1.6   0

Figure 7(e): n = 4, T = 20.0, δ = 1.0, η1 = η2 = 1.0, t_m = 400.

     0    0.0   0.4  -0.2  -1.6
   1.1    0     0.9   0.3   1.5
   2.3   -0.1   0    -0.3   1.9
  -1.6    1.1  -0.5   0    -1.3
  -1.1    1.4   2.5  -2.3   0
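As a usage sketch (ours, reusing g from the dynamics sketch in Section 2), the closed-loop equation above can be integrated directly from one of these matrices; below, the Figure 7(c) matrix as reconstructed here is run in the regenerating mode. The initial state x(0) is not given in the paper, so a small random state is assumed, and whether a particular run settles on the memorized oscillation can depend on it (cf. Figures 8(b), (c)).

import numpy as np

w_fig7c = np.array([                # (n+1) x (n+1) matrix for Figure 7(c), n = 4
    [ 0.0,  1.2, -0.7, -1.7,  0.1],
    [ 2.3,  0.0,  0.6, -1.6,  0.2],
    [ 2.8,  2.5,  0.0, -0.9, -0.6],
    [ 1.0,  1.6,  0.1,  0.0, -0.8],
    [-2.0, -3.0, -3.9, -1.6,  0.0],
])

def regenerate(w, t_end=50.0, tau=1.0, dt=0.01, seed=0):
    """Closed-loop (regenerating-mode) dynamics tau dx_i/dt = -x_i + sum_j w_ij g(x_j)."""
    rng = np.random.default_rng(seed)
    x = 0.1 * rng.standard_normal(w.shape[0])    # assumed small random initial state
    trace = []
    for _ in range(int(t_end / dt)):
        x += dt / tau * (-x + w @ g(x))
        trace.append(g(x[-1]))                   # output of unit n+1
    return np.array(trace)

y_out = regenerate(w_fig7c)   # the paper reports a regenerated wave form near T = 5.0 for this case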