ThC15.5
I. INTRODUCTION
3446
y(t) = f_1(Z(t-1)) + \cdots + f_n(Z(t-n)) = \sum_{i=1}^{n} f_i(Z(t-i)),   (2)
\eta_1(t+1) = \eta_2(t) - f_2(Z(t))
\vdots
\eta_{n-2}(t+1) = \eta_{n-1}(t) - f_{n-1}(Z(t))
\eta_{n-1}(t+1) = v(t) - f_n(Z(t))   (6)

\eta_1(t+1) = \eta_2(t) - C_2 \phi_2(W_2 Z(t))
\vdots
\eta_{n-2}(t+1) = \eta_{n-1}(t) - C_{n-1} \phi_{n-1}(W_{n-1} Z(t))   (7)

\eta_{n-1}(t+1) = v(t) - C_n \phi_n(W_n Z(t))   (8)

Fig. 1. Structure of the neural network representing the MIMO ANARX model (1st sub-layer through n-th sub-layer).
y(t) = \sum_{i=1}^{n} C_i \phi_i(W_i Z(t-i)),   (3)

where the control law

u(t) = F(y(t), \eta_1(t))   (4)

is defined by solving

f_1(Z(t)) = \eta_1(t)   (5)

for u(t).
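The additive structure of the NN-ANARX output in (3) can be sketched numerically as follows. The dimensions (m = 2 outputs, r = 2 inputs, l_1 = 2 and l_2 = 5 neurons), the tanh activation, and the random weights are illustrative assumptions, not the identified model.

```python
import numpy as np

def nn_anarx_output(C, W, Z_hist, phi=np.tanh):
    """y(t) = sum_i C_i phi_i(W_i Z(t-i)) for an n-th order NN-ANARX model.
    C[i]: m x l_i output weights, W[i]: l_i x (m+r) input weights,
    Z_hist[i]: the regressor Z(t-i-1) of length m+r."""
    return sum(Ci @ phi(Wi @ Zi) for Ci, Wi, Zi in zip(C, W, Z_hist))

# toy dimensions: n = 2 sub-layers, m = 2 outputs, r = 2 inputs
rng = np.random.default_rng(0)
W = [rng.normal(size=(2, 4)), rng.normal(size=(5, 4))]   # l_1 = 2, l_2 = 5
C = [rng.normal(size=(2, 2)), rng.normal(size=(2, 5))]
Z_hist = [rng.normal(size=4), rng.normal(size=4)]        # [Z(t-1), Z(t-2)]
y = nn_anarx_output(C, W, Z_hist)                        # y(t) in R^2
```

Each sub-layer contributes an independent additive term, which is what makes the state-space form (6)-(8) possible.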
desired accuracy.
The first sub-network of the neural network depicted in Fig. 1 can be simulated as a separate system with random inputs.
T = C_1 W_1.   (9)

y(t) = C_1 W_1 Z(t-1) + \sum_{i=2}^{n} C_i \phi_i(W_i Z(t-i)).   (11)
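Because the first sub-layer of the SANARX model is linear, its two weight matrices collapse into the single matrix T of (9), which is what the linear term of (11) uses. A minimal numerical check with illustrative random weights:

```python
import numpy as np

rng = np.random.default_rng(1)
m, r, l1 = 2, 2, 2                   # illustrative dimensions
C1 = rng.normal(size=(m, l1))        # output weights of the 1st (linear) sub-layer
W1 = rng.normal(size=(l1, m + r))    # input weights of the 1st sub-layer
T = C1 @ W1                          # eq (9): combined linear map

Z = rng.normal(size=m + r)           # regressor Z(t-1)
linear_part = T @ Z                  # the C1 W1 Z(t-1) term of (11)
```

Collapsing C_1 W_1 into T is also what later allows the linear sub-layer to be solved for the control input in closed form.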
y(t) = [y_1(t), \ldots, y_m(t)],   (12)

Z(t) = [y_1(t), \ldots, y_m(t), u_1(t), \ldots, u_r(t)],   (13)

W_i \in R^{l_i \times (m+r)},  C_i \in R^{m \times l_i},   (14)

where W_i and C_i are the input and output matrices of synaptic weights of the corresponding i-th sub-layer of the hidden layer of the network depicted in Fig. 1 representing the ANARX structure. Here l_i is the number of neurons in the i-th sub-layer.
y(t) = \eta_1(t-1) + \sum_{i=2}^{n} C_i \phi_i(W_i Z(t-i)),   (19)

\eta_1(t) = C_1 W_1 Z(t).   (20)

Z(t), Z(t-1), \ldots, Z(t-n_p+1) constitute the
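For a second-order SANARX model (n = 2), dynamic output feedback linearization reduces to solving the linear sub-layer (20) for u(t) and propagating a single controller state. The sketch below assumes the u-block of T = C_1 W_1 is invertible (a pseudo-inverse is used); the function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def sanarx_fl_step(T, C2, W2, y, v, eta1, m, phi=np.tanh):
    """One control step: solve Ty y(t) + Tu u(t) = eta1(t) for u(t),
    then update the controller state eta1(t+1) = v(t) - C2 phi(W2 Z(t))."""
    Ty, Tu = T[:, :m], T[:, m:]
    u = np.linalg.pinv(Tu) @ (eta1 - Ty @ y)   # invert the linear sub-layer
    Z = np.concatenate([y, u])                 # Z(t) = [y(t), u(t)]
    eta1_next = v - C2 @ phi(W2 @ Z)           # controller dynamics
    return u, eta1_next

# illustrative 2-output / 2-input step with random weights
rng = np.random.default_rng(2)
m, r, l2 = 2, 2, 5
T = rng.normal(size=(m, m + r))
C2, W2 = rng.normal(size=(m, l2)), rng.normal(size=(l2, m + r))
y, v, eta1 = rng.normal(size=m), rng.normal(size=m), rng.normal(size=m)
u, eta1_next = sanarx_fl_step(T, C2, W2, y, v, eta1, m)
```

By construction, the computed u(t) satisfies T Z(t) = \eta_1(t), i.e. the model equation (20) is enforced exactly at every step.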
where

[y_1^n(t), \ldots, y_m^n(t)] = C_1 W_1 Z(t-1),   (21)

[y_1^a(t), \ldots, y_m^a(t)] = \sum_{i=2}^{n} C_i \phi_i(W_i Z(t-i)),   (22)

[y_1(t), \ldots, y_m(t)] = [y_1^n(t), \ldots, y_m^n(t)] + [y_1^a(t), \ldots, y_m^a(t)].   (23)
[\eta_{1,1}(t), \ldots, \eta_{1,m}(t)] is

{y(t-i), u(t-i), y(t-i), i = 1, \ldots, n_p} is used as the

(25)

Fig.: block diagram with signals [u_1(t), \ldots, u_r(t)], [y_1(t), \ldots, y_m(t)], [\eta_{1,1}(t), \ldots, \eta_{1,m}(t)] and block HSA.
System (21) was simulated, and the obtained set of input-output data was used to train the MIMO NN-based SANARX structure (27) with two sub-layers, corresponding to the second order of model (11).
As shown in [4], the minimal order of an NN-SANARX model representing a nonlinear dynamic system is 2 (n = 2) because of the linearity of the first sub-layer. The first sub-layer consisted of 2 linear neurons (l_1 = 2) and the second sub-layer of 5 nonlinear neurons (l_2 = 5) with hyperbolic tangent sigmoid activation functions.
The Levenberg-Marquardt (LM) training algorithm was chosen for off-line training of the model, since the neural network representing the ANARX model is a restricted-connectivity network and the LM algorithm is much more efficient than other techniques when the network contains no more than a few hundred weights [18]. The training speed of the LM algorithm is also much higher, and a feedforward neural network trained with it can model the nonlinearity better [19]. The identified parameters of model (27) can be found in [4], where nonadaptive control of system (26) by dynamic output feedback linearization of the NN-SANARX model is demonstrated.
By using these parameters, the NN-SANARX model based
y_1(t+1) = (0.4 + \theta_1(t)) y_1(t) + (1 + \theta_2(t)) \frac{u_1(t)}{1 + u_1^2(t)} + 0.2 u_1^3(t) + 0.5 u_2(t),
y_2(t+1) = 0.2 y_2(t) + (1 + \theta_3(t)) \frac{u_2(t)}{1 + u_2^2(t)},   (29)

where

\theta_1(t) = 0.15 e^{-5 \cdot 10^{-4} t} for t < 1400, and \theta_1(t) = 0.2\cos(0.0027 t) - 0.2 for t \geq 1400,   (30)

\theta_2(t) = 6.5 \cdot 10^{-4} t for t < 610, and \theta_2(t) = 3.5 \cdot 10^{-4} t + 0.15 for t \geq 610,   (31)

\theta_3(t) = 2.5 \cdot 10^{-4} t.   (32)
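The time-varying system (29)-(32) can be simulated directly. Note that some minus signs and the assignment of the cubic and cross terms to the two output equations may have been lost in the source scan, so the sketch below follows one plausible reading; the sinusoidal probe inputs are an assumption, since the excitation signals are not specified here.

```python
import numpy as np

def theta(t):
    """Time-varying coefficients (30)-(32); piecewise forms as printed."""
    th1 = 0.15 * np.exp(-5e-4 * t) if t < 1400 else 0.2 * np.cos(0.0027 * t) - 0.2
    th2 = 6.5e-4 * t if t < 610 else 3.5e-4 * t + 0.15
    th3 = 2.5e-4 * t
    return th1, th2, th3

def step(y, u, t):
    """One step of system (29) for outputs y = [y1, y2], inputs u = [u1, u2]."""
    th1, th2, th3 = theta(t)
    y1 = ((0.4 + th1) * y[0] + (1 + th2) * u[0] / (1 + u[0] ** 2)
          + 0.2 * u[0] ** 3 + 0.5 * u[1])
    y2 = 0.2 * y[1] + (1 + th3) * u[1] / (1 + u[1] ** 2)
    return np.array([y1, y2])

y = np.zeros(2)
for t in range(2500):                            # 2500 time steps as in the plots
    u = np.array([np.sin(2 * np.pi * t / 250),   # probe inputs (assumed)
                  np.cos(2 * np.pi * t / 250)])
    y = step(y, u, t)
```

Both autoregressive coefficients stay below 1 in magnitude and the input nonlinearities are bounded, so the simulated trajectories remain bounded over the 2500-step horizon.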
Fig.: Change of coefficients (x-axis: Time Steps, 0-2500).

Fig.: Disturbance level (x-axis: Time Steps, 0-2500).

Fig.: Outputs (x-axis: Time Steps, 0-2500); legend: reference signal for the first output, reference signal for the second output, first output of the system, second output of the system.
VI. CONCLUSION