$\mathbf{z}_{L2} = \boldsymbol{\varphi}_{L2}(\mathbf{X}_2)$ - mapping part; (2a)
$\mathbf{X}_2 = \mathbf{g}_{L2}(\mathbf{z}_{L2})$ - demapping part; (2b)
$\mathbf{X}_2(t_{n+1}) = \boldsymbol{\psi}_2\big(\mathbf{X}_2(t_n),\, \mathbf{X}_2(t_{n+1}),\, \mathbf{y}_{L2}(t_n),\, \mathbf{y}_{L2}(t_{n+1}),\, \Delta t,\, \boldsymbol{\Lambda}\big)$, (2c)
where: $\boldsymbol{\varphi}_{L2}$ and $\mathbf{g}_{L2}$ are the mapping and demapping transformations linking the interface variables with the states, respectively; $\boldsymbol{\psi}_2$ is the integration function; $\boldsymbol{\Lambda}$ is the parameter vector describing the conditions for the prediction of the state vector (Section III).
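The structure of (2a)-(2c) can be sketched in code. The weights, activation function, and trapezoidal update below are illustrative assumptions standing in for the trained networks, not the paper's actual models:

```python
import numpy as np

# Hypothetical stand-ins for trained bottleneck-ANN weights; tanh is a
# typical hidden-layer activation. Shapes: n_z interface variables,
# n_x (< n_z) equivalent states.
rng = np.random.default_rng(0)
n_z, n_x = 10, 2
W_map = rng.standard_normal((n_x, n_z)) * 0.1    # mapping phi_L2
W_demap = rng.standard_normal((n_z, n_x)) * 0.1  # demapping g_L2

def phi_L2(z_L):
    """Mapping part (2a): interface variables -> equivalent states."""
    return np.tanh(W_map @ z_L)

def g_L2(x2):
    """Demapping part (2b): equivalent states -> interface variables."""
    return W_demap @ x2

def psi_2(x2_n, x2_np1, y_n, y_np1, dt, lam):
    """One pass of an implicit update in the spirit of (2c): the new
    state depends on both the old and the (iterated) new state."""
    f_n = -lam * x2_n + y_n
    f_np1 = -lam * x2_np1 + y_np1   # implicit part: uses the new state
    return x2_n + 0.5 * dt * (f_n + f_np1)  # trapezoidal rule

z = rng.standard_normal(n_z)
x = phi_L2(z)
assert x.shape == (n_x,)
assert g_L2(x).shape == (n_z,)
```

The bottleneck structure (n_x < n_z) is what allows the equivalent to compress the redundant interface variable set into a low-dimensional state.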
III. GENERALIZED ANN TRAINING SETS
The synthesis of the ANN-based equivalent depends on: 1) the selection of the interface variable set in the boundary nodes ($\mathbf{z}_L$), and 2) the available knowledge about the physical structure and parameter database for the retained subsystem. The initial training requires sufficiently long measurement records. When
the equivalent is later used on-line, we are interested in how
different the operating conditions are from those in the training
set. If the discrepancy is deemed substantial, then additional
training is needed. Criteria for assessing the similarity between the training and operating interface variables in the database include: 1) the mean square error between the interface variables at the input and output layers of the trained bottleneck ANN; 2) the minimal mean square deviation between the operating interface variables and the closest training pattern in the database.
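The two similarity criteria can be sketched as follows, assuming the trained bottleneck ANN is available as a reconstruction function and the training database as a pattern matrix (both hypothetical stand-ins):

```python
import numpy as np

def criterion_1(z_op, reconstruct):
    """MSE between interface variables at the input and output layers
    of the trained bottleneck ANN (reconstruct is a stand-in)."""
    z_hat = reconstruct(z_op)
    return np.mean((z_op - z_hat) ** 2)

def criterion_2(z_op, training_db):
    """Minimal MSE deviation between the operating interface variables
    and the closest training pattern in the database."""
    devs = np.mean((training_db - z_op) ** 2, axis=1)
    return devs.min()

# Toy usage: an identity "ANN" and a three-pattern database.
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
z = np.array([0.6, 0.4])
print(criterion_1(z, lambda v: v))  # 0.0 for a perfect reconstruction
print(criterion_2(z, db))           # 0.01, closest pattern is [0.5, 0.5]
```

Large values of either criterion would signal that the current operating point is poorly covered by the training set, triggering additional training.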
Even when only a minimal measurement set at the boundary nodes is available, the training of the bottleneck ANN can be performed with a complete (redundant) interface variable set, since the model of subsystem 1 can be used to reconstruct the non-measured variables (we assume that the measurement set is composed of active and reactive power flows in the interconnections, as in Section II):

$\mathbf{z}_L = [\mathbf{z}_{Lm}^T \;\; \mathbf{z}_{Lc}^T]^T = [\mathbf{P}_{Lm}^T \;\; \mathbf{Q}_{Lm}^T \;\; \mathbf{V}_{Lc}^T \;\; \boldsymbol{\theta}_{Lc}^T]^T$, (3)
where indices m and c signify measured and calculated values, respectively. The symbol $\mathbf{P}_{Lm}$ ($\mathbf{Q}_{Lm}$) denotes the vector of measured active (reactive) power flows, while $\mathbf{V}_{Lc}$ ($\boldsymbol{\theta}_{Lc}$) denotes the vector of calculated voltage magnitudes (angles) in the boundary nodes. This interface variable set induces the following structure of the mapping transformation function (the same holds for the demapping function $\mathbf{g}_{L2}$):
$\boldsymbol{\varphi}_{L2} = [\boldsymbol{\varphi}_{Lm2}^T \;\; \boldsymbol{\varphi}_{Lc2}^T]^T = [\boldsymbol{\varphi}_{LP2}^T \;\; \boldsymbol{\varphi}_{LQ2}^T \;\; \boldsymbol{\varphi}_{LV2}^T \;\; \boldsymbol{\varphi}_{L\theta 2}^T]^T$. (4)
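Assembling the redundant interface vector of (3) is a simple stacking operation. The dimensions below follow the application section (six measured flows, four calculated boundary quantities), while the numerical values are dummies:

```python
import numpy as np

# Dummy values; three tie-lines and two boundary nodes are assumed,
# matching the application section (dim z_Lm = 6, dim z_Lc = 4).
P_Lm = np.array([1.2, 0.8, 0.5])        # measured active power flows
Q_Lm = np.array([0.3, 0.2, 0.1])        # measured reactive power flows
V_Lc = np.array([1.01, 0.99])           # calculated voltage magnitudes
theta_Lc = np.array([0.05, -0.02])      # calculated voltage angles

z_Lm = np.concatenate([P_Lm, Q_Lm])     # measured subvector
z_Lc = np.concatenate([V_Lc, theta_Lc]) # calculated subvector
z_L = np.concatenate([z_Lm, z_Lc])      # full interface vector of (3)
print(z_L.shape)  # (10,)
```

The blockwise structure of (4) simply mirrors this stacking: one mapping block per physical quantity (P, Q, V, theta).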
The generalization of the training set also helps in improving the effectiveness of the bottleneck and recurrent ANNs, and the quality of the solution obtained by the ANN-based equivalent. With our particular selection of an implicit ODE integrator structure (2c), we want to improve its numerical stability [9]. An implicit integrator has advantages in systems with widely different time scales (stiff DAEs), such as the power system; because of that, we prefer the implicit ODE integrator in our equivalencing procedure. From the generalized equation (2c) we see that the integration transformation function $\boldsymbol{\psi}_2$ depends on both the state vector $\mathbf{X}_2$ and the parameter vector $\boldsymbol{\Lambda}$.

Step 1: Calculation of the initial prediction of the state vector $\mathbf{X}_2$ from the recurrent ANN (2c), with constraints on the demapping transformation function for the parameter vector $\boldsymbol{\Lambda}$ (5) at the output of the bottleneck ANN (2b):
$\boldsymbol{\Lambda}_{Lc}^{ANN} = [(\mathbf{V}_{Lc}^{ANN})^T \;\; (\boldsymbol{\theta}_{Lc}^{ANN})^T]^T = [\mathbf{g}_{LV2}(\mathbf{X}_2)^T \;\; \mathbf{g}_{L\theta 2}(\mathbf{X}_2)^T]^T = \mathbf{g}_{Lc2}(\mathbf{X}_2)$, (6)
Step 2: Calculation of other initial interface variables (speci-
fied as measured in the classically modeled subsys-
tem, index m in eq. (3)) on the output from demap-
ping part of bottleneck ANN (2b):
$\mathbf{z}_{Lm}^{ANN} = [(\mathbf{P}_{Lm}^{ANN})^T \;\; (\mathbf{Q}_{Lm}^{ANN})^T]^T = [\mathbf{g}_{LP2}(\mathbf{X}_2)^T \;\; \mathbf{g}_{LQ2}(\mathbf{X}_2)^T]^T = \mathbf{g}_{Lm2}(\mathbf{X}_2)$. (7)
Step 3: Calculation of the prediction of the state vector in the retained subsystem ($\mathbf{X}_1$) and of the subvector of calculated interface variables $\boldsymbol{\Lambda}_{Lc}^{cl} = [(\mathbf{V}_{Lc}^{cl})^T \;\; (\boldsymbol{\theta}_{Lc}^{cl})^T]^T$ for the interface variable subvector $\mathbf{z}_{Lm}^{ANN} = [(\mathbf{P}_{Lm}^{ANN})^T \;\; (\mathbf{Q}_{Lm}^{ANN})^T]^T$ from Step 2.
Step 4: Verification of the convergence criterion:

$\big\| \boldsymbol{\Lambda}_{Lc}^{cl} - \boldsymbol{\Lambda}_{Lc}^{ANN} \big\|_2 \le \varepsilon$. (8)

If criterion (8) is satisfied, the transient analysis proceeds to the next time prediction (Step 1). Otherwise, the calculation continues with Step 5.
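The convergence test (8) reduces to a Euclidean-norm comparison. A minimal sketch follows; the tolerance value is an assumption, since it is not specified here:

```python
import numpy as np

def converged(lam_cl, lam_ann, eps=1e-3):
    """Criterion (8): Euclidean distance between the classically
    calculated and ANN-predicted parameter vectors, against a
    tolerance eps (value assumed for illustration)."""
    return bool(np.linalg.norm(lam_cl - lam_ann) <= eps)

# Dummy parameter vectors (voltage magnitude and angle at one node).
lam_cl = np.array([1.0100, 0.0500])
lam_ann = np.array([1.0101, 0.0500])
print(converged(lam_cl, lam_ann))  # True: deviation is 1e-4
```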
[Figure: off-line (training) phase block diagram - minimal measurement set (training duration), classical (retained) subsystem, ANN-based (reduced) subsystem, other calculated interface variables.]
Step 5: Calculation of the corrected prediction of the state vector $\mathbf{X}_2'$ from the recurrent ANN (2c), with the constant parameter vector $\boldsymbol{\Lambda}_{Lc}^{cl} = [(\mathbf{V}_{Lc}^{cl})^T \;\; (\boldsymbol{\theta}_{Lc}^{cl})^T]^T$ calculated in Step 3.
Step 6: Calculation of the corrected parameter vector $\boldsymbol{\Lambda}' = \boldsymbol{\Lambda}_{Lc}'^{ANN}$ (5) at the output of the demapping part of the bottleneck ANN (2b), for the corrected prediction of the state vector $\mathbf{X}_2'$ from Step 5.
Step 7: Calculation of the other corrected interface variables ($\mathbf{z}_{Lm}'^{ANN}$); this step is the same as Step 2.
Fig. 2. Two-way interaction between the classical and ANN-based reduced subsystems in the transient analysis calculation (on-line (prediction) phase).
For the initial data segment, the interaction algorithm is simplified, as only the bottleneck ANN is used. Note that the algorithm can be divided into the initial prediction phase (Step 1-Step 4) and the correction phase (Steps 5, 6, 7, 3 and 4, or the closed loop in Fig. 2). While it is possible to train the recurrent ANN with the extended parameter vector $\boldsymbol{\Lambda} = \mathbf{z}_L = [\mathbf{z}_{Lm}^T \;\; \mathbf{z}_{Lc}^T]^T$, the resulting ANN would be significantly larger and much harder to train. If such an ANN were used, Step 2 (Step 7) would be removed from the algorithm, since a solution for all interface variables would be obtained directly from the recurrent ANN (Step 1 and Step 6).
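The Step 1-Step 7 loop for one time step can be sketched as follows. Every callable name here (recurrent_ann, demap_lc, demap_lm, classical_solve, converged) is a hypothetical stand-in for the trained ANN parts and the classical subsystem solver:

```python
def time_step(x2, recurrent_ann, demap_lc, demap_lm, classical_solve,
              converged, max_corrections=10):
    """One prediction-correction cycle of the two-way interaction."""
    lam = demap_lc(x2)                  # constraint for Step 1
    x2 = recurrent_ann(x2, lam)         # Step 1: initial prediction of X2
    lam_ann = demap_lc(x2)              # parameter vector, as in (6)
    z_lm_ann = demap_lm(x2)             # Step 2: other interface vars (7)
    x1 = None
    for _ in range(max_corrections):
        x1, lam_cl = classical_solve(z_lm_ann)  # Step 3: retained subsystem
        if converged(lam_cl, lam_ann):          # Step 4: criterion (8)
            return x1, x2                       # next time prediction
        x2 = recurrent_ann(x2, lam_cl)          # Step 5: corrected X2'
        lam_ann = demap_lc(x2)                  # Step 6: corrected Lambda'
        z_lm_ann = demap_lm(x2)                 # Step 7: same as Step 2
    return x1, x2  # best effort if the correction loop did not converge

# Toy usage with trivially consistent stand-in callables.
x1, x2 = time_step(
    x2=1.0,
    recurrent_ann=lambda x, lam: 0.9 * x + 0.1 * lam,
    demap_lc=lambda x: x,
    demap_lm=lambda x: x,
    classical_solve=lambda z: (z, z),
    converged=lambda a, b: abs(a - b) < 1e-6,
)
print(x1, x2)  # 1.0 1.0
```

The loop mirrors the closed loop of Fig. 2: the classical subsystem and the ANN-based equivalent exchange interface variables until criterion (8) is met.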
V. APPLICATION
The effectiveness of our two-way interactive transient analysis procedure will be evaluated on the example of the New England/New York power system of Fig. 3 [11]. The system comprises 68 buses and 19 generators, and the full model is characterized by a 144-dimensional state vector. The system divides naturally into two parts connected by three tie-lines; the measurement points for active and reactive power flows in the interconnections are marked in Fig. 3. The left part of the system, described by a 42-dimensional state vector, is retained, while the right part (shaded in Fig. 3), originally described by a 102-dimensional state vector, is equivalenced. The equivalent is sought for the case of a three-phase fault (starting at 0.5 s) on generator G15 in the retained part. For ANN training purposes, measurements of active and reactive power flows in the interconnections (simulated with ETMSP [12]) are used, corresponding to fault clearing times in the range 0.05-0.5 s. The ANNs were trained with the Levenberg-Marquardt back-propagation training algorithm [13]. The bottleneck ANN
was trained using the gray box methodology [9]. For this purpose, the right part of the system in Fig. 3 is equivalenced with the software package DYNRED [14]. After reduction, the reduced-order subsystem comprises only two equivalent generators, with generator speed and generator absolute angle as state variables (thus assuming the classical model of an equivalent synchronous generator), and the reduced overall system state vector is 46-dimensional. Since the parameters and waveforms of the two equivalent generators are very similar, only two states (two neurons in the middle, bottleneck layer) from one generator are used for the initialization (gray box) of the bottleneck ANN.
The size of the measured training subvector $\mathbf{z}_{Lm}$ is 6 (active and reactive power flows in interconnections 1-2, 1-27 and 9-8, Fig. 3), while the dimension of the calculated training subvector $\mathbf{z}_{Lc}$ is 4 (voltage magnitudes and angles in measurement nodes 1 and 9, Fig. 3).
Other basic data for ANN training are:
- Number of training measurement responses: 5.
- Length of training waveforms for bottleneck ANN: 0.5-9s.
- Duration of training waveforms for recurrent ANN: 1-9s.
- Dimension of training pattern for fault (post fault) bottle-
neck ANN: 129 (1375).
- Dimension of training pattern for recurrent ANN: 190.
- Number of neurons in fault (post fault) bottleneck ANN:
10; 10; 2; 10; 10 (10; 10; 2; 25; 10).
- Number of neurons in recurrent ANN: 14; 20; 20; 2.
Our integrated dynamic ANN-based model reduction and transient analysis software package is realized in the Matlab environment, with the help of the Power System Toolbox [11].
In Figs. 4-7 we show responses obtained by applying the generalized training set to the ANN equivalent. Figure 4 displays three traces of generator speed: the first obtained solely from the bottleneck ANN, the second by combining the bottleneck and recurrent ANNs, and the third from a classical equivalencing procedure (DYNRED, also used for the initialization).
[13] …, User's Guide, Version 4, The MathWorks, Inc., 2001.
[14] * * * Dynamic Reduction Program (DYNRED), User's Manual, Version 1.1, EPRI TR-102234, Project 2447-1, Final Report, Oct. 1993.
VIII. BIOGRAPHIES
Aleksandar M. Stanković (1960) obtained the Dipl. Ing. degree from the
University of Belgrade, Yugoslavia in 1982, the M.S. degree from the same
institution in 1986, and the Ph.D. degree from Massachusetts Institute of
Technology in 1993, all in electrical engineering. He has been with the De-
partment of Electrical and Computer Engineering at Northeastern University,
Boston since 1993, presently as a Professor. He serves as an Associate Editor
for IEEE Transactions on Power Systems and he served IEEE Transactions on
Control System Technology in the same capacity from 1997 to 2001.
Andrija T. Sarić (1962) received the B.Sc. degree from the University of
Belgrade, Yugoslavia, in 1988, and the M.Sc. and Ph.D. degrees in Electrical
Engineering in 1992 and 1997, respectively, from the same university. He is
an Associate Professor of Electrical Engineering at Čačak College of Engineering, University of Kragujevac, Yugoslavia. Presently, he is a Visiting
Professor with the Department of Electrical and Computer Engineering,
Northeastern University, Boston, USA. His main areas of interest are power
system control, analysis, planning and optimization, as well as application of
artificial intelligence methods in these areas.
[Figure: reactive power (pu) vs. time (sec), 0-9 s; traces: Original system, Gray box (bottleneck ANN), Gray box (recurrent ANN).]
[Figure: active power flow (pu) vs. time (sec), 0-9 s; traces: Original system, DYNRED equivalent, ANN equivalent.]