3. Training
Back propagation learning, more precisely described as steepest-descent supervised learning using back propagation of error, has been utilized to train the ANN [7]. Training data were generated by ILFA, since it meets most of the desirable properties of loss allocation except the requirement of a reasonable computation time. A typical 24-hour load variation in the system was considered for weekdays and weekends, and the generators were dispatched economically to minimize running cost. The system has a peak load of 2494 MW. Figs. 3 and 4 show the 24-hour real and reactive loads at various buses for a weekday. Fig. 5 shows the corresponding generation at the various generation buses.

Fig. 1 A feed-forward neural network (input layer i, one or more hidden layers j, output layer O).
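The steepest-descent backpropagation training described above can be sketched as follows for a single-hidden-layer feed-forward network like that of Fig. 1. The layer sizes match the paper's 54 inputs and the 29 hidden neurons chosen later in Section 5, but the 4-output shape, the training pattern, and the learning rate are illustrative placeholders, not the paper's actual data or final configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: 54 bus-load inputs and 29 hidden neurons as in the paper;
# 4 outputs (real/reactive loss share per contract) is our assumption here.
n_in, n_hid, n_out = 54, 29, 4

W1 = rng.uniform(-0.1, 0.1, (n_hid, n_in))   # input-to-hidden weights
b1 = np.zeros(n_hid)                          # hidden thresholds
W2 = rng.uniform(-0.1, 0.1, (n_out, n_hid))  # hidden-to-output weights
b2 = np.zeros(n_out)                          # output thresholds

def forward(x):
    h = np.tanh(W1 @ x + b1)   # hidden activations
    y = np.tanh(W2 @ h + b2)   # output activations (plain tanh here)
    return h, y

def train_step(x, t, eta=0.1):
    """One steepest-descent step on the squared error for pattern (x, t)."""
    global W1, b1, W2, b2
    h, y = forward(x)
    e = y - t
    d2 = e * (1.0 - y ** 2)              # output deltas; tanh' = 1 - tanh^2
    d1 = (W2.T @ d2) * (1.0 - h ** 2)    # back-propagated hidden deltas
    W2 -= eta * np.outer(d2, h); b2 -= eta * d2
    W1 -= eta * np.outer(d1, x); b1 -= eta * d1
    return 0.5 * np.sum(e ** 2)

# One placeholder training pattern (random p.u. loads, made-up loss shares)
x = rng.uniform(0.0, 1.0, n_in)
t = np.array([0.03, 0.02, 0.01, -0.005])
errs = [train_step(x, t) for _ in range(200)]
assert errs[-1] < errs[0]   # squared error falls on the training pattern
```

Training on the full 2600-pattern set would simply loop this step over all patterns in each epoch.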
Fig. 2 IEEE 24-Bus Reliability Test System.

Fig. 3 24-hour real loads at different buses for a weekday.

The IEEE 24-Bus Test System has been utilized for this work. It is assumed that two bilateral contracts, A and B, as shown in Fig. 2, exist in this system. Contract A exists between Generator A at Bus 7 and Load A at Bus 9, and Contract B between Generator B and Load B. The loss due to each bilateral contract was calculated by ILFA. 2600 training patterns were generated by varying all 54 inputs.
Fig. 4 24-hour reactive loads at different buses for a weekday.

[Fig. 5: generation (p.u.) at the various generation buses over the 24 hours.]

b) Dual activation functions: The loss attributed to a transaction can be positive or negative (negative in the case of counter-flow). This aspect can be handled by the use of a hyperbolic tangent function of the form Y = a·tanh(b·X). It was also observed that the reactive part of the transmission loss is 3/5 times that of the real part. Therefore, two activation functions were used in the output layer for the two different types of output. Figure 6 shows the range of the outputs and the activation functions used.

Fig. 6 Activation functions (Y = a·tanh(b·X)) and output ranges (the ranges of real and reactive loss lie within ±a).
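The scaled output activation Y = a·tanh(b·X) of Fig. 6 can be written out directly. The amplitude and slope constants below are the values reported in Section 5; the function and helper names are ours.

```python
import numpy as np

# Output activation Y = a*tanh(b*X) from Fig. 6.  'a' bounds the output
# range (allowing negative loss shares for counter-flow); 'b' sets the
# slope.  Amplitudes and slope are the values reported in Section 5.
A_REAL, A_REACTIVE, B = 0.5115, 0.1116, 0.667

def act(x, a, b=B):
    return a * np.tanh(b * x)

def act_deriv(x, a, b=B):
    # dY/dX = a*b*(1 - tanh(b*x)^2), as needed during backpropagation
    return a * b * (1.0 - np.tanh(b * x) ** 2)

# The real-loss output saturates toward +/-0.5115 and the reactive-loss
# output toward +/-0.1116, matching the output ranges sketched in Fig. 6.
assert -A_REAL < act(-10.0, A_REAL) < 0.0 < act(10.0, A_REAL) < A_REAL
assert abs(act(10.0, A_REACTIVE)) < A_REACTIVE
assert abs(act_deriv(0.0, A_REAL) - A_REAL * B) < 1e-12
```

Using two such functions in the output layer lets each output unit cover the range its target quantity actually occupies, rather than forcing both onto a common scale.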
Fig. 8 Convergence characteristics with two hidden layers (mean square error vs. iterations, for single and dual activation functions).

5. Results

Based on the convergence characteristics, an ANN with a single hidden layer of 29 neurons was chosen for loss allocation. Two activation functions, one for real power and the other for reactive power, were chosen with amplitudes of 0.5115 and 0.1116, respectively. 'b' was chosen as 0.667 for both activation functions. A value of 3.58 for γ was chosen for the adaptive learning. A value of 0.48 for α was chosen for weight adaptation, and a value of 0.27 for the same constant was chosen for threshold adaptation. The test patterns were derived by varying the 54 inputs to simulate 24-hour load patterns for weekdays and weekends. Results obtained from the trained ANN and from ILFA show that the ANN can allocate losses to bilateral contracts with very good accuracy. The real and reactive shares of losses for peak and off-peak hours obtained by both methods are shown in Figs. 11-14.
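The weight and threshold adaptation mentioned above uses a momentum-style constant α. Since the exact adaptive learning-rate rule (and the precise role of γ) is not reproduced in this excerpt, the following is only a generic momentum-update sketch using the reported constants, demonstrated on a toy quadratic objective rather than the paper's network.

```python
import numpy as np

# Reported constants (Section 5): momentum alpha = 0.48 for weight
# adaptation and 0.27 for threshold adaptation.  The adaptive
# learning-rate rule involving gamma = 3.58 is not detailed in this
# excerpt, so eta is a fixed illustrative rate here.
ALPHA_W, ALPHA_B, ETA = 0.48, 0.27, 0.05

def momentum_step(param, grad, prev_delta, alpha, eta=ETA):
    """delta(t) = -eta * grad + alpha * delta(t-1); returns the updated
    parameter and the delta to carry into the next step."""
    delta = -eta * grad + alpha * prev_delta
    return param + delta, delta

# Toy check: minimise E(w) = 0.5*||w||^2 (gradient = w) with the
# weight-adaptation momentum constant.
w = np.array([1.0, -2.0])
dw = np.zeros_like(w)
for _ in range(100):
    w, dw = momentum_step(w, w, dw, ALPHA_W)
assert np.linalg.norm(w) < 1e-3   # heads toward the minimum at w = 0
```

In the paper's setting the same update would be applied to every weight matrix with α = 0.48 and to every threshold vector with α = 0.27.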
Fig. 9 Convergence characteristics (mean square error vs. iterations) for different values of "a".

Fig. 11 Real loss allocation for an off-peak hour (NN and ILFA loss shares for contracts A and B vs. real load in p.u.).
6. Conclusions

A new transmission loss allocation method based on an artificial neural network that can allocate losses to bilateral contracts with good accuracy has been presented. The proposed ANN can be trained with little difficulty, and the trained ANN can provide a solution quickly. The proposed ANN can yield negative loss allocations to reward generators or loads that cause counter-flow in the network. Although ILFA was utilized to generate the training data, any other method of loss allocation can be utilized for that purpose. The use of two activation functions with threshold adaptation improved the convergence characteristics of the ANN.

[Figure: loss allocation (p.u.) vs. reactive load (p.u.); NN and ILFA loss shares for contracts A and B.]
5. Y. Tsukamoto and I. Iyoda, "Allocation of Fixed Transmission Cost to Wheeling Transactions by Cooperative Game Theory," IEEE Transactions on Power Systems.