
Electric Power Systems Research 74 (2005) 417–425

Forecasting regional electricity load based on recurrent support vector machines with genetic algorithms

Ping-Feng Pai a,∗, Wei-Chiang Hong b

a Department of Information Management, National Chi Nan University, 1, University Rd., Puli, Nantou, Taiwan 545, ROC
b School of Management, Da-Yeh University, 112 Shan-Jiau Road, Da-Tsuen, Changhua, Taiwan 51505, ROC

Received 10 October 2004; received in revised form 14 January 2005; accepted 15 January 2005
Available online 8 April 2005

Abstract
With the deregulation of the electricity industry, accurate forecasting of future electricity demand plays a critical role in regional and national power system strategy management. Electricity load forecasting is complex to conduct because the factors that influence load are nonlinear. Support vector machines (SVMs) have been successfully employed to solve nonlinear regression and time series problems, but their application to load forecasting remains rare. In this study, a recurrent support vector machine model with genetic algorithms (RSVMG) is proposed to forecast electricity load; genetic algorithms (GAs) are used to determine the free parameters of the support vector machines. Electricity load data from Taiwan are then used to illustrate the performance of the proposed RSVMG model. The empirical results reveal that the proposed model outperforms the SVM model, an artificial neural network (ANN) model and a regression model. Consequently, the RSVMG model provides a promising alternative for forecasting electricity load in the power industry.
© 2005 Elsevier B.V. All rights reserved.
Keywords: Recurrent neural networks (RNNs); Support vector machines (SVMs); Recurrent support vector machines (RSVM); Genetic algorithms (GAs);
Electricity load forecasting

1. Introduction
With the introduction of deregulation into the electricity industry, accurate forecasting of future electricity demand has played a central role in distribution system investment, electricity load planning and management strategies in regional and national systems. Inaccurate load forecasting increases operating costs [1,2]. Bunn and Farmer [1] pointed out that a 1% increase in forecasting error implied a 10 million increase in operating costs. Overestimation of future load results in unnecessary spinning reserve, and the excess supply is also unwelcome to international energy networks. On the contrary, underestimation of future load causes a failure to provide sufficient reserve and implies high costs per peaking unit.

∗ Corresponding author. Tel.: +886 4 85 11 890; fax: +886 4 92 91 5205.
E-mail address: paipf@yahoo.com.tw (P.-F. Pai).
doi:10.1016/j.epsr.2005.01.006

It is necessary for international electricity production cooperation that every member is able to forecast its demand accurately. Load forecasting approaches are generally classified into time series models [3–5], state space and Kalman filtering technology [5], regression models [1,5,6], artificial intelligence techniques [7,8] and fuzzy logic methods [9]. The time series model, known as the Box–Jenkins ARIMA model, uses historical load data to infer future electricity load. Time series approaches are convenient for modeling, especially when only electricity load data are available. On the other hand, the disadvantage of time series models is that they ignore other factors that influence electricity load. State space and Kalman filtering technology treats the periodic component of load as a random process and uses 3–10 historical data points to establish the periodic load variation for estimating the dependent variables (load or temperature) of the power system. The regression model establishes cause–effect relationships between electricity load and independent variables such as climate factors, social activities and seasonal factors. Knowledge-based expert systems (KBES) and artificial neural networks (ANNs)


are the popular representatives of artificial intelligence techniques for load forecasting in the recent decade. The KBES model forms new rules based on received information, including daily temperature, day types and the load of the previous day. Artificial intelligence techniques for load forecasting are superior to traditional forecasting approaches. However, the training procedure of an artificial intelligence model is time consuming, so some approaches have been proposed to accelerate convergence [8]. The fuzzy logic model is useful for forecasting electricity load particularly when the historical data are represented by linguistic terms.
Support vector machines (SVMs) are based on the principle of structural risk minimization (SRM) rather than the principle of empirical risk minimization, which is employed by most traditional neural network models. With the introduction of Vapnik's ε-insensitive loss function [10], SVMs have been extended to solve nonlinear regression estimation problems in financial time series forecasting, air quality prediction, production value forecasting for the machinery industry, engine reliability prediction, etc. Recurrent neural networks (RNNs) are based on the concept that every unit is considered an output of the network and that adjusted information is provided as input in the training process. RNNs are extensively applied in long-term load time series forecasting [11] and can be classified into three types: Jordan networks [12], Elman networks [13], and Williams and Zipser networks [14]. Both Jordan and Elman networks mainly use past information to capture detailed information. Williams and Zipser networks feed much more information from the hidden layer back into themselves; therefore, Williams and Zipser networks are sensitive when models are implemented (Tsoi and Back [15]), whereas Jordan and Elman networks are suited to time series forecasting (Jhee and Lee [16]). In this investigation, the Jordan network is used as the basis for the proposed RSVMG model. Traditionally, RNNs are trained by back-propagation algorithms. In this work, SVMs with genetic algorithms are used to determine the weights between nodes. Finally, the proposed RSVMG model is applied to forecast electricity load. A numerical example from the literature [7] is employed to demonstrate the forecasting accuracy of the proposed model.

2. Recurrent support vector machines with genetic algorithms

2.1. Support vector machines with genetic algorithms

The basic concept of SVM regression is to map the original data x nonlinearly into a higher-dimensional feature space. Hence, given a set of data G = {(x_i, a_i)}, i = 1, ..., N (where x_i is the input vector, a_i the actual value and N the total number of data patterns), the SVM regression function is:

$$f = g(x) = w_i \phi_i(x) + b \qquad (1)$$

where φ_i(x) is the feature of the inputs, and w_i and b are coefficients. The coefficients (w_i and b) are estimated by minimizing the following regularized risk function:

$$r(C) = C\,\frac{1}{N}\sum_{i=1}^{N}\Theta_\varepsilon(a_i, f_i) + \frac{1}{2}\lVert w \rVert^2 \qquad (2)$$

where

$$\Theta_\varepsilon(a, f) = \begin{cases} 0, & \text{if } \lvert a - f \rvert \le \varepsilon \\ \lvert a - f \rvert - \varepsilon, & \text{otherwise} \end{cases} \qquad (3)$$

and C and ε are prescribed parameters. In Eq. (2), Θ_ε(a, f) is called the ε-insensitive loss function. The loss equals zero if the forecasted value is within the ε-tube (Eq. (3) and Fig. 1). The second term, (1/2)||w||², measures the flatness of the function. Therefore, C is considered to specify the trade-off between the empirical risk and the model flatness. Both C and ε are user-determined parameters. Two positive slack variables, ξ and ξ*, which represent the distance from the actual values to the corresponding boundary values of the ε-tube (Fig. 1), are introduced. Then, Eq. (2) is transformed into the following constrained form:

$$\text{Minimize } r(w, \xi, \xi^*) = \frac{1}{2}\lVert w \rVert^2 + C\sum_{i=1}^{N}(\xi_i + \xi_i^*) \qquad (4)$$

with the constraints,

$$w\,\phi(x_i) + b - a_i \le \varepsilon + \xi_i^*, \quad i = 1, 2, \ldots, N$$
$$a_i - w\,\phi(x_i) - b \le \varepsilon + \xi_i, \quad i = 1, 2, \ldots, N$$
$$\xi_i,\ \xi_i^* \ge 0, \quad i = 1, 2, \ldots, N$$

Fig. 1. Parameters used in support vector regression [17].

This constrained optimization problem is solved using the following primal Lagrangian form:

$$L(w, b, \xi, \xi^*, \alpha_i, \alpha_i^*, \beta_i, \beta_i^*) = \frac{1}{2}\lVert w \rVert^2 + C\sum_{i=1}^{N}(\xi_i + \xi_i^*) - \sum_{i=1}^{N}\alpha_i\left[w\,\phi(x_i) + b - a_i + \varepsilon + \xi_i\right] - \sum_{i=1}^{N}\alpha_i^*\left[a_i - w\,\phi(x_i) - b + \varepsilon + \xi_i^*\right] - \sum_{i=1}^{N}(\beta_i\xi_i + \beta_i^*\xi_i^*) \qquad (5)$$

Eq. (5) is minimized with respect to the primal variables w, b, ξ and ξ*, and maximized with respect to the nonnegative Lagrangian multipliers α_i, α_i*, β_i and β_i*. Finally, the Karush–Kuhn–Tucker conditions are applied to the regression, and Eq. (4) thus yields the dual Lagrangian,


(i , i ) =

N

i=1

ai (i i )
N

N

i=1

(i + i )

1 
(i i )(j j )K(xi , xj )
2

(6)

by passing it from generation to generation. Fig. 2 presents


the framework of the proposed SVMG model. GAs are used
to yield a smaller MAPE by searching for better combinations
of three parameters in SVMs. Fig. 3 depicts the operation of
a GAs, which is described below.

i=1 j=1

subject to the constraints,


N

i=1

(i i ) = 0

0 i C, i = 1, 2, . . . , N
0 i C, i = 1, 2, . . . , N
The Lagrange multipliers in Eq. (6) satisfy the equality
i i = 0. The Lagrange multipliers and i , are calculated
and an optimal desired weight vector of the regression hyperplane is,
w =

N

i=1

(i i )K(x, xi )

(7)

Hence, the regression function is Eq. (8).


g(x, , ) =

N

i=1

(i i )K(x, xi ) + b

(8)

Here, K(xi , xj ) is called the Kernel function. The value of the


Kernel equals the inner product of two vectors, xi and xj , in the
feature space (xi ) and (xj ); that is K(xi , xj ) = (xi )(xj ).
Any function that meets Mercers condition [10] can be used
as the
function.
In this work, the Gaussian function,
 Kernel

2 
||x
x
||
i
j
is used in the SVMs.
exp 21

The selection of three parameters, , and C, of a SVM


model is important to the accuracy of forecasting. However,
structural methods for confirming efficiently the selection of
parameters efficiently are lacking. Therefore, GAs are used
in the proposed SVM model to optimize parameter selection. Holland first proposed genetic algorithms [18]. Such
algorithms are based on the survival principle of the fittest
member in a population, which retains genetic information

Fig. 3. The procedure of genetic algorithms.
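To make the formulation of Eqs. (1)–(8) concrete, the following sketch fits an ε-insensitive support vector regression with a Gaussian kernel on illustrative synthetic data. It is not the authors' code; it assumes scikit-learn, whose RBF kernel is parameterized as exp(−γ||x_i − x_j||²), so setting γ = 1/(2σ²) recovers the Gaussian kernel used in the paper.

```python
import numpy as np
from sklearn.svm import SVR

# Illustrative data standing in for (x_i, a_i) pairs.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(40, 1))                     # input vectors x_i
a = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)    # actual values a_i

sigma, C, eps = 1.0, 10.0, 0.1                           # the three free parameters
svr = SVR(kernel="rbf", gamma=1.0 / (2 * sigma**2), C=C, epsilon=eps)
svr.fit(X, a)          # internally solves the dual problem, Eq. (6)
f = svr.predict(X)     # evaluates the regression function g(x), Eq. (8)
```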


Step 1 (Initialization). Randomly generate an initial population of chromosomes. The three free parameters, σ, C and ε, are encoded in a binary format and represented by a chromosome.

Step 2 (Evaluating fitness). In this study, the negative mean absolute percentage error (MAPE) is used as the fitness function. The MAPE is defined as follows:

$$\mathrm{MAPE} = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{a_i - f_i}{a_i}\right| \times 100\% \qquad (9)$$

where a_i and f_i represent the actual and forecast values, and N is the number of forecasting periods.
Step 3 (Selection). Based on the fitness values, chromosomes with higher fitness are more likely to yield offspring in the next generation. The roulette wheel selection principle (Holland [18]) is applied to choose chromosomes for reproduction.

Step 4 (Crossover and mutation). Mutations are performed randomly by converting a 1 bit into a 0 bit or a 0 bit into a 1 bit. The single-point-crossover principle is employed: segments of paired chromosomes between determined break-points are swapped. The rates of crossover and mutation are probabilistically determined; in this study, the probabilities of crossover and mutation are set to 0.5 and 0.1, respectively.

Step 5 (Next generation). Form a population for the next generation.

Step 6 (Stop conditions). If the number of generations equals a given scale, then the best chromosomes are presented as a solution; otherwise, go back to Step 2.
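As a concrete illustration of Steps 1–6, the sketch below runs a minimal GA over binary-encoded (σ, C, ε) chromosomes with roulette wheel selection, single-point crossover (rate 0.5), bit-flip mutation (rate 0.1) and negative MAPE as fitness. It is a simplified reading of the procedure, not the authors' implementation; the encoding ranges and the `evaluate_mape` stub are assumptions.

```python
import random

BITS = 10                                             # bits per parameter
RANGES = [(0.1, 10.0), (1e9, 3e10), (1.0, 500.0)]     # assumed ranges for sigma, C, eps

def decode(chrom):
    """Map each 10-bit segment of the chromosome onto its parameter range."""
    params = []
    for k, (lo, hi) in enumerate(RANGES):
        seg = chrom[k * BITS:(k + 1) * BITS]
        frac = int("".join(map(str, seg)), 2) / (2**BITS - 1)
        params.append(lo + frac * (hi - lo))
    return params                                     # [sigma, C, eps]

def evaluate_mape(sigma, C, eps):
    """Stub: train the SVM with these parameters, return validation MAPE (%)."""
    return abs(sigma - 2.0) + abs(eps - 100.0) / 50.0  # placeholder error surface

def fitness(chrom):
    return -evaluate_mape(*decode(chrom))             # Step 2: negative MAPE

def roulette(pop, fits):
    weights = [f - min(fits) + 1e-9 for f in fits]    # shift weights positive
    return random.choices(pop, weights=weights, k=len(pop))

pop = [[random.randint(0, 1) for _ in range(3 * BITS)] for _ in range(20)]  # Step 1
for gen in range(50):                                 # Step 6: fixed generation count
    fits = [fitness(c) for c in pop]
    parents = roulette(pop, fits)                     # Step 3
    nxt = []
    for p1, p2 in zip(parents[::2], parents[1::2]):   # Step 4
        if random.random() < 0.5:                     # crossover rate 0.5
            cut = random.randrange(1, 3 * BITS)
            p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        nxt += [[b ^ 1 if random.random() < 0.1 else b for b in c] for c in (p1, p2)]
    pop = nxt                                         # Step 5
print("sigma, C, eps =", decode(max(pop, key=fitness)))
```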

2.2. Recurrent support vector machines with genetic algorithms

In this work, the Jordan network is specified as the recurrent neural network framework. All neurons in a layer, except those in the context layer, are connected with all neurons in the next layer. A context layer is a special hidden layer; interactions only occur between neurons in the hidden layer and those in the context layer. Fig. 4 shows the architecture of a Jordan network. For a Jordan network with p inputs, q hidden and r output neurons, the output of the nth neuron, f_n(t), is [19]:

$$f_n(t) = \sum_{i=1}^{q} W_i \Phi_i(t) + b_i(t) \qquad (10)$$

where W_i is the weight between the hidden and output layers, and Φ_i(t) is the output function of the hidden neurons, which is:

$$\Phi_i(t) = g\left(\sum_{j=1}^{p} v_{ij} x_j(t) + \sum_{k=1}^{s}\sum_{v=1}^{r} w_{ikv} f_v(t-k) + b_i(t)\right) \qquad (11)$$

where v_ij are the weights between the input and the hidden layer, w_ikv are the weights between the context and the hidden layer with k delay periods, and s is the total number of context layers of past output data. In the proposed RSVMG model, there is only one context layer (i.e., s = 1) because there is only one output neuron (i.e., r = 1).

Fig. 4. The architecture of Jordan networks [12].

Back-propagation yields the gradients for adapting the weights of a neural network. The back-propagation algorithm is presented as follows. First, the output of the nth neuron in Eq. (11) is rewritten as:

$$f_n(t) = h(x^T(t)\,\theta(t)) \qquad (12)$$

where h(·) is the nonlinearity function of x^T(t) and f_n(t); x^T(t) = [x_1(t), ..., x_P(t)]^T is the input vector and θ(t) = [θ_1(t), ..., θ_P(t)]^T is the weight vector. A cost function is then presented as the instantaneous performance index,

$$J(\theta(t)) = \frac{1}{2}\left[d(t) - f_n(t)\right]^2 = \frac{1}{2}\left[d(t) - h(x^T(t)\,\theta(t))\right]^2 \qquad (13)$$

where d(t) = [d_1(t), ..., d_P(t)]^T is the desired output.

Second, the instantaneous output error at the output neuron and the revised weight vector at the next moment are given by Eqs. (14) and (15), respectively:

$$e(t) = d(t) - f_n(t) = d(t) - h(x^T(t)\,\theta(t)) \qquad (14)$$

$$\theta(t+1) = \theta(t) - \eta\,\nabla J(\theta(t)) \qquad (15)$$

where η is the learning rate. Third, the gradient ∇J(θ(t)) can be calculated as:

$$\nabla J(\theta(t)) = \frac{\partial J(\theta(t))}{\partial \theta(t)} = e(t)\,\frac{\partial e(t)}{\partial \theta(t)} = -e(t)\,h'(x^T(t)\,\theta(t))\,x(t) \qquad (16)$$

where h'(·) is the first derivative of the nonlinearity h(·). Finally, the weight is revised as:

$$\theta(t+1) = \theta(t) + \eta\,e(t)\,h'(x^T(t)\,\theta(t))\,x(t) \qquad (17)$$
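The update rule of Eqs. (12)–(17) can be stated compactly in code. The sketch below applies the single-neuron gradient-descent step with a tanh nonlinearity on toy data; it is a didactic illustration only, since in the proposed RSVMG model the SVM with GAs, not back-propagation, determines the weights.

```python
import numpy as np

def backprop_step(theta, x, d, eta=0.05):
    """One weight update for a single neuron, Eqs. (12)-(17), with h = tanh."""
    z = x @ theta                          # x^T(t) theta(t)
    f = np.tanh(z)                         # f_n(t) = h(x^T theta), Eq. (12)
    e = d - f                              # instantaneous error, Eq. (14)
    h_prime = 1.0 - np.tanh(z) ** 2        # h'(x^T theta)
    return theta + eta * e * h_prime * x   # revised weights, Eq. (17)

# Toy usage: learn to map a 3-dimensional input to a scalar target.
rng = np.random.default_rng(1)
theta = rng.standard_normal(3)
for _ in range(200):
    x = rng.standard_normal(3)
    d = np.tanh(x @ np.array([0.5, -1.0, 2.0]))   # synthetic desired output
    theta = backprop_step(theta, x, d)
print(theta)   # approaches the generating weights [0.5, -1.0, 2.0]
```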

Table 1
Taiwan regional electricity load (from 1981 to 2000) and forecasting results of RSVMG, SVMG, ANN and regression models (unit: 10^6 Wh)

Northern region
Year    Actual   RSVMG    SVMG     ANN      Regression
1981    3,388    3,288    2,988    3,424    3,430
1982    3,523    3,623    3,392    3,491    3,494
1983    3,752    3,852    3,645    3,926    3,933
1984    4,296    4,079    3,896    4,263    4,277
1985    4,250    4,427    4,258    4,398    4,395
1986    5,013    4,962    4,754    4,993    4,986
1987    5,745    5,645    5,345    5,607    5,594
1988    6,320    6,348    5,993    6,287    6,238
1989    6,844    6,944    6,648    6,769    6,753
1990    7,613    7,397    7,213    7,311    7,292
1991    7,551    7,788    7,610    7,788    7,736
1992    8,352    8,252    7,952    8,318    8,345
1993    8,781    8,853    8,531    8,958    8,917
1994    9,400    9,500    9,467    9,470    9,419
1995    10,254   9,956    10,334   10,091   10,073
1996    10,719   10,956   10,319   10,838   10,921
1997    11,222   11,252   11,213   10,991   11,262
1998    11,642   11,644   11,747   11,643   12,162
1999    11,981   12,219   12,173   11,804   12,395
2000    12,924   12,826   12,543   12,834   13,122
MAPE (%)         0.7498   1.3981   1.0600   2.4500

Central region
Year    Actual   RSVMG    SVMG     ANN      Regression
1981    1,663    1,615    1,713    1,833    1,867
1982    1,829    1,839    1,872    1,864    1,893
1983    2,157    2,066    2,034    2,079    2,098
1984    2,219    2,295    2,207    2,257    2,256
1985    2,190    2,525    2,398    2,323    2,289
1986    2,638    2,755    2,613    2,602    2,564
1987    2,812    2,986    2,858    2,868    2,858
1988    3,265    3,214    3,130    3,143    3,145
1989    3,376    3,441    3,426    3,369    3,424
1990    3,655    3,665    3,734    3,593    3,685
1991    4,043    3,885    4,040    3,864    3,804
1992    4,425    4,101    4,324    4,134    4,150
1993    4,594    4,311    4,568    4,364    4,355
1994    4,771    4,515    4,752    4,614    4,532
1995    4,483    4,712    4,862    4,894    4,831
1996    4,935    4,700    4,885    5,197    5,307
1997    5,061    5,065    5,060    5,112    5,361
1998    5,246    5,231    5,203    5,301    5,711
1999    5,233    5,385    5,230    5,350    5,780
2000    5,633    5,522    5,297    5,572    6,131
MAPE (%)         1.3026   1.8146   1.7300   8.52

Southern region
Year    Actual   RSVMG    SVMG     ANN      Regression
1981    2,272    2,172    2,192    2,235    2,227
1982    2,346    2,383    2,399    2,269    2,263
1983    2,494    2,542    2,565    2,494    2,488
1984    2,686    2,685    2,718    2,697    2,697
1985    2,829    2,853    2,886    2,786    2,796
1986    3,172    3,072    3,092    3,113    3,126
1987    3,351    3,341    3,340    3,405    3,409
1988    3,655    3,636    3,617    3,705    3,701
1989    3,823    3,923    3,903    3,989    3,979
1990    4,256    4,187    4,185    4,279    4,267
1991    4,548    4,448    4,468    4,550    4,551
1992    4,803    4,747    4,772    4,894    4,887
1993    5,192    5,100    5,112    5,132    5,120
1994    5,352    5,452    5,467    5,419    5,418
1995    5,797    5,670    5,769    5,794    5,805
1996    6,369    6,279    5,916    6,206    6,208
1997    6,336    6,200    6,265    6,305    6,493
1998    6,318    6,156    6,389    6,476    6,868
1999    6,259    6,261    6,346    6,537    7,013
2000    6,804    6,661    6,513    6,672    7,481
MAPE (%)         1.7530   2.0243   2.4800   8.2900

Eastern region
Year    Actual   RSVMG    SVMG     ANN      Regression
1981    122      109      110      124      124
1982    127      125      126      127      126
1983    148      141      142      143      141
1984    142      157      158      153      153
1985    143      173      174      157      156
1986    176      189      191      175      175
1987    206      206      207      194      195
1988    227      222      224      216      216
1989    236      238      240      232      234
1990    243      255      257      248      251
1991    264      271      274      259      265
1992    292      287      291      284      288
1993    307      303      307      307      305
1994    325      319      324      325      321
1995    343      335      341      346      343
1996    363      336      357      371      373
1997    358      367      358      378      380
1998    397      381      373      403      407
1999    401      401      397      410      413
2000    420      416      408      435      440
MAPE (%)         1.8955   2.6475   3.6200   4.1000

Table 4
Wilcoxon signed-rank test

Region / Compared models     W (α = 0.025, critical W = 0)   W (α = 0.05, critical W = 0)
Northern region
  RSVMG vs. SVMG             1                               1
  RSVMG vs. ANN              1                               1
  RSVMG vs. regression       0                               0
Southern region
  RSVMG vs. SVMG             1                               1
  RSVMG vs. ANN              0                               0
  RSVMG vs. regression       0                               0
Central region
  RSVMG vs. SVMG             0                               0
  RSVMG vs. ANN              0                               0
  RSVMG vs. regression       0                               0
Eastern region
  RSVMG vs. SVMG             0                               0
  RSVMG vs. ANN              0                               0
  RSVMG vs. regression       0                               0

Fig. 5 shows the architecture of the proposed RSVMG model. The output of the RSVMG model, f(t), is

$$f(t) = \sum_{i=1}^{P} W^T \phi(x^T(t)) + b(t) \qquad (18)$$

Eq. (18) then replaces Eq. (1) in the SVMG algorithm to run the SVMG loop in the search for the values of the three parameters. Finally, Eq. (18) yields the forecast values f(t).

Fig. 5. Architecture of RSVMG model.
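A minimal realization of the recurrence in Eq. (18) is sketched below, assuming the Jordan-style feedback is implemented by augmenting each input pattern with past output values and feeding each forecast back as the next context input. This is an interpretation, not the authors' code; scikit-learn's SVR stands in for the SVM trained by GAs, and the parameter values are the northern-region RSVMG settings from Table 3.

```python
import numpy as np
from sklearn.svm import SVR

def fit_rsvm(loads, sigma=0.50, C=2.5e10, eps=100.0):
    """Fit an SVR whose inputs include past outputs (Jordan-style context)."""
    X, y = [], []
    for t in range(2, len(loads)):
        X.append([loads[t - 1], loads[t - 2]])   # context approximated by past values
        y.append(loads[t])
    model = SVR(kernel="rbf", gamma=1.0 / (2 * sigma**2), C=C, epsilon=eps)
    model.fit(np.array(X), np.array(y))
    return model

def forecast(model, loads, steps=4):
    """Four-steps-ahead policy: feed each forecast back as the next context input."""
    hist = list(loads)
    out = []
    for _ in range(steps):
        f = model.predict([[hist[-1], hist[-2]]])[0]
        out.append(f)
        hist.append(f)    # recurrent feedback of the forecast value
    return out
```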

3. A numerical example and experimental results

Table 2
Training and testing data sets of the proposed model

Data sets         RSVMG model   ANN model
Training data     1981–1992     1981–1996
Validation data   1993–1996     –
Testing data      1997–2000     1997–2000

This study employed Taiwan regional electricity load data to demonstrate the forecasting performance of the RSVMG model compared with the ANN and regression models proposed by Hsu and Chen [7]. The total load values from 1981 to 2000 serve as the experimental data; in total, 20 load observations are available for each region, as listed in Table 1. To compare forecasting performance on the same basis, the data must be divided into the same subsets. Therefore, the data are divided into three sets: the training data set (12 years, from 1981 to 1992), the validation data set (4 years, from 1993 to 1996) and the testing data set (4 years, from 1997 to 2000) [7]. The three data sets are listed in Table 2. Forecasting accuracy is measured by the mean absolute percentage error (MAPE), as given by Eq. (9).
In the training stage, the training data set of each region (12 load values in total) is fed into the RSVMG model, and the structural risk minimization principle is employed to minimize the training error.

Table 3
Forecasting results and parameters of the SVMG and RSVMG models

SVMG model
Region      σ       C                ε      MAPE of testing (%)
Northern    0.30    2.10 × 10^10     400    1.3981
Central     0.90    1.85 × 10^10     50     1.8146
Southern    0.50    1.00 × 10^10     80     2.0243
Eastern     7.00    0.60 × 10^10     1      2.6475

RSVMG model
Region      σ       C                ε      MAPE of testing (%)
Northern    0.50    2.50 × 10^10     100    0.7498
Central     4.10    1.95 × 10^10     10     1.3026
Southern    0.47    1.35 × 10^10     100    1.7530
Eastern     8.00    0.60 × 10^10     5      1.8955


Fig. 6. Forecasting values for different models (northern region).

When an improvement in the training error occurs, the three kernel parameters, σ, C and ε, of the RSVMG model adjusted by the GAs are employed to calculate the validation error. Then, the adjusted parameters with the minimum validation error are selected as the most appropriate parameters. Finally, a four-steps-ahead policy is used to forecast the electricity load in each region. Note that the testing data sets are not used for modeling but for examining the accuracy of the forecasting model. The kernel parameters σ, C and ε in the RSVMG model with the smallest testing MAPE value are then used as the most suitable model for this example. The forecasting results and the suitable parameters for the different regional SVMG and RSVMG models are listed in Table 3.
Table 1 also lists the MAPE values of the various forecasting models. For each region, based on the same forecasting period, the proposed RSVMG model has smaller MAPE values than the SVMG, ANN and regression models (the latter two models were proposed by Hsu and Chen [7]), particularly for the southern and eastern regions. The ANN model failed to capture the decreasing load trend from 1997 to 1998 in the southern region; similarly, it failed to capture the increasing load rate from 1998 to 2000 in the eastern region. Figs. 6–9 illustrate the actual values and the forecast values of the different models for each region.
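For reference, the testing MAPE values in Table 3 can be recomputed from the testing-period rows of Table 1 using Eq. (9). The snippet below does so for the northern-region RSVMG forecasts; because the table entries are rounded, the result (about 0.76%) only approximates the reported 0.7498%.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, Eq. (9)."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((a - f) / a)) * 100.0

# Testing-period (1997-2000) values for the northern region, from Table 1.
actual = [11222, 11642, 11981, 12924]
rsvmg = [11252, 11644, 12219, 12826]
print(f"{mape(actual, rsvmg):.4f}%")
```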

Fig. 7. Forecasting values for different models (central region).


Fig. 8. Forecasting values for different models (southern region).

Fig. 9. Forecasting values for different models (eastern region).

To verify the significance of the accuracy improvement of the RSVMG model, a statistical test, namely the Wilcoxon signed-rank test, was conducted at the 0.025 and 0.05 significance levels in one-tailed tests. The test results (Table 4) show that the RSVMG model yields improved forecast results and significantly outperforms the other three forecasting models, the only exceptions being the comparisons with the SVMG model (northern and southern regions) and the ANN model (northern region).
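The test can be reproduced on the absolute testing errors of any model pair; a minimal sketch using SciPy (an assumption, since the paper does not state its software) follows, applied to the northern-region RSVMG and SVMG forecasts from Table 1.

```python
import numpy as np
from scipy.stats import wilcoxon

# Testing-period (1997-2000) values for the northern region, from Table 1.
actual = np.array([11222, 11642, 11981, 12924], float)
rsvmg = np.array([11252, 11644, 12219, 12826], float)
svmg = np.array([11213, 11747, 12173, 12543], float)

err_rsvmg = np.abs(actual - rsvmg)
err_svmg = np.abs(actual - svmg)

# One-tailed test: are the RSVMG errors significantly smaller than the SVMG errors?
stat, p = wilcoxon(err_rsvmg, err_svmg, alternative="less")
print(stat, p)   # with n = 4 pairs, significance at these levels requires W = 0
```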

4. Conclusions
Accurate load forecasting is crucial for an energy-limited economy such as Taiwan's. The historical electricity load data of each region in Taiwan show a strong growth trend, particularly in the northern region. Although this is a common phenomenon in developing countries, overproduction or underproduction of electricity strongly influences sustainable economic development. This study introduced a novel forecasting technique, RSVMG, and investigated its feasibility for forecasting annual regional electricity loads in Taiwan.


The experimental results indicate that the RSVMG model outperformed the ANN and regression models in terms of forecasting accuracy. The superior performance of the RSVMG model has several causes. First, the RSVMG model has nonlinear mapping capabilities and thus can capture electricity load data patterns more easily than the ANN and regression models. Second, improper determination of the three parameters causes either over-fitting or under-fitting of a SVM model; in this work, the GAs determine suitable parameters for forecasting electricity load. Third, the RSVMG model performs structural risk minimization rather than merely minimizing the training errors; minimizing the upper bound on the generalization error improves the generalization performance relative to the ANN and regression models. Finally, Jordan recurrent networks can continually capture data patterns by feeding past values from the output layer back into the hidden layer.
This investigation is the first to apply a recurrent neural network and SVM model with GAs to electricity load forecasting. The empirical results obtained in this study demonstrate that the proposed model offers a valid alternative for application in the electricity industry. In the future, regional climate factors, social activities and seasonal factors can be included in the RSVMG model for forecasting electricity load. In addition, other advanced search techniques for suitable parameter selection can be combined with the RSVM to forecast electricity load.

Acknowledgements
This research was conducted with the support of the National Science Council (NSC 93-2213-E-212-001 and NSC 93-2745-H-212-001-URD). Mr. Chih-Shen Lin helped with the data analysis.

References
[1] D.W. Bunn, E.D. Farmer, Comparative Models for Electrical Load Forecasting, John Wiley & Sons, New York, 1985.
[2] D.W. Bunn, Forecasting loads and prices in competitive power markets, Proc. IEEE 88 (2000) 163–169.
[3] G.E.P. Box, G.M. Jenkins, Time Series Analysis, Forecasting and Control, Holden-Day, San Francisco, 1970.
[4] S. Saab, E. Badr, G. Nasr, Univariate modeling and forecasting of energy consumption: the case of electricity in Lebanon, Energy 26 (2001) 1–14.
[5] J.H. Park, Y.M. Park, K.Y. Lee, Composite modeling for adaptive short-term load forecasting, IEEE Trans. Power Syst. 6 (1991) 450–457.
[6] J.W. Taylor, R. Buizza, Using weather ensemble predictions in electricity demand forecasting, Int. J. Forecasting 19 (2003) 57–70.
[7] C.C. Hsu, C.Y. Chen, Regional load forecasting in Taiwan: applications of artificial neural networks, Energy Convers. Manage. 44 (2003) 1941–1949.
[8] B. Novak, Superfast autoconfiguring artificial neural networks and their application to power systems, Electr. Power Syst. Res. 35 (1995) 11–16.
[9] A.M. Al-Kandari, S.A. Soliman, M.E. El-Hawary, Fuzzy short-term electric load forecasting, Electr. Power Energy Syst. 26 (2004) 111–122.
[10] V. Vapnik, S. Golowich, A. Smola, Support vector machine for function approximation, regression estimation, and signal processing, Adv. Neural Inf. Process. Syst. 9 (1996) 281–287.
[11] B. Kermanshahi, Recurrent neural network for forecasting next 10 years loads of nine Japanese utilities, Neurocomputing 23 (1998) 125–133.
[12] M.I. Jordan, Attractor dynamics and parallelism in a connectionist sequential machine, in: Proceedings of the 8th Annual Conference of the Cognitive Science Society, Hillsdale, 1987, pp. 531–546.
[13] J.L. Elman, Finding structure in time, Cogn. Sci. 14 (1990) 179–211.
[14] R. Williams, D. Zipser, A learning algorithm for continually running fully recurrent neural networks, Neural Comput. 1 (1989) 270–280.
[15] A.C. Tsoi, A.D. Back, Locally recurrent globally feedforward networks: a critical review of architectures, IEEE Trans. Neural Netw. 5 (1994) 229–239.
[16] W.C. Jhee, J.K. Lee, Performance of neural networks in managerial forecasting, Int. J. Intell. Syst. Accounting Finance Manage. 2 (1993) 55–71.
[17] K. Vojislav, Learning and Soft Computing: Support Vector Machines, Neural Networks and Fuzzy Logic Models, The MIT Press, Massachusetts, 2001.
[18] J. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975.
[19] E. Ayaz, S. Seker, B. Barutcu, E. Turkcan, Comparisons between the various types of neural networks with the data of wide range operational conditions of the Borssele NPP, Prog. Nucl. Energy 43 (2003) 381–387.
