Back-Propagation Neural Networks in Data Forecasting
Le Hai Khoi, Tran Duc Minh
Institute of Information Technology, VAST
Acknowledgement
The authors would like to express their gratitude to Prof. Junzo WATADA, who read this work and gave us valuable comments.
CONTENT
Introduction
Steps in data forecasting modeling using neural networks
Determining the network topology
Application
Concluding remarks
Introduction
Neural networks are Universal Approximators
Finding a suitable model for a data forecasting problem is very difficult; in reality, it can often be done only by trial and error.
We may view the data forecasting problem as a kind of data-processing problem:
Data collecting and analyzing → Pre-processing → Neural network → Post-processing
Data pre-processing
Analyze and transform the values of the input and output data to emphasize the important features, and to detect the trends and the distribution of the data.
Normalize the real input and output values into the interval between the max and min of the transfer function (usually [0, 1] or [-1, 1]). The most popular methods are the following:
SV = 0.1 + ((0.9 - 0.1) / (MAX_VAL - MIN_VAL)) * (OV - MIN_VAL)
Or:
SV = TFmin + ((TFmax - TFmin) / (MAX_VAL - MIN_VAL)) * (OV - MIN_VAL)
where:
SV: Scaled Value
MAX_VAL: Max value of data
MIN_VAL: Min value of data
TFmax: Max of transformation function
TFmin: Min of transformation function
OV: Original Value
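The second, general formula above (with the first as the special case TFmin = 0.1, TFmax = 0.9) can be sketched as a pair of Python functions; the function and parameter names are illustrative, not from the original:

```python
# Min-max scaling of original values into the output range of the
# transfer function, plus the inverse transform for post-processing.
# Default bounds 0.1 and 0.9 match the slide's first formula.

def scale(ov, max_val, min_val, tf_min=0.1, tf_max=0.9):
    """Map an original value OV from [min_val, max_val] to [tf_min, tf_max]."""
    return tf_min + (tf_max - tf_min) / (max_val - min_val) * (ov - min_val)

def unscale(sv, max_val, min_val, tf_min=0.1, tf_max=0.9):
    """Inverse transform: recover the original value from a scaled one."""
    return min_val + (sv - tf_min) * (max_val - min_val) / (tf_max - tf_min)
```

Post-processing the network's outputs is then simply a call to the inverse transform with the same MIN_VAL/MAX_VAL used during pre-processing.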
The training set is usually the largest set employed in training the network. The test set, which often contains 10% to 30% as many patterns as the training set, is used to test generalization. The verification set is sized to balance the need for enough patterns for verification, training, and testing.
=> Issues 2 and 3 can only be settled by trial and error, since they depend on the problem we are dealing with.
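A minimal sketch of such a three-way split for time-ordered data; the fractions are illustrative (chosen so the test set falls in the 10–30%-of-training range the slide mentions):

```python
def split_series(data, train_frac=0.7, test_frac=0.2):
    """Split a time-ordered dataset into training, test, and verification sets.

    The order is preserved, which matters for forecasting data.
    The fractions are illustrative, not prescribed by the slides.
    """
    n = len(data)
    n_train = int(n * train_frac)
    n_test = int(n * test_frac)
    train = data[:n_train]
    test = data[n_train:n_train + n_test]
    verification = data[n_train + n_test:]
    return train, test, verification
```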
Training
Training tunes a neural network by adjusting its weights and biases in the way that is expected to reach the global minimum of the performance index (error function).
When should the training process stop?
1. Stop only when there is no noticeable progress of the error function on data, based on a randomly chosen parameter set.
2. Regularly examine the generalization ability of the network by checking it after a pre-determined number of cycles.
3. A hybrid solution: provide a monitoring tool so we can stop the training process manually, or let it run until there is no noticeable progress.
4. The result of examining the verification set is the most persuasive, since it is obtained directly from the network after training.
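The hybrid rule in item 3 can be sketched as a loop with periodic generalization checks; `train_step` and `validate` are hypothetical callables supplied by the caller, and all thresholds are illustrative:

```python
def train_with_early_stopping(train_step, validate, max_epochs=1000,
                              check_every=10, patience=5):
    """Train, periodically check the error on a held-out set, and stop
    once that error has made no noticeable progress for `patience`
    consecutive checks.

    train_step() runs one training cycle; validate() returns the current
    error on the held-out set.  Returns the best held-out error seen.
    """
    best_err = float("inf")
    bad_checks = 0
    for epoch in range(1, max_epochs + 1):
        train_step()
        if epoch % check_every == 0:
            err = validate()
            if err < best_err:
                best_err, bad_checks = err, 0   # progress: reset the counter
            else:
                bad_checks += 1                  # no progress this check
                if bad_checks >= patience:
                    break                        # stop training
    return best_err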
Implementation
This is the last step, taken after we have determined the factors related to the network topology, the choice of variables, etc.
1. Which environment: electronic circuits or a PC?
2. The interval at which to re-train the network: this may depend on time as well as on other factors related to our problem.
[Figure: a two-layer feed-forward network. Layer 1: input p (R1x1), weight matrix W1 (S1xR1), bias b1 (S1x1), net input n1 (S1x1), output a1 (S1x1). Layer 2: weight matrix W2 (S2xS1), bias b2 (S2x1), net input n2 (S2x1), output a2 (S2x1).]
where:
p: input vector (column vector)
W^i: weight matrix of the neurons in layer i (SixRi: Si rows (neurons), Ri columns (number of inputs))
b^i: bias vector of layer i (Six1: one for each of the Si neurons)
n^i: net input (Six1)
f^i: transfer (activation) function
a^i: net output (Six1)
Σ: sum function
i = 1..N, where N is the total number of layers.
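A tiny numpy example of this notation for a two-layer network; the sizes (R1 = 3 inputs, S1 = 4 hidden neurons, S2 = 2 outputs) and the choice of a logistic transfer function are illustrative:

```python
import numpy as np

# Forward pass in the slide's notation: p is R1x1, W1 is S1xR1, b1 is S1x1,
# n1 = W1 p + b1, a1 = f1(n1), and similarly for layer 2.
rng = np.random.default_rng(0)
R1, S1, S2 = 3, 4, 2

p  = rng.standard_normal((R1, 1))   # input column vector (R1x1)
W1 = rng.standard_normal((S1, R1))  # layer-1 weight matrix (S1xR1)
b1 = rng.standard_normal((S1, 1))   # layer-1 bias vector (S1x1)
W2 = rng.standard_normal((S2, S1))  # layer-2 weight matrix (S2xS1)
b2 = rng.standard_normal((S2, 1))   # layer-2 bias vector (S2x1)

f = lambda n: 1.0 / (1.0 + np.exp(-n))  # logistic transfer function

n1 = W1 @ p + b1    # net input of layer 1 (S1x1)
a1 = f(n1)          # net output of layer 1 (S1x1)
n2 = W2 @ a1 + b2   # net input of layer 2 (S2x1)
a2 = f(n2)          # network output (S2x1)

print(a2.shape)  # (2, 1)
```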
[Figure: a multilayer network with an input layer (x1, x2, ..., xn), hidden layers with weights wij, wjk, wkl and bias inputs fixed at 1, and an output layer producing the output.]
Back-propagation algorithm
Step 1: Feed the inputs forward through the network:
a^0 = p
a^(m+1) = f^(m+1)(W^(m+1) a^m + b^(m+1)), where m = 0, 1, ..., M - 1.
a = a^M
Step 2: Back-propagate the sensitivities through the network:
s^M = -2 F'^M(n^M) (t - a)
s^m = F'^m(n^m) (W^(m+1))^T s^(m+1), where m = M - 1, ..., 2, 1.
(F'^m(n^m) is the diagonal matrix of transfer-function derivatives of layer m, and t is the target output.)
Step 3: Finally, the weights and biases are updated by the following formulas:
W^m(k+1) = W^m(k) - α s^m (a^(m-1))^T
b^m(k+1) = b^m(k) - α s^m
where α is the learning rate.
(Details on the construction of the algorithm and other related issues can be found in the textbook Neural Network Design.)
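The three steps can be sketched for a small network with one logistic hidden layer and a linear output layer; the sizes, function names, and learning rate α are illustrative, not the authors' implementation:

```python
import numpy as np

def logsig(n):
    """Logistic transfer function."""
    return 1.0 / (1.0 + np.exp(-n))

def backprop_step(p, t, W1, b1, W2, b2, alpha=0.05):
    """One back-propagation iteration for a two-layer network
    (logistic hidden layer, linear output layer)."""
    # Step 1: feed forward.
    a0 = p
    a1 = logsig(W1 @ a0 + b1)
    a2 = W2 @ a1 + b2                 # linear output: f2(n) = n
    # Step 2: back-propagate the sensitivities.
    s2 = -2 * (t - a2)                # F'2(n2) = I for a linear layer
    F1 = np.diagflat((a1 * (1 - a1)).ravel())  # logsig derivative, diagonal
    s1 = F1 @ W2.T @ s2
    # Step 3: update the weights and biases.
    W2 = W2 - alpha * s2 @ a1.T
    b2 = b2 - alpha * s2
    W1 = W1 - alpha * s1 @ a0.T
    b1 = b1 - alpha * s1
    return W1, b1, W2, b2
```

Repeated calls to `backprop_step` over the training patterns drive the error (t - a)^2 toward a minimum.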
Using Momentum
This is a heuristic method based on the observation of training results.
The standard back-propagation algorithm changes the weights and biases at step k by:
ΔW^m(k) = -α s^m (a^(m-1))^T
Δb^m(k) = -α s^m
When a momentum coefficient γ is used, these updates become:
ΔW^m(k) = γ ΔW^m(k-1) - (1 - γ) α s^m (a^(m-1))^T
Δb^m(k) = γ Δb^m(k-1) - (1 - γ) α s^m
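The momentum-weighted update can be sketched as a small helper that blends the previous weight change with the fresh gradient step; the function name and default γ = 0.9 are illustrative:

```python
import numpy as np

def momentum_update(W, dW_prev, s, a_prev, alpha=0.1, gamma=0.9):
    """Apply one momentum-weighted weight update.

    Implements dW(k) = gamma * dW(k-1) - (1 - gamma) * alpha * s @ a_prev.T
    and returns (new weights, the change dW(k) to remember for next step).
    """
    dW = gamma * dW_prev - (1 - gamma) * alpha * s @ a_prev.T
    return W + dW, dW
```

Keeping the previous change `dW_prev` smooths the trajectory through the error surface, which is why momentum often helps the training escape shallow local features.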
Application
[Figure: class design of the application: a LAYER class, instantiated for the hidden and output layers, and a NEURAL NET class, related to LAYER as friend classes.]
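A minimal sketch of the two-class design named on the slide, written here in Python (which has no `friend` mechanism, so the NeuralNet simply owns and accesses its layers); all class and method names are illustrative, not the authors' code:

```python
import numpy as np

class Layer:
    """One layer of the network: its weight matrix and bias vector."""
    def __init__(self, n_inputs, n_neurons, rng):
        self.W = rng.standard_normal((n_neurons, n_inputs))
        self.b = rng.standard_normal((n_neurons, 1))

    def forward(self, a_prev):
        """a = f(W a_prev + b) with a logistic transfer function."""
        n = self.W @ a_prev + self.b
        return 1.0 / (1.0 + np.exp(-n))

class NeuralNet:
    """A chain of Layer objects; sizes[0] inputs, sizes[-1] outputs."""
    def __init__(self, sizes, seed=0):
        rng = np.random.default_rng(seed)
        self.layers = [Layer(r, s, rng) for r, s in zip(sizes, sizes[1:])]

    def predict(self, p):
        a = p
        for layer in self.layers:
            a = layer.forward(a)
        return a
```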
Concluding remarks
Identifying the major tasks is important and practical. It helps develop more accurate data forecasting systems and gives researchers deeper insight into implementing solutions based on neural networks.
In fact, the successful application of a neural network depends on three major factors:
First, the time spent choosing the variables from a large quantity of data, as well as pre-processing those data;
Second, the software should provide functions to examine the generalization ability, to help find the optimal number of neurons for the hidden layer, and to verify the network with many input sets;
Third, the developers need to consider and examine all the possibilities each time they check the network's operation with various input sets and network topologies, so that the chosen solution describes the problem exactly and gives us the most accurate forecast data.