
Project Title

Prediction of Reservoir Water Level Using an Artificial Neural Network

Introduction
Malaysia is currently facing an El Niño phenomenon, which is associated with a band of warm
ocean-water temperatures that periodically develops off the Pacific coast of South America.
One effect of El Niño is that water levels in Malaysian reservoirs are rapidly depleting,
and this depletion is a serious concern. This motivated the idea of predicting reservoir
water levels using an Artificial Neural Network.

Literature Review
Artificial Neural Network (ANN)
In recent years, Artificial Neural Networks (ANNs) have become extremely popular for
prediction and forecasting in a number of areas, including finance, power generation,
medicine, water resources and environmental science. Although the concept of artificial
neurons was first introduced in 1943 (McCulloch and Pitts, 1943), research into applications
of ANNs has blossomed since the introduction of the back-propagation training algorithm for
feed-forward ANNs in 1986 (Rumelhart et al., 1986a). ANNs may thus be considered a fairly
new tool in the field of prediction and forecasting.

Previous Research

Application of Artificial Neural Networks for Temperature Forecasting (Mohsen Hayati and
Zahra Mohebi, 2007)
Mohsen Hayati et al. studied an Artificial Neural Network based on an MLP that was trained
and tested using ten years (1996-2006) of meteorological data. The results show that the MLP
network has the minimum forecasting error and can be considered a good method for modelling
short-term temperature forecasting (STTF) systems.

Improving Air Temperature Prediction with Artificial Neural Networks (Brian A. Smith,
Ronald W. McClendon, and Gerrit Hoogenboom, 2007)
Brian A. Smith et al. focused on developing ANN models with reduced average prediction
error by increasing the number of distinct observations used in training, adding input
terms that describe the date of an observation, increasing the duration of prior weather
data included in each observation, and re-examining the number of hidden nodes used in the
network. Models were created to forecast air temperature at hourly intervals from one to 12
hours ahead. Each ANN model, consisting of a network architecture and a set of associated
parameters, was evaluated by instantiating and training 30 networks and calculating the mean
absolute error (MAE) of the resulting networks for a set of input patterns.
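The evaluation scheme described above can be sketched as follows. The training routine and data are placeholders (assumptions, not part of the original study); what the sketch shows is the idea of averaging MAE over several independently initialised networks of the same architecture:

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """MAE between observed and predicted values."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def evaluate_architecture(train_fn, X_train, y_train, X_test, y_test, n_runs=30):
    """Evaluate one candidate architecture by training n_runs networks
    with different random initialisations and averaging their MAE.
    train_fn is a hypothetical routine returning a fitted predictor."""
    maes = []
    for seed in range(n_runs):
        model = train_fn(X_train, y_train, seed)
        maes.append(mean_absolute_error(y_test, model.predict(X_test)))
    return float(np.mean(maes))
```

Averaging over many runs reduces the influence of any single lucky or unlucky weight initialisation, so the comparison reflects the architecture rather than one trained instance.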

Neural Network for Recognition of Handwritten Digits (Mike O'Neill)
Mike O'Neill focused on two major practical considerations: the relationship between the
amount of training data and the error rate (corresponding to the effort required to collect
training data to build a model with a given maximum error rate), and the transferability of
a model's expertise between different datasets (corresponding to its usefulness for general
handwritten digit recognition). In related work, Henry A. Rowley eliminated the difficult
task of manually selecting non-face training examples, which must be chosen to span the
entire space of non-face images. Simple heuristics, such as the fact that faces rarely
overlap in images, can further improve the accuracy. Comparisons with several other
state-of-the-art face detection systems showed that the system has comparable performance
in terms of detection and false-positive rates.

Methodology
The first stage of the methodology is to prepare the data used in the study, followed by the
initialization of the model parameters. This stage is followed by a single experiment, the
Artificial Neural Network (ANN) experiment. A performance analysis is then executed, followed
by the determination of the Artificial Neural Genius (ANG). The ANG is the ANN architecture
that outperforms all the other models in the ANN experiment.
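The final selection step can be sketched as follows; the configuration names and error values are illustrative placeholders, but the rule matches the description above: the ANG is simply the candidate with the lowest validation error.

```python
def select_ang(results):
    """Pick the Artificial Neural Genius (ANG): the candidate ANN
    configuration with the lowest validation error.
    results: dict mapping configuration name -> validation error."""
    return min(results, key=results.get)

# Hypothetical results from the performance-analysis stage:
errors = {
    "MLP-linear-scg": 0.042,
    "MLP-logistic-quasinew": 0.038,
    "RBF-gaussian": 0.051,
}
ang = select_ang(errors)  # the configuration with the smallest error
```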

Expectation
The ANN experiment has two architectures to investigate, and in turn these architectures
have many different activation functions. For simplicity, the experiments for the two
architectures are conducted separately and their results are compared.

1) The MLP experiment


The MLP network is trained using three different output-unit activation functions and three
different training algorithms. The activation functions are 'linear', 'logistic' and
'softmax'. The three training algorithms are scaled conjugate gradient (SCG), conjugate
gradient (Conjgrad) and quasi-Newton (Quasinew). The softmax activation function gives a
straight-line approximation and hence its results are redundant. The experiment is therefore
conducted with the other two activation functions and the three optimization algorithms.
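The resulting experimental grid can be sketched as below. The training routine itself is tool-specific and not shown; the sketch only enumerates the combinations to be run, with softmax excluded as stated above:

```python
from itertools import product

# Activation functions retained after dropping the redundant softmax:
activations = ["linear", "logistic"]
# The three optimisation algorithms from the experiment:
optimisers = ["scg", "conjgrad", "quasinew"]

# 2 activations x 3 optimisers = 6 MLP configurations to train and compare.
experiments = list(product(activations, optimisers))

for activation, optimiser in experiments:
    # A call such as train_mlp(activation, optimiser) would go here;
    # each run's validation error would feed the performance analysis.
    pass
```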

2) The RBF Experiment and Results


The RBF network is trained in a manner that assesses the effects of three different activation
functions. First, a network with Gaussian activations (Gaussian) is created and a two-stage
training approach is used: a small number of iterations of the Expectation-Maximization (EM)
algorithm positions the centres of the network, and then the pseudo-inverse of the design
matrix gives the second-layer weights. The second network has thin-plate-spline (TPS)
activation functions and makes use of the centres from the previous network to calculate its
second-layer weights. The third network has r^4 log r (R4logr) activation functions.
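The two-stage Gaussian training described above can be sketched in NumPy. As an assumption for brevity, a few k-means iterations stand in here for the EM centre-placement step; the second stage solves the linear output layer with the pseudo-inverse of the design matrix, as in the text:

```python
import numpy as np

def kmeans_centres(X, k, iters=5, seed=0):
    """Stage 1 (sketch): place k centres with a few k-means steps,
    standing in for the EM iterations described in the text."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def rbf_fit(X, y, k=4, width=1.0):
    """Two-stage RBF training: position centres, then solve the
    second-layer weights via the pseudo-inverse of the design matrix."""
    centres = kmeans_centres(X, k)
    d2 = ((X[:, None, :] - centres[None]) ** 2).sum(-1)
    design = np.exp(-d2 / (2.0 * width ** 2))   # Gaussian activations
    weights = np.linalg.pinv(design) @ y        # second-layer weights
    return centres, weights

def rbf_predict(X, centres, weights, width=1.0):
    d2 = ((X[:, None, :] - centres[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2)) @ weights
```

Because the second stage is a linear least-squares problem, no iterative optimisation of the output weights is needed; the TPS and R4logr variants would differ only in the activation applied to the distances in the design matrix.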
References
M. S. Khan and P. Coulibaly, "Application of Support Vector Machine in Lake Water Level
Prediction," J. Hydrologic Engrg., vol. 11, no. 3, pp. 199-205, Jun. 2006.

H. R. Maier and G. C. Dandy, "Neural networks for the prediction and forecasting of water
resources variables: a review of modelling issues and applications," Environmental
Modelling & Software, pp. 101-124, Jan. 2000.


S. Mukherjee, E. Osuna and F. Girosi, "Nonlinear Prediction of Chaotic Time Series Using
Support Vector Machines," IEEE NNSP'97, pp. 24-26.
