
Literature review:

2.1 Artificial neural network: The term neural network originally referred to a network of biological neurons in the nervous system that process and transmit information. Artificial neural networks are made of interconnected artificial neurons, which may share some properties of biological neural networks. Neural Network Architectures: An Artificial Neural Network is a data-processing system consisting of a large number of simple, highly interconnected artificial neurons; the network can be described as a graph with a set V of vertices and a set E of edges.
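Purely as an illustration of this graph view (the neuron names, sizes, and weights below are hypothetical, not taken from any of the cited papers), a small feedforward network can be written down directly as a vertex set V and a weighted edge set E:

# A minimal sketch of an ANN viewed as a graph (V, E).
# All names and weights here are illustrative assumptions.

# Vertices: two inputs, two hidden neurons, one output neuron.
V = {"x1", "x2", "h1", "h2", "y"}

# Edges: (source, target) -> connection weight.
E = {
    ("x1", "h1"): 0.5, ("x1", "h2"): -0.3,
    ("x2", "h1"): 0.8, ("x2", "h2"): 0.1,
    ("h1", "y"): 1.2,  ("h2", "y"): -0.7,
}

# Each non-input neuron sums its weighted incoming edges.
def incoming(v):
    return [(src, w) for (src, dst), w in E.items() if dst == v]

print(incoming("y"))  # [('h1', 1.2), ('h2', -0.7)]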
2.2 J. A. Starzyk, and H. He [1.2]: Traditionally, the term neural network referred to a network or circuit of biological neurons, but modern usage often refers to artificial neural networks (ANNs). An ANN is a mathematical or computational model, an information-processing paradigm inspired by the way a biological nervous system, such as the brain, processes information. Although there are useful networks that contain only one layer, or even one element, most applications require networks that contain at least the three normal types of layers: input, hidden, and output. Training an Artificial Neural Network: Once a network has been structured for a particular application, that network is ready to be trained. Supervised training involves a mechanism of providing the network with the desired output, either by manually "grading" the network's performance or by providing the desired outputs together with the inputs. Current commercial network development packages provide tools to monitor how well an artificial neural network is converging on the ability to predict the right answer.
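As a minimal sketch of this layered structure and of the supervised training signal (in Python with NumPy; the layer sizes and data are illustrative assumptions, not from the cited paper), the network is presented with inputs together with their desired outputs, and convergence can be monitored through the error between the two:

import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes: 3 inputs, 4 hidden neurons, 1 output.
W1 = rng.normal(size=(3, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def forward(x):
    h = np.tanh(x @ W1)        # hidden layer activations
    return h @ W2              # output layer (linear)

# Supervised training pairs: inputs presented with desired outputs.
x = rng.normal(size=(10, 3))
desired = rng.normal(size=(10, 1))

# "Grading" the network: mean squared error against the desired output,
# the quantity a development tool would monitor for convergence.
error = np.mean((forward(x) - desired) ** 2)
print(f"current error: {error:.4f}")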

2.3 Z. Zhu, H. He, J. A. Starzyk, and C. Tseng: Architectures and Applications. ANNs have been successfully applied in the fields of mathematics, engineering, medicine, economics, meteorology, psychology, neurology, and many others. The difference between the actual and the desired output is treated as the error of the network, which is then backpropagated through the layers, from the output to the input layer, and the weights of each layer are adjusted so that with each backpropagation cycle the network gets closer and closer to producing the desired output. The authors used the Neural Network Toolbox TM 6 of the software Matlab 2008, version 7.6, to develop a three-layer feedforward neural network.
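The experiments in the paper used Matlab's Neural Network Toolbox; the sketch below is only a loose NumPy illustration of the backpropagation cycle described above, with assumed toy data, a tanh hidden layer, and a hand-picked learning rate. The output error is propagated back through the layers and each layer's weights are adjusted, so each cycle moves the network closer to the desired output.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))                  # assumed toy inputs
T = np.sin(X[:, :1])                          # assumed toy targets
W1, W2 = rng.normal(size=(2, 5)), rng.normal(size=(5, 1))
lr = 0.05                                     # assumed learning rate

for cycle in range(500):
    # Forward pass through input -> hidden -> output.
    H = np.tanh(X @ W1)
    Y = H @ W2
    err = Y - T                               # network error at the output
    # Backpropagate the error from the output layer toward the input layer.
    dW2 = H.T @ err / len(X)
    dH = (err @ W2.T) * (1 - H ** 2)          # tanh derivative
    dW1 = X.T @ dH / len(X)
    # Adjust each layer's weights; each cycle reduces the error.
    W2 -= lr * dW2
    W1 -= lr * dW1

print("final MSE:", float(np.mean(err ** 2)))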

2.4 J. A. Starzyk, M. Ding and Y. Liu: The authors propose a framework based on two separate but complementary topics: data stratification and input variable selection (IVS). They promote an ANN prediction stack in which each predictor is trained on input spaces defined by the application of IVS to different stratified sub-samples. This input-space variability leads to ANN stacks that outperform an ANN stack model trained with 100% of the available information but with a random selection of the dataset used in the early stopping method. In the case of basins in semiarid areas, the results found by Vos with echo state networks, using the same database analysed in this study, lead the authors to consider the need to include various structures in the ANN stack.
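A loose sketch of the stacking idea, not the authors' exact framework: the stratification rule, the input variable subsets, and the simple averaging below are all illustrative assumptions, with scikit-learn's MLPRegressor (and its built-in early stopping) standing in for the ANNs.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=300)

# Illustrative stratification: split the samples on the sign of one feature.
strata = [X[:, 0] < 0, X[:, 0] >= 0]
# Illustrative input variable selection: a different subset per predictor.
ivs = [[0, 1, 3], [0, 3, 5]]

stack = []
for mask, cols in zip(strata, ivs):
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                       early_stopping=True, random_state=0)
    net.fit(X[mask][:, cols], y[mask])   # each predictor sees its own
    stack.append((net, cols))            # stratified, IVS-reduced input space

# The stack's prediction: a simple average of the member predictions.
x_new = rng.normal(size=(1, 6))
pred = np.mean([net.predict(x_new[:, cols]) for net, cols in stack])
print("stacked prediction:", pred)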

2.5 Artificial neural network: This research paper argues that the artificial neural network is an extremely powerful and exciting field. It is only going to become more important and ubiquitous moving forward, and will certainly continue to have very significant impacts on modern society. Artificial neural networks (ANNs) and the more complex deep learning techniques are some of the most capable AI tools for solving very complex problems, and will continue to be developed and leveraged in the future. While a terminator-like scenario is unlikely any time soon, the progression of artificial intelligence techniques and applications will certainly be very exciting to watch.

2.6 Dynamic probability estimator for machine learning: This research paper presents an efficient algorithm for the dynamic estimation of probabilities, without division, on an unlimited number of input data. The method estimates the probabilities of the sampled data from the raw sample count while keeping the total count value constant. The accuracy of the estimate depends on the counter size rather than on the total number of data points. The estimator follows variations of the incoming data probability within a fixed window size, without an explicit implementation of the windowing technique. The total design area is very small, and all probabilities are estimated concurrently. The performance of the implementation is evaluated in terms of area efficiency and execution time. The method is suitable for highly integrated designs of artificial neural networks, where a large number of dynamic probability estimators can work concurrently.
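The paper's hardware design is not reproduced here, but a toy fixed-point sketch can convey the idea under stated assumptions: each probability is a small counter updated with shifts instead of divisions, and the shift amount fixes an implicit window, so accuracy is set by the counter size rather than by how much data has been seen.

# Toy sketch of a division-free dynamic probability estimator, in the spirit
# of the abstract but not the paper's exact hardware design. Probabilities
# are 16-bit fixed-point counters updated with shifts only; the shift amount
# sets an implicit window of about 2**SHIFT samples, with no window stored.

PBITS = 16
ONE = 1 << PBITS        # fixed-point representation of probability 1.0
SHIFT = 5               # window ~ 2**SHIFT samples; accuracy depends on the
                        # counter size, not on how many samples were seen
p = {"a": ONE // 2, "b": ONE // 2}

def observe(symbol):
    # Every symbol's estimator updates concurrently (as it would in
    # hardware): move 1/2**SHIFT of the way toward the new target, using a
    # shift in place of a division (rounding makes this slightly approximate).
    for s in p:
        target = ONE if s == symbol else 0
        p[s] += (target - p[s]) >> SHIFT

import random
random.seed(0)
for _ in range(10_000):
    observe(random.choices("ab", weights=[0.3, 0.7])[0])

print({s: round(v / ONE, 3) for s, v in p.items()})  # roughly 0.3 / 0.7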

2.7 Self Organizing Learning Array System for Power Quality Classification based on Wavelet Transform: This research paper proposes a novel approach to the classification of Power Quality (PQ) disturbances based on the wavelet transform and a self-organizing learning array (SOLAR) system. The wavelet transform is used to extract feature vectors for the various PQ disturbances based on multiresolution analysis (MRA). These feature vectors are then applied to a SOLAR system for training and testing. SOLAR has three advantages over a typical neural network: data-driven learning, local interconnections, and entropy-based self-organization. The motivation for this work is that PQ issues have recently become prevalent and of critical importance to the power industry.
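As a hedged illustration of the feature-extraction half of this pipeline (the wavelet family, decomposition depth, and the synthetic sag signal below are assumptions, not the paper's settings), PyWavelets can compute the MRA decomposition and the per-level energies that form a typical PQ feature vector:

import numpy as np
import pywt   # PyWavelets

# Assumed toy PQ signal: a 50 Hz sine with a voltage sag in the middle.
fs = 3200
t = np.arange(0, 0.2, 1 / fs)
v = np.sin(2 * np.pi * 50 * t)
v[(t > 0.08) & (t < 0.12)] *= 0.5            # the disturbance

# Multiresolution analysis: wavelet decomposition over several levels
# (db4 and 5 levels are illustrative choices, not taken from the paper).
coeffs = pywt.wavedec(v, "db4", level=5)

# One common MRA feature vector: the energy of each decomposition level,
# which would then be fed to the classifier (SOLAR in the paper).
features = np.array([np.sum(c ** 2) for c in coeffs])
print(features / features.sum())             # normalised per-level energies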

2.8 Artificial neural network: Artificial neural networks, commonly referred to as neural networks, are information- or signal-processing mathematical models based on the biological neuron. A neural network is a complex structure consisting of a group of interconnected neurons, and it provides a very exciting alternative for complex problem solving and other applications that can play an important role in today's computer science field. Neural networks are used for adaptive learning, self-organization, real-time operation, pattern recognition, and so on; these are the advantages of neural networks.

2.9 Neural network with memory cognitive function: This research paper analyses a new class of distributed memories known as R-nets. These networks are similar to Hebbian networks, but are relatively sparsely connected. R-nets use simple binary neurons and trained links between excitatory and inhibitory neurons. They use inhibition to prevent neurons not associated with a recalled pattern from firing. They are shown to implement associative learning and to have the ability to store sequential patterns, as used in networks with higher cognitive functions. The work explores the statistical properties of such networks in terms of storage capacity as a function of the R-net topology and the employed learning and recall mechanisms.
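The sketch below is not the authors' R-net; it is a generic sparse binary associative memory with clipped Hebbian links and a hard firing threshold standing in for the inhibitory population, meant only to illustrate inhibition-gated recall of a stored pattern from a partial cue (all sizes are assumed):

import numpy as np

rng = np.random.default_rng(3)
N, K = 200, 10            # assumed sizes: N binary neurons, K active per pattern

# Store a few sparse binary patterns with clipped Hebbian links.
patterns = []
W = np.zeros((N, N), dtype=int)
for _ in range(5):
    p = np.zeros(N, dtype=int)
    p[rng.choice(N, K, replace=False)] = 1
    W |= np.outer(p, p)   # links between co-active neurons
    patterns.append(p)

# Recall from a partial cue. The threshold stands in for the inhibitory
# neurons: units whose support falls below the number of active cue units
# are prevented from firing.
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[: K // 2]] = 0    # drop half of the active units
support = W @ cue
recalled = (support >= cue.sum()).astype(int)

print("overlap with stored pattern:", int(recalled @ patterns[0]), "of", K)
print("spurious active units:", int(recalled.sum() - recalled @ patterns[0]))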

2.10 NFL theorem for multiobjectives: This work confirms that the classic No Free Lunch (NFL) theorem holds over general multiobjective fitness spaces, and shows how this follows from a 'single-objective' NFL theorem. It also shows that, given any particular Pareto front, an NFL theorem holds for the set of all multiobjective problems which have that Pareto front, and that, given any 'shape' or class of Pareto fronts, an NFL theorem holds for the set of all multiobjective problems in that class. Such NFL results are cast in the typical context of absolute performance, assuming a performance metric which returns a value based on the result produced by a single algorithm. But in multiobjective search we commonly use comparative metrics, which return performance measures based non-trivially on the results from two (or more) algorithms.
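For reference, the single-objective result that these multiobjective theorems build on can be stated compactly; the notation below follows the standard Wolpert and Macready form rather than this paper's own:

\[
\sum_{f} P\!\left(d_m^y \mid f, m, a_1\right) \;=\; \sum_{f} P\!\left(d_m^y \mid f, m, a_2\right)
\]

summed over all fitness functions f : X -> Y, any two algorithms a_1 and a_2 generate every sequence of observed fitness values d_m^y equally often, so averaged over all problems no algorithm outperforms another. In the multiobjective setting the codomain becomes a product Y = Y_1 x ... x Y_k, and the paper shows that the same sum-equality holds when the sum ranges over all multiobjective problems sharing a given Pareto front.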
