
Neural Network

Definition
• A neural network is a series of algorithms that endeavors to recognize
underlying relationships in a set of data through a process that mimics the
way the human brain operates.
• A neural network is a connectionist computational system.
• As Howard Rheingold said, “The neural network is this kind of technology
that is not an algorithm; it is a network that has weights on it, and you can
adjust the weights so that it learns. You teach it through trials.”
• An Artificial Neural Network is a computational nonlinear model based on the
neural structure of the brain that is able to learn to perform tasks such as
classification, prediction, decision-making, visualization and others.
Biological Neural Network vs. Artificial Neural Network (comparison figure)
Types of Neural Network
• Feedforward Neural Network – Artificial Neuron
• Radial Basis Function Neural Network
• Multilayer Perceptron
• Convolutional Neural Network
• Recurrent Neural Network (RNN) – Long Short-Term Memory
• Modular Neural Network
• Sequence-to-Sequence Models
Feedforward Neural Network – Artificial Neuron
• simplest type of ANN
• the data passes through the different input nodes till it reaches the
output node.
• also known as a front-propagated wave, which is usually achieved by
using a classifying activation function.
• unlike in more complex types, there is no backpropagation and data
moves in one direction only.
• may have a single layer or it may have hidden layers.
• the sum of the products of the inputs and their weights is
calculated. This is then fed to the output.
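The weighted-sum behaviour above can be illustrated with a minimal sketch (not from the slides); the inputs, weights, bias and step activation below are illustrative assumptions.

```python
import numpy as np

def feedforward_neuron(inputs, weights, bias=0.0):
    """Compute the weighted sum of the inputs and apply a step activation."""
    total = np.dot(inputs, weights) + bias   # sum of the products of inputs and weights
    return 1 if total >= 0 else 0            # classifying activation, fed to the output

# Illustrative example: two inputs with made-up weights
print(feedforward_neuron(np.array([0.5, 0.8]), np.array([0.4, -0.6])))
```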
Feedforward Neural Network – Artificial Neuron
• used in technologies like face recognition and computer vision, because
the target classes in these applications are hard to classify.
• is equipped to deal with data which contains a lot of noise.
• are relatively simple to maintain.
Radial Basis Function Neural Network
• A radial basis function considers the distance of any point relative to
the center.
• Such neural networks have two layers: in the inner layer, the features
are combined with the radial basis function, and the output of these
units is then taken into account when calculating the output in the next
step.
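A minimal sketch of this idea, assuming Gaussian basis functions; the centres, width and output weights below are illustrative placeholders.

```python
import numpy as np

def rbf_layer(x, centres, width=1.0):
    """Return the Gaussian response of point x relative to each centre."""
    distances = np.linalg.norm(centres - x, axis=1)       # distance of the point to each centre
    return np.exp(-(distances ** 2) / (2 * width ** 2))   # radial basis activations

centres = np.array([[0.0, 0.0], [1.0, 1.0]])              # illustrative centres
hidden = rbf_layer(np.array([0.5, 0.2]), centres)         # inner layer
output = hidden @ np.array([0.7, -0.3])                   # linear combination in the output layer
print(hidden, output)
```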
Radial Basis Function Neural Network
• The radial basis function neural network is applied extensively in power restoration systems.
Multilayer Perceptron
• has three or more layers.
• it is used to classify data that cannot be separated linearly.
• it is a type of artificial neural network that is fully connected, because
every single node in a layer is connected to each node in the
following layer.
• a multilayer perceptron uses a nonlinear activation function (mainly
hyperbolic tangent or logistic function).
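A minimal sketch of a fully connected forward pass using the hyperbolic tangent activation; the layer sizes and random weights are illustrative assumptions, not part of the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input (3 units) -> hidden (4 units)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)   # hidden (4 units) -> output (2 units)

def mlp_forward(x):
    hidden = np.tanh(x @ W1 + b1)               # nonlinear activation in the hidden layer
    return np.tanh(hidden @ W2 + b2)            # output layer

print(mlp_forward(np.array([0.2, -0.5, 0.9])))
```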
Multilayer Perceptron
• This type of neural network is applied extensively in speech
recognition and machine translation technology.
Convolutional Neural Network
• A Convolutional Neural Network uses a variation of the multilayer
perceptron.
• A CNN contains one or more convolutional layers; these layers can
either be completely interconnected or pooled.
• Before passing the result to the next layer, the convolutional layer
applies a convolution operation to the input (see the sketch below).
Because of this convolution operation, the network can be much deeper
but with far fewer parameters.
• due to this ability, CNNs show effective results in image and video
recognition, natural language processing and recommender systems.
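A minimal sketch of the convolution operation itself, written as a plain 2D sliding-window sum of products; the random image and the 3x3 filter are illustrative.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image and sum the element-wise products."""
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

image = np.random.rand(5, 5)                              # illustrative 5x5 input
kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])   # one shared filter -> few parameters
print(conv2d(image, kernel))
```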
Convolutional Neural Network
• CNNs also show great results in semantic parsing and paraphrase
detection. They are also applied in signal processing and image
classification.
• CNNs are also being used in image analysis and recognition in
agriculture, where features are extracted from satellite imagery to predict
the growth and yield of a piece of land.
Recurrent Neural Network – Long Short-Term Memory
• An RNN is a type of artificial neural network in which the output of a
particular layer is saved and fed back to the input. This helps predict
the outcome of the layer.
• The first layer is formed in the same way as it is in the feedforward
network, that is, with the sum of the products of the weights and
features. However, in subsequent layers, the recurrent process begins.
• from each time-step to the next, each node will remember some
information that it had in the previous time-step. The neural
network begins with front propagation as usual but remembers
the information it may need to use later.
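A minimal sketch of this time-step memory, where the hidden state from the previous step is fed back in alongside the current input; the weights, state size and input sequence are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
W_in, W_rec = rng.normal(size=(2, 4)), rng.normal(size=(4, 4))

def rnn_forward(sequence):
    state = np.zeros(4)                              # information remembered between time-steps
    for x in sequence:
        state = np.tanh(x @ W_in + state @ W_rec)    # combine current input with the memory
    return state

print(rnn_forward(np.array([[0.1, 0.2], [0.3, -0.1], [0.0, 0.5]])))
```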
Recurrent Neural Network – Long Short-Term Memory
• if the prediction is wrong, the system self-learns and works towards
making the right prediction during backpropagation.
• this type of neural network is very effective in text-to-speech
conversion technology.
Modular Neural Network
• has a number of different networks that function independently and
perform sub-tasks.
• as a result, a large and complex computational process can be done
significantly faster by breaking it down into independent components.
Sequence-to-Sequence Models
• a sequence to sequence model consists of two recurrent neural
networks.
• there’s an encoder that processes the input and a decoder that processes
the output (a sketch follows this list).
• the encoder and decoder can either use the same or different
parameters.
• This model is particularly applicable in those cases where the length
of the input data is not the same as the length of the output data.
• sequence-to-sequence models are applied mainly in chatbots,
machine translation and question answering systems.
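A minimal sketch of the encoder-decoder idea, assuming simple recurrent cells with random placeholder weights: the encoder compresses a variable-length input into one context vector, and the decoder unrolls it into an output of a different length.

```python
import numpy as np

rng = np.random.default_rng(2)
enc_W, enc_U = rng.normal(size=(3, 5)), rng.normal(size=(5, 5))
dec_U, dec_out = rng.normal(size=(5, 5)), rng.normal(size=(5, 3))

def encode(inputs):
    state = np.zeros(5)
    for x in inputs:
        state = np.tanh(x @ enc_W + state @ enc_U)   # encoder processes the input sequence
    return state                                     # context vector

def decode(context, output_len):
    state, outputs = context, []
    for _ in range(output_len):
        state = np.tanh(state @ dec_U)               # decoder produces the output sequence
        outputs.append(state @ dec_out)
    return np.array(outputs)

context = encode(rng.normal(size=(4, 3)))            # input of length 4...
print(decode(context, output_len=6))                 # ...output of length 6
```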
Training Algorithms for ANN
• Gradient Descent Algorithm – this is the simplest training algorithm,
used in the case of a supervised training model. If the actual output
differs from the target output, the difference, or error, is computed. The
gradient descent algorithm then changes the weights of the network in
such a manner as to minimize this error (see the sketch after this list).
• Back Propagation Algorithm – it is an extension of the gradient-based
delta learning rule. Here, after finding an error (the difference
between the desired and actual output), the error is propagated backward from
the output layer to the input layer via the hidden layer. It is used in the
case of multilayer neural networks.
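A minimal sketch of the gradient descent update for a single linear layer; the data, targets, learning rate and number of iterations are illustrative assumptions.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])   # illustrative inputs
t = np.array([1.0, 1.0, 0.0])                        # target outputs
w, lr = np.zeros(2), 0.1                             # weights and learning rate

for _ in range(100):
    y = X @ w                                        # actual output
    error = y - t                                    # difference from the target
    w -= lr * X.T @ error / len(X)                   # change weights to minimize the error
print(w)
```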
Learning Data Sets in ANN
• Training Data Set – a set of examples used for learning, that is, to fit
the parameters (i.e. weights) of the network. One epoch
comprises one full training cycle on the training set.
• Validation Data Set – a set of examples used to tune the architecture
(as opposed to the weights) of the network, for
example to choose the number of hidden units in a neural network.
• Test Data Set – a set of examples used only to assess the
performance (generalization) of a fully specified network, or to apply
it successfully in predicting outputs for known inputs. A sketch of such a split follows this list.
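A minimal sketch of such a split, assuming an illustrative 60/20/20 division of 100 examples.

```python
import numpy as np

data = np.arange(100)                 # stand-in for 100 labelled examples
rng = np.random.default_rng(3)
rng.shuffle(data)

train = data[:60]                     # used to fit the weights
validation = data[60:80]              # used to tune the architecture (e.g. hidden units)
test = data[80:]                      # used only to assess generalization
print(len(train), len(validation), len(test))
```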
ANN Architecture
• A typical Neural Network contains a large number of artificial
neurons called units arranged in a series of layers.
• A typical ANN comprises the following layers:
◦ Input Layer – it contains those units (artificial neurons) which receive input
from the outside world, which the network will learn about, recognize or
otherwise process.
◦ Output Layer – it contains units that respond with the result of the task
the network has learned.
◦ Hidden Layer – these units sit between the input and output layers. The job of
the hidden layer is to transform the input into something that the output unit can
use in some way.
Learning Techniques in ANN
• Supervised learning – the training data is input to the network and the
desired output is known; weights are adjusted until the output yields the
desired values.
• Unsupervised learning – the input data is used to train the network, but the
desired output is not known. The network classifies the input data and adjusts the
weights by feature extraction from the input data.
• Reinforcement learning – the value of the output is unknown, but the
environment provides feedback on whether the output is right or wrong. It
is a form of semi-supervised learning.
• Offline learning – the adjustment of the weight vector and threshold is
made only after the entire training set has been presented to the network. It is also
called batch learning (contrasted with online learning in the sketch after this list).
• Online learning – the adjustment of the weights and threshold is made
after presenting each training sample to the network.
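A minimal sketch contrasting one offline (batch) update with online per-sample updates for the same linear model; the data and learning rate are illustrative.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])   # illustrative inputs
t = np.array([1.0, 1.0, 0.0])                        # target outputs
lr = 0.1

# Offline / batch learning: adjust the weights once per pass over the whole set.
w_batch = np.zeros(2)
w_batch -= lr * X.T @ (X @ w_batch - t) / len(X)

# Online learning: adjust the weights after each individual training sample.
w_online = np.zeros(2)
for x, target in zip(X, t):
    w_online -= lr * (x @ w_online - target) * x

print(w_batch, w_online)
```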
Application of Neural Networks
• Pattern Recognition – probably the most common application: facial
recognition, optical character recognition etc.
• Time Series Prediction – neural networks can be used to make
predictions.
• Signal Processing – cochlear implants and hearing aids need to filter
out unnecessary noise and amplify the important sounds. Neural
networks can be trained to process an audio signal and filter it
appropriately.
• Control – self-driving cars: neural networks are often used to
manage the steering decisions of physical vehicles (or simulated ones).
Application of Neural Networks
• Soft Sensors – a soft sensor refers to the process of analyzing a
collection of many measurements. A thermometer can tell the
temperature of the air, but what if we need to know the humidity,
barometric pressure, dewpoint, air quality, air density, etc.? Neural
networks can be employed to process the input data from many
individual sensors and evaluate them as a whole.
• Anomaly Detection – because neural networks are so good at
recognizing patterns, they can also be trained to generate output
when something occurs that doesn’t fit the pattern.
Advantages of Neural Networks
• A neural network can perform tasks that a linear program cannot.
• When an element of the neural network fails, it can continue without
any problem due to its parallel nature.
• A neural network learns and does not need to be reprogrammed.
• It can be implemented in any application.
• It can be performed without any problem.
Limitations of Neural Networks
• The neural network needs training to operate.
• The architecture of a neural network is different from the
architecture of a microprocessor and therefore needs to be emulated.
• Large neural networks require high processing time.
