
UNIT I

Introduction To Neural Networks


Biological Neural Networks
Characteristics of Neural Networks
Models of Neurons

What is Neural Network?

An Artificial Neural Network (ANN) is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information.

An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system.

Why learn Neural Networks?

Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques.

A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze.

Advantages

Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
Self-Organization: An ANN can create its own organization or representation of the information it receives during learning time.
Real-Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured to take advantage of this capability.
Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to a corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

Neural networks versus conventional computers

Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem.

Neural networks process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example; they cannot be programmed to perform a specific task.

Neural networks and conventional algorithmic computers are not in competition but complement each other.

Biological Neural Networks

A neural network is man's crude way of trying to simulate the brain electronically. So to understand how a neural net works, we first must have a look at how the human brain works.

Neuron

The human brain contains about 10 billion nerve cells, or neurons.

Each neuron is connected to thousands of other neurons and communicates with them via electrochemical signals.

Signals coming into the neuron are received via junctions called synapses; these in turn are located at the ends of branches of the neuron cell called dendrites.

Characteristics of Neural Network

parallel, distributed information processing
high degree of connectivity among basic units
connections are modifiable based on experience
learning is a constant process, and usually unsupervised
learning is based only on local information
performance degrades gracefully if some units are removed

Summary

Neural networks exhibit brain-like behaviors such as:
Learning
Association
Categorization
Generalization
Feature Extraction
Optimization
Noise Immunity

Models of Neuron

An artificial neuron is simply an electronically modeled biological neuron. How many neurons are used depends on the task at hand.

Mathematical Model

McCulloch Pitts Model

The early model of an artificial neuron was introduced by Warren McCulloch, a neuroscientist, and Walter Pitts, a logician, in 1943.

The McCulloch-Pitts neural model is also known as a linear threshold gate.

An Example

Object      Purple?   Round?   Eat?
Blueberry   Yes       Yes      ?
Golf Ball   No        Yes      ?
Banana      No        No       ?

Problems

Set the threshold so that the bird eats any object that is round or purple or both.

Set the threshold so that the bird eats all the objects.
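Both threshold settings can be sketched with a minimal McCulloch-Pitts neuron. The binary attribute encodings for the three objects and the unit weights are illustrative assumptions:

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron (linear threshold gate): fires (1) when
    the weighted sum of its binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Assumed encodings: (purple?, round?), both inputs weighted 1.
blueberry = (1, 1)   # purple and round
golf_ball = (0, 1)   # round only
banana    = (0, 0)   # neither

# Threshold 1: the bird eats anything purple OR round (or both).
print([mcp_neuron(obj, (1, 1), 1) for obj in (blueberry, golf_ball, banana)])

# Threshold 0: every weighted sum is >= 0, so the bird eats everything.
print([mcp_neuron(obj, (1, 1), 0) for obj in (blueberry, golf_ball, banana)])
```

Lowering the threshold makes the neuron easier to fire; at 0 it fires for every input pattern, which solves the second problem.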

Excitatory & Inhibitory

The signals are called excitatory because they excite the neuron toward possibly sending its own signal.

So as the neuron receives more and more excitatory signals, it gets more and more excited, until the threshold is reached and the neuron sends out its own signal.

Inhibitory signals have the effect of inhibiting the neuron from sending a signal.

When a neuron receives an inhibitory signal, it becomes less excited, and so it takes more excitatory signals to reach the neuron's threshold.

In effect, inhibitory signals subtract from the total of the excitatory signals, making the neuron more relaxed and moving it away from its threshold.
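The subtractive effect described above can be sketched as follows. (This follows the text's subtractive description; note that the original McCulloch-Pitts model treats inhibition as absolute, where any active inhibitory input blocks the output.)

```python
def neuron(excitatory, inhibitory, threshold):
    """Threshold unit where inhibitory signals subtract from the
    excitatory total, moving the neuron away from its threshold."""
    total = sum(excitatory) - sum(inhibitory)
    return 1 if total >= threshold else 0

# Two excitatory signals alone reach a threshold of 2 ...
print(neuron([1, 1], [], 2))
# ... but adding one inhibitory signal makes the neuron "less excited",
# so it now takes more excitatory input to fire.
print(neuron([1, 1], [1], 2))
```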

Perceptron

The perceptron was introduced by Frank Rosenblatt in 1958. Essentially the perceptron is an MCP neuron where the inputs are first passed through some "preprocessors", which are called association units. These association units detect the presence of certain specific features in the inputs.

Perceptron

In essence, an association unit is also an MCP neuron whose output is 1 if a single specific pattern of inputs is received, and 0 for all other possible patterns of inputs.

Adaline (Adaptive linear element)

An important generalization of the perceptron training algorithm.

It was presented by Widrow and Hoff as the 'least mean square' (LMS) learning procedure, also known as the delta rule.

The main functional difference from the perceptron training rule is the way the output of the system is used in the learning rule.
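That difference can be sketched side by side: the perceptron rule computes its error from the thresholded output, while the Widrow-Hoff (LMS/delta) rule uses the raw linear output. The learning rate and example values are assumptions:

```python
def perceptron_update(w, x, target, lr=0.1):
    # Error uses the thresholded (0/1) output.
    y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0
    return [wi + lr * (target - y) * xi for wi, xi in zip(w, x)]

def lms_update(w, x, target, lr=0.1):
    # Error uses the raw linear output: the Widrow-Hoff difference.
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * (target - y) * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
print(perceptron_update(w, [1, 1], 1))  # no change: thresholded output is already 1
print(lms_update(w, [1, 1], 1))         # weights move: linear output 0 differs from 1
```

Because the LMS error is graded rather than all-or-nothing, Adaline keeps adjusting weights even when the thresholded decision is already correct.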

Neural Network Topologies

The arrangement of the processing units, connections, and pattern input/output is referred to as topology.

Connections can be made in two ways:
Interlayer
Intralayer

Feedforward & Feedback

Feed-forward ANNs allow signals to travel one way only: from input to output.

There is no feedback (loops), i.e. the output of any layer does not affect that same layer.

They are extensively used in pattern recognition. This type of organization is also referred to as bottom-up or top-down.

Feedforward Network Diagram
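A minimal sketch of such a one-way, layered organization; the layer sizes, weights, and sigmoid activation are illustrative assumptions:

```python
import math

def layer(inputs, weights, biases):
    """One feed-forward layer: each unit takes a weighted sum of the
    previous layer's outputs and applies a sigmoid activation."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

def feedforward(x, layers):
    # Signals travel one way only: each layer's output feeds the next,
    # and no layer's output loops back to itself.
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Hypothetical 2-input -> 2-hidden -> 1-output network.
net = [
    ([[0.5, -0.5], [0.3, 0.8]], [0.0, 0.0]),  # hidden layer
    ([[1.0, -1.0]], [0.0]),                   # output layer
]
print(feedforward([1.0, 0.0], net))
```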

Basic Learning Rules

Learning: The ability of the neural network (NN) to learn from its environment and to improve its performance through learning.

Learning is a process by which the free parameters of a neural network are adapted through a process of stimulation by the environment in which the network is embedded.

The type of learning is determined by the manner in which the parameter changes take place.

Learning Rules

Hebb's Law

Perceptron Learning Law

Delta Learning Law

Widrow and Hoff LMS Learning Law

Correlation Learning Law

Instar (Winner-take-all) Learning Law

Outstar Learning Law
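As a reference point for the list above, the simplest of these, Hebb's law, strengthens a weight in proportion to the product of the activities at its two ends. The learning rate and activity values below are assumptions:

```python
def hebb_update(w, x, y, lr=0.1):
    """Hebb's law: no teacher and no error term -- the weight between
    two units grows when both are active together."""
    return [wi + lr * xi * y for wi, xi in zip(w, x)]

# Hypothetical pre-synaptic activities x and post-synaptic output y.
print(hebb_update([0.0, 0.0], x=[1.0, 2.0], y=1.0))
```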

Supervised Learning

During the training session of a neural network, an input stimulus is applied that results in an output response.

This response is compared with an a priori desired output signal, the target response.

If the actual response differs from the target response, the neural network generates an error signal.

This error signal is used to calculate the adjustments that should be made to the network's synaptic weights so that the actual output matches the target output.

This error-minimization process requires a special circuit known as a supervisor or teacher, hence the name supervised learning.

A neural net is said to learn supervised if the desired output is already known.
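The whole supervised loop (apply stimulus, compare with the known target, adjust weights by the error) can be sketched with a perceptron learning a logical OR. The training set, epoch count, and learning rate are assumptions:

```python
def output(w, b, x):
    # Actual response of the network to the input stimulus.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

def train(samples, epochs=20, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            # The "teacher" compares actual and target responses.
            error = target - output(w, b, x)
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Assumed training set: logical OR, with desired outputs known in advance.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
print([output(w, b, x) for x, _ in data])   # matches the targets after training
```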

Unsupervised Learning

Unsupervised learning does not require a teacher; that is, there is no target output.

During the training session, the neural net receives at its input many different excitations, or input patterns.

The neural net arbitrarily organizes the patterns into categories.

When the stimulus is applied later, the neural net provides an output response indicating the class to which the stimulus belongs.

If a class cannot be found for the input stimulus, a new class is generated.

Even though unsupervised learning does not require a teacher, it requires guidelines to determine how it will form groups.
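One simple guideline of that kind is a distance threshold: a pattern joins an existing class if it is close enough to that class's prototype, and otherwise a new class is generated. A sketch under that assumption (the radius value is illustrative):

```python
def categorize(patterns, radius=1.0):
    """Unsupervised grouping: assign each pattern to the first class
    whose prototype lies within `radius`; otherwise create a new class."""
    prototypes, labels = [], []
    for p in patterns:
        for i, proto in enumerate(prototypes):
            dist = sum((a - b) ** 2 for a, b in zip(p, proto)) ** 0.5
            if dist <= radius:
                labels.append(i)
                break
        else:
            # No class found for this stimulus: generate a new one.
            prototypes.append(p)
            labels.append(len(prototypes) - 1)
    return labels, prototypes

labels, protos = categorize([(0, 0), (0.5, 0), (5, 5), (5.2, 5)])
print(labels)   # the two nearby pairs fall into two classes
```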

Reinforced Learning

Reinforced learning requires one or more neurons at the output layer and a teacher.

Unlike supervised learning, the teacher does not indicate how close the actual output is to the target output, but only whether the actual output is the same as the target output or not.

The teacher does not present the target output to the network, but presents only a pass/fail indication.

Contd..

The error signal generated during the training session is binary: pass or fail.

If the teacher's indication is 'fail', the network readjusts its parameters and tries again and again until it gets its output response right.

There is no clear indication of whether the output response is moving in the right direction or how close it is to the correct response.

Competitive Learning

It is another form of supervised learning that is distinctive because of its characteristic operation and architecture.

Several neurons are at the output layer. When an input stimulus is applied, each output neuron competes with the others to produce the closest output signal to the target.

Contd..

This output becomes the dominant one, and the other outputs cease producing an output signal for that stimulus.

For another stimulus, another output neuron becomes the dominant one.

Thus each output neuron is trained to respond to a different input stimulus.
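A minimal winner-take-all sketch of this process: the output unit whose weights are closest to the stimulus wins the competition, and only that winner's weight vector moves toward the stimulus. The learning rate and vectors are assumptions:

```python
def competitive_step(prototypes, x, lr=0.5):
    """One competitive-learning step: the output unit closest to the
    stimulus dominates, and only the winner's weights are updated."""
    dists = [sum((a - b) ** 2 for a, b in zip(p, x)) for p in prototypes]
    winner = dists.index(min(dists))
    prototypes[winner] = [a + lr * (b - a)
                          for a, b in zip(prototypes[winner], x)]
    return winner

units = [[0.0, 0.0], [10.0, 10.0]]
print(competitive_step(units, [1.0, 1.0]))   # unit 0 dominates this stimulus
print(competitive_step(units, [9.0, 9.0]))   # unit 1 dominates the other one
print(units)
```

Over repeated presentations, each unit drifts toward a different cluster of stimuli, which is how each output neuron becomes specialized.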