
BY

T. Sreedevi

E-mail: sreedu2002@yahoo.co.in

IV CSE

G.PULLA REDDY ENGINEERING COLLEGE, KURNOOL.


ABSTRACT

The human brain is principally composed of about 10 billion neurons, each connected to about 10,000 other neurons. An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. This paper provides historical background, explains and demonstrates various types of neural networks, and describes applications of ANNs such as those in medicine. The connection between artificial neurons and the real thing is also investigated, and the mathematical models involved are presented and demonstrated.

CONTENTS
1. What is a neural network?
2. How do human brains learn?
3. From human neurons to artificial neurons
4. What can a perceptron do?
5. Structure of simple neuron and complicated neuron
6. History
7. Conventional computing vs. artificial neural networks
8. Top-down and bottom-up learning
9. Architecture of neural networks
10. Applications
♦ Image recognition
♦ Medicine
♦ Electronic noses
♦ Image compression
♦ Loans and credit cards
♦ Stock market prediction
♦ Fuzzy logic
1. What is a neural network?

An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example.
2. How do human brains learn?

(Figures: components of a neuron; the synapse.)

In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons.
3. From human neurons to artificial neurons

The perceptron is a mathematical model of a biological neuron. While in actual neurons the dendrites receive electrical signals from the axons of other neurons, in the perceptron these signals are represented as numerical values. At the synapses between dendrites and axons, the electrical signals are modulated in various amounts. This is also modeled in the perceptron by multiplying each input value by a value called the weight.
Input vector: All the input values of each perceptron are collectively called the input
vector of that perceptron.
Weight Vector: All the weight values of each perceptron are collectively called the
weight vector of that perceptron.

4. What can a perceptron do?


Let us assume that there are two input values, x and y, for a certain perceptron P, and let the weights for x and y be A and B respectively. The weighted sum can then be represented as Ax + By.

Output of P = 1 if Ax + By > C
              0 if Ax + By <= C

Ax + By > C and Ax + By <= C are the two regions of the xy plane separated by the line Ax + By = C. The perceptron tells us to which of these regions the point (x, y) belongs. Since such regions are separated by a single line, they are called linearly separable regions.
Some logic functions, such as the boolean AND, OR and NOT operators, are linearly separable.
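As a sketch, the boolean AND of two inputs can be computed with one particular choice of weights and threshold, say A = B = 1 and C = 1.5 (these values are an illustrative assumption; any line separating (1, 1) from the other three input points would work):

```python
def perceptron(x, y, A=1.0, B=1.0, C=1.5):
    """Output 1 if the weighted sum Ax + By exceeds the threshold C, else 0."""
    return 1 if A * x + B * y > C else 0

# With A = B = 1 and C = 1.5, the only input whose weighted sum exceeds
# the threshold is (1, 1), so the perceptron computes boolean AND:
for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, "->", perceptron(x, y))
```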

Not all logical operators are linearly separable, however.

Example: the XOR function. No single line can separate the points where XOR outputs 1, (0, 1) and (1, 0), from the points where it outputs 0, (0, 0) and (1, 1).

5. Structure of a simple neuron

Training mode: the neuron is trained on when to fire.

Using mode: when a taught input pattern is detected at the input, its associated output becomes the current output.

Complicated neuron:

This is called the McCulloch and Pitts model. The neuron fires if and only if:

X1W1 + X2W2 + X3W3 + ... > T

It has the ability to adapt to a particular situation by changing its weights and/or threshold.
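The firing rule, and the idea of adapting the threshold, can be sketched as follows (the particular input, weight and threshold values are illustrative assumptions):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts model: fire (output 1) iff X1W1 + X2W2 + ... > T."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

x = [1, 0, 1]
w = [0.4, 0.9, 0.4]
print(mp_neuron(x, w, 1.0))  # weighted sum 0.8 does not exceed T = 1.0: silent
print(mp_neuron(x, w, 0.5))  # lowering the threshold adapts the neuron: it fires
```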

6. History
• 1943 --- Neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper
on how neurons might work.

• 1949 --- Donald Hebb wrote The Organization of Behavior, pointing out that neural pathways are strengthened each time they are used.

• 1950s --- Researchers first simulated hypothetical neural networks on computers.


7. Conventional computing versus artificial neural networks
In order to best illustrate these differences, one must examine two different types of learning: top-down and bottom-up.

8. Top-down learning:
Traditional computing methods work well for problems that have a definite algorithm, i.e. problems that can be solved by a set of rules. Let's say you want to write an algorithm to decide whether a human should go to sleep. The program may be as simple as

if (IsTired()) { GoToSleep(); }
else { StayAwake(); }

But the real world is not that simple. If you are in class, you do not want to sleep, since you would miss valuable information. Or if you are working on an important programming assignment, you may be determined to finish it before you go to sleep. A revised version may thus look like

if (IsTired()) {
    if (IsInClass()) { StayAwake(); }
    else if (WorkingOnProject()) {
        if (AssignmentIsDueTomorrow() && !AssignmentIsCompleted()) { StayAwake(); }
        else { GoToSleep(); }
    }
    else { GoToSleep(); }
}
else { StayAwake(); }

The exceptions keep piling up, and complex tasks may never be programmed using the top-down approach without any mistakes.

Bottom-up learning:
The bottom-up approach learns more by example, or by doing, than by a complex set of cascading if/else statements. It tries to do something in a particular fashion, and if that doesn't work, it tries something else. By keeping track of the actions that didn't work and the ones that did, the program can learn. Moreover, such a program is inherently self-modifying.
9. Architecture of neural networks:
Feed-forward networks:
• Allow signals to travel in one direction only
• No feedback loops
• Used in pattern recognition problems

Feedback networks:
• Signals can travel in either direction
• More powerful but more complicated
• They are dynamic, with changing states, until an equilibrium is obtained
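A feed-forward pass can be sketched as follows: each layer's outputs feed only the next layer, so signals travel one way. The layer sizes and weight values below are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def feed_forward(inputs, hidden_weights, output_weights):
    """One forward pass: input -> hidden -> output, with no feedback loops."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return [sigmoid(sum(w * h for w, h in zip(ws, hidden)))
            for ws in output_weights]

# 2 inputs -> 2 hidden units -> 1 output
out = feed_forward([1.0, 0.0],
                   hidden_weights=[[0.5, -0.4], [0.3, 0.8]],
                   output_weights=[[1.0, -1.0]])
print(out)  # a single activation between 0 and 1
```

A feedback network would differ by routing some outputs back in as inputs and iterating until the activations stop changing.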

10. Applications
1. Image Recognition:
All the images to be recognised are stored in a database. We build an artificial neural network and train it on all the images, using the back-propagation algorithm to propagate the error backwards and adjust the weights so that the network learns all the images.

Back-propagation algorithm:
Initialize all the weights to small random numbers.
Until satisfied, Do
    For each training example, Do
    1. Input the training example to the network and compute the network outputs.
    2. For each output unit k, compute its error term: δ_k = o_k(1 − o_k)(t_k − o_k).
    3. For each hidden unit h, compute its error term: δ_h = o_h(1 − o_h) Σ_k w_k,h δ_k.
    4. Update each network weight w_i,j: w_i,j ← w_i,j + η δ_j x_i,j.
(Figure: three different images of the same face; the trained network identifies all three as the same person.)
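The core weight-update step can be sketched in miniature. The code below trains a single sigmoid output unit with the output-unit rule of back-propagation (a deliberate simplification: there is no hidden layer, and the OR training data, learning rate and epoch count are illustrative assumptions):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=1000, eta=0.5, seed=0):
    """Train one sigmoid unit with the output-unit rule:
    delta = o(1 - o)(t - o), then w <- w + eta * delta * x."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.05, 0.05) for _ in range(3)]  # small random weights (bias + 2 inputs)
    for _ in range(epochs):
        for (x1, x2), t in examples:
            x = [1.0, x1, x2]                          # leading 1.0 is the bias input
            o = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            delta = o * (1 - o) * (t - o)              # error term for the output unit
            w = [wi + eta * delta * xi for wi, xi in zip(w, x)]
    return w

OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train(OR)

def predict(x1, x2):
    return sigmoid(w[0] + w[1] * x1 + w[2] * x2)

print([round(predict(x1, x2)) for (x1, x2), _ in OR])  # the unit has learned OR
```

The same delta-and-update pattern, extended with the hidden-unit error term, gives full back-propagation.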


2. Medicine
Artificial Neural Networks (ANN) are currently a 'hot' research area in medicine and it is
believed that they will receive extensive application to biomedical systems in the next
few years. At the moment, the research is mostly on modelling parts of the human body
and recognising diseases from various scans (e.g. cardiograms, CAT scans, ultrasonic
scans, etc.).
Neural networks are ideal in recognising diseases using scans since there is no
need to provide a specific algorithm on how to identify the disease. Neural networks
learn by example, so the details of how to recognise the disease are not needed. What is needed is a set of examples that are representative of all the variations of the disease. The quantity of examples is not as important as their quality; the examples need to be selected very carefully if the system is to perform reliably and efficiently.

3. Electronic noses
An electronic nose is composed of a chemical sensing system and an artificial neural network that recognizes certain patterns of chemicals.
Different applications:
Environment: identification of toxic wastes, analysis of fuel mixtures, detection of oil leaks, monitoring air quality, monitoring factory emissions and testing ground water for odors.
Medical:
• To examine odors from the body to identify and diagnose problems
• Odors in breath, infected wounds and body fluids can all indicate problems
• Detecting tuberculosis
Food: inspecting food, grading food quality, fish inspection, fermentation control, monitoring cheese ripening, etc.

4. Image compression:

This structure is referred to as a bottleneck-type network: it consists of an input layer and an output layer of equal sizes, with an intermediate layer of smaller size in between.

Compression ratio: the ratio of the size of the input layer to the size of the intermediate layer.

The transmitter encodes and transmits the intermediate-layer values, and the receiver decodes them.
Original image: 64 pixels × 8 bits each = 512 bits
Compressed image: 16 pixels × 3 bits each = 48 bits
i.e. roughly 1/10th of the original.
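The arithmetic above can be checked directly (only the bit-counting is computed here, not the network itself; the layer sizes and bit-widths are the ones given in the text):

```python
# Bottleneck sizes from the text: 64-unit input layer, 16-unit intermediate layer.
input_units, bits_per_input = 64, 8        # 64 pixels at 8 bits each
bottleneck_units, bits_per_unit = 16, 3    # 16 intermediate values at 3 bits each

original_bits = input_units * bits_per_input        # 512 bits
compressed_bits = bottleneck_units * bits_per_unit  # 48 bits

print(original_bits, compressed_bits)
print(round(original_bits / compressed_bits, 1))  # just over 10x compression
```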
5. Loans and credit cards
Neural networks are useful for deciding whether a bank should approve a loan or not. They analyze past failures and make current decisions based upon past experience.
6.Stock Market prediction:
The day-to-day business of the stock market is extremely complicated. Many factors influence whether a given stock will go up or down on any given day. Since neural networks can examine a lot of information quickly and sort it all out, they can be used to predict stock prices.

7.Fuzzy logic:
Fuzzy logic is a type of logic that recognizes more than simple true and false values, hence better simulating the real world. For example, the statement "today is sunny" might be 100% true if there are no clouds, 80% true if there are a few clouds, 50% true if it's hazy, and 0% true if it rains all day. Hence it takes into account concepts like usually, somewhat, and sometimes.
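This sliding truth value can be sketched as a membership function. The endpoints mirror the examples in the text; the linear interpolation between them is an illustrative assumption:

```python
def sunny_truth(cloud_cover):
    """Degree of truth, from 0.0 to 1.0, of 'today is sunny', given cloud
    cover as a fraction of the sky (0.0 = clear, 1.0 = overcast/raining)."""
    cloud_cover = min(max(cloud_cover, 0.0), 1.0)  # clamp to the valid range
    return 1.0 - cloud_cover

print(sunny_truth(0.0))  # no clouds at all: fully true
print(sunny_truth(0.2))  # a few clouds: mostly true
print(sunny_truth(0.5))  # hazy: half true
print(sunny_truth(1.0))  # rain all day: not true at all
```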

CONCLUSION
The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. Furthermore, there is no need to devise an algorithm in order to perform a specific task; that is, there is no need to understand the internal mechanisms of that task. They are also very well suited to real-time systems because of their fast response and computational times, which are due to their parallel architecture.

Neural networks also contribute to other areas of research, such as neurology and psychology. They are regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain.

Perhaps the most exciting aspect of neural networks is the possibility that some day 'conscious' networks might be produced. A number of scientists argue that consciousness is a 'mechanical' property and that 'conscious' neural networks are a realistic possibility.

Finally, I would like to state that even though neural networks have a huge potential, we will only get the best out of them when they are integrated with computing, AI, fuzzy logic and related subjects.
Bibliography:
1. Philip D. Wasserman, Neural Computing: Theory and Practice.
2. B. Kosko, Optical Bi-Directional Associative Memories.
3. http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Introduction to neural networks
4. http://www.cs.cmu.edu/~tom/mlbook.html
