T. Sreedevi
E-mail: sreedu2002@yahoo.co.in
IV CSE
CONTENTS
1. What is a neural network?
2. How do human brains learn?
3. From human neurons to artificial neurons
4. What can a perceptron do?
5. Structure of a simple neuron and a complicated neuron
6. History
7. Conventional computing vs. artificial neural networks
8. Top-down and bottom-up learning
9. Architecture of neural networks
10. Applications
♦ Image recognition
♦ Medicine
♦ Electronic noses
♦ Image compression
♦ Loans and credit cards
♦ Stock market prediction
♦ Fuzzy logic
What is a neural network?
In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons.
3. From Human Neurons to Artificial Neurons
A perceptron P with inputs x and y, weights A and B, and threshold C computes:

Output of P = 1 if Ax + By > C
            = 0 if Ax + By <= C

The conditions Ax + By > C and Ax + By <= C define the two regions of the xy-plane separated by the line Ax + By = C. The perceptron tells us to which of these two regions a given point (x, y) belongs. Since such regions are separated by a single line, they are called linearly separable regions.
Some logic functions, such as the boolean AND, OR and NOT operators, are linearly separable.
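This decision rule can be written as a few lines of Python (a sketch: the weights A = B = 1 and threshold C = 1.5 are chosen by hand here so that the perceptron computes the boolean AND of its two inputs):

```python
# Perceptron decision rule: output 1 if Ax + By > C, else 0.
# With A = B = 1 and C = 1.5 (hand-picked for illustration),
# Ax + By exceeds C only when x = y = 1, i.e. boolean AND.
def perceptron(x, y, A=1.0, B=1.0, C=1.5):
    return 1 if A * x + B * y > C else 0

for x in (0, 1):
    for y in (0, 1):
        print(x, y, perceptron(x, y))   # prints 1 only for (1, 1)
```

The same rule with different weights and threshold yields OR or NOT, since those functions are also linearly separable.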
Complicated Neuron:
6. History
• 1943 --- Neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper
on how neurons might work.
• 1949 --- Donald Hebb wrote The Organisation of Behavior, pointing out that neural pathways are strengthened each time they are used.
8. Top-down learning:
Traditional computing methods work well for problems that have a definite algorithm, i.e. problems that can be solved by a set of rules. Say you want to write an algorithm to decide whether a human should go to sleep. The program may be as simple as

if (IsTired()) {
    GoToSleep();
} else {
    StayAwake();
}

But in the real world it is not that simple. If you are in class, you do not want to sleep, since you would miss valuable information. Or if you are working on an important programming assignment, you may be determined to finish it before you go to sleep. A revised version may thus look like
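a sketch along these lines (the extra condition names are invented for illustration):

```
if (IsTired() && !IsInClass() && !HasUrgentAssignment()) {
    GoToSleep();
} else {
    StayAwake();
}
```

Each new consideration adds another condition, and the top-down rule set must anticipate every such case in advance.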
Bottom-up learning:
The bottom-up approach learns by example, or by doing, rather than through a complex set of cascading if/else statements. It tries to do something in a particular fashion, and if that doesn't work, it tries something else. By keeping track of the actions that didn't work and the ones that did, the program learns. Moreover, such a program is inherently self-modifying.
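A toy Python sketch of this trial-and-error idea (the action names and the "world" are invented for illustration):

```python
import random

# The learner tries actions at random, keeps track of which ones worked
# and which didn't, and ends up preferring the former.
def learn_by_doing(actions, works, trials=50, seed=0):
    rng = random.Random(seed)
    score = {a: 0 for a in actions}            # record of what did/didn't work
    for _ in range(trials):
        a = rng.choice(actions)                # try something
        score[a] += 1 if works(a) else -1      # remember whether it worked
    return max(score, key=score.get)           # prefer what worked

# Here only "study" ever succeeds, so the learner settles on it.
best = learn_by_doing(["sleep", "study", "jog"], lambda a: a == "study")
```

No rules about studying were written down in advance; the preference emerges purely from recorded outcomes.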
9. Architecture of Neural networks:
Feed-forward networks:
• Allow signals to travel in one direction only, from input to output.
• No feedback loops.
• Used in pattern recognition problems.
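A minimal feed-forward pass can be sketched in Python (the weights here are arbitrary illustrative values):

```python
import math

# Signals travel one way only: input layer -> hidden layer -> output
# layer, with no feedback loop.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def feed_forward(inputs, hidden_weights, output_weights):
    # each row of weights belongs to one unit in the next layer
    hidden = [sigmoid(sum(w * v for w, v in zip(row, inputs)))
              for row in hidden_weights]
    return [sigmoid(sum(w * v for w, v in zip(row, hidden)))
            for row in output_weights]

out = feed_forward([1.0, 0.5],                  # input signals
                   [[0.4, -0.6], [0.3, 0.8]],   # 2 hidden units
                   [[1.0, -1.0]])               # 1 output unit
```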
10. Applications
1. Image Recognition:
All the images to be recognized are stored in a database. We build an artificial neural network and train it on all the images, using the back-propagation algorithm to propagate the error backwards and adjust the weights so that the network learns every image.
Back-propagation algorithm (for sigmoid units; o is a unit's output, t its target value, eta the learning rate):
Initialize all the weights to small random numbers.
Until satisfied, Do
    For each training example, Do
        1. Input the training example to the network and compute the network outputs.
        2. For each output unit k: delta_k = o_k(1 - o_k)(t_k - o_k)
        3. For each hidden unit h: delta_h = o_h(1 - o_h) * sum over output units k of (w_kh * delta_k)
        4. Update each network weight: w_ji = w_ji + eta * delta_j * x_ji
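A from-scratch Python sketch of back-propagation for a one-hidden-layer sigmoid network (the network size, learning rate, epoch count and XOR training set are illustrative choices; a constant input of 1.0 is appended so every unit has a bias weight):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, n_hidden=3, eta=0.5, epochs=2000, seed=1):
    """One-hidden-layer back-propagation; inputs are augmented with a
    constant 1.0 so each unit gets a bias weight."""
    rng = random.Random(seed)
    n_in = len(samples[0][0]) + 1                          # +1 bias input
    # initialize all the weights to small random numbers
    w_h = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)]
           for _ in range(n_hidden)]
    w_o = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
    errors = []                                            # squared error per epoch
    for _ in range(epochs):
        sq = 0.0
        for x, t in samples:
            x = x + [1.0]
            # 1. compute the network outputs
            h = [sigmoid(sum(w * v for w, v in zip(row, x)))
                 for row in w_h] + [1.0]                   # constant bias unit
            o = sigmoid(sum(w * v for w, v in zip(w_o, h)))
            sq += (t - o) ** 2
            # 2. error term for the output unit
            d_o = o * (1 - o) * (t - o)
            # 3. error terms for the hidden units
            d_h = [h[j] * (1 - h[j]) * w_o[j] * d_o for j in range(n_hidden)]
            # 4. update every weight
            for j in range(n_hidden + 1):
                w_o[j] += eta * d_o * h[j]
            for j in range(n_hidden):
                for i in range(n_in):
                    w_h[j][i] += eta * d_h[j] * x[i]
        errors.append(sq)
    return w_h, w_o, errors

xor = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
       ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
w_h, w_o, errors = train(xor)
```

Here errors records the summed squared error per epoch; repeated weight updates should drive it down as the network learns the training examples.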
3. Electronic nose
An electronic nose is composed of a chemical sensing system and an artificial neural network, which recognizes certain patterns of chemicals.
Different applications:
Environment: identification of toxic wastes, analysis of fuel mixtures, detection of oil leaks, monitoring air quality, monitoring factory emissions and testing ground water for odors.
Medical:
• To examine odors from the body in order to identify and diagnose problems.
• Odors in the breath, infected wounds and body fluids can all indicate problems.
• Detecting tuberculosis.
Food: inspecting food, grading food quality, fish inspection, fermentation control, monitoring cheese ripening, etc.
4. Image compression:
A neural network with a small hidden layer can be trained to reproduce its input image at its output; the hidden-layer activations then serve as a compact representation of the image, learned from past experience.
6. Stock market prediction:
The day-to-day business of the stock market is extremely complicated. Many factors bear on whether a given stock will go up or down on any given day. Since neural networks can examine a lot of information quickly and sort it all out, they can be used to predict stock prices.
7. Fuzzy logic:
Fuzzy logic is a type of logic that recognizes more than simple true and false values, and hence better simulates the real world. For example, the statement "today is sunny" might be 100% true if there are no clouds, 80% true if there are a few clouds, 50% true if it is hazy, and 0% true if it rains all day. Fuzzy logic thus takes into account concepts like "usually", "somewhat" and "sometimes".
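A small Python sketch of fuzzy truth values as degrees between 0 and 1, using the common min/max definitions of fuzzy AND/OR (the cloud-cover mapping is an invented illustration, not a standard):

```python
def sunny(cloud_cover):
    """Degree of truth of 'today is sunny', for cloud_cover in [0, 1]."""
    return max(0.0, 1.0 - cloud_cover)

# Classical AND/OR generalize to min/max over degrees of truth.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

print(sunny(0.0))                   # no clouds -> 1.0 (completely true)
print(sunny(0.25))                  # a few clouds -> 0.75
print(sunny(1.0))                   # overcast all day -> 0.0 (false)
print(fuzzy_and(sunny(0.25), 0.5))  # "sunny AND hazy" -> 0.5
```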
CONCLUSION
The computing world has a lot to gain from neural networks. Their ability to learn
by example makes them very flexible and powerful. Furthermore there is no need to
devise an algorithm in order to perform a specific task; i.e. there is no need to understand
the internal mechanisms of that task. They are also very well suited for real time systems
because of their fast response and computational times, which are due to their parallel
architecture.
Neural networks also contribute to other areas of research such as neurology and
psychology. They are regularly used to model parts of living organisms and to investigate
the internal mechanisms of the brain.
Perhaps the most exciting aspect of neural networks is the possibility that some day 'conscious' networks might be produced. A number of scientists argue that consciousness is a 'mechanical' property and that 'conscious' neural networks are a realistic possibility.
Finally, I would like to state that even though neural networks have huge potential, we will only get the best out of them when they are integrated with computing, AI, fuzzy logic and related subjects.
Bibliography:
1. Philip D. Wasserman, Neural Computing: Theory and Practice.
2. B. Kosko, Optical Bi-Directional Associative Memories.
3. http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Introduction to neural networks
4. http://www.cs.cmu.edu/~tom/mlbook.html