CONTENTS
- Introduction
- Basic neuron design
- Artificial neuron model
- Algorithm associated with ANN
- Implementing ANN with VLSI
- Neuron circuit description
- Operation of VLSI artificial NN
- Advantages and disadvantages
- Applications
- Conclusion
INTRODUCTION
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurones) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurones. This is true of ANNs as well.
COMPARISON BETWEEN BIOLOGICAL AND ARTIFICIAL NEURONS

Biological     Artificial
Soma           Neuron
Dendrite       Input
Axon           Output
Synapse        Weight
ALGORITHMS

1. FEED-FORWARD PROPAGATION
- Information flow is unidirectional.
- Data is presented to the input layer and passed on to the hidden layer.
- Information is distributed.
- Processing is parallel.
- No cycles or loops are present.

[Figure: inputs feeding forward through the network to the outputs]
ALGORITHM
Consider a network of three layers. Let us use i to represent nodes in the input layer, j to represent nodes in the hidden layer, and k to represent nodes in the output layer. wij refers to the weight of the connection between node i in the input layer and node j in the hidden layer. The following equation is used to derive the output value Yj of node j:

Yj = 1 / (1 + e^(-Xj)),   where Xj = Σi wij · xi
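As a minimal numerical sketch of the equation above (assuming the logistic activation and illustrative weights, not values from the paper's circuit):

```python
import math

def neuron_output(inputs, weights):
    """Feed-forward output of one node j: Yj = 1 / (1 + e^(-Xj))."""
    x_j = sum(w * x for w, x in zip(weights, inputs))  # weighted sum Xj
    return 1.0 / (1.0 + math.exp(-x_j))               # logistic activation

# Example: two inputs into one hidden node (illustrative values)
print(neuron_output([1.0, 0.5], [0.4, -0.2]))  # Xj = 0.3, output ≈ 0.574
```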
ALGORITHM

The error of output neuron k after the activation of the network on the n-th training example (x(n), d(n)) is:

ek(n) = dk(n) − yk(n)

The network error is the sum of the squared errors of the output neurons:

E(n) = Σk ek²(n)
The total mean squared error is the average of the network errors over the N training examples:

E_AV = (1/N) Σ_{n=1..N} E(n)
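The error definitions above can be sketched numerically (the training data here is illustrative, not from the paper):

```python
def network_error(desired, actual):
    """E(n): sum of squared output-neuron errors e_k(n) = d_k(n) - y_k(n)."""
    return sum((d - y) ** 2 for d, y in zip(desired, actual))

def mean_squared_error(examples):
    """E_AV: average of E(n) over the N training examples."""
    return sum(network_error(d, y) for d, y in examples) / len(examples)

# Two training examples, each with two output neurons (illustrative values)
batch = [([1.0, 0.0], [0.8, 0.1]),   # E(1) = 0.04 + 0.01 = 0.05
         ([0.0, 1.0], [0.2, 0.7])]   # E(2) = 0.04 + 0.09 = 0.13
print(mean_squared_error(batch))     # ≈ (0.05 + 0.13) / 2 = 0.09
```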
ALGORITHM
The backpropagation weight update rule is based on the gradient descent method: it takes a step in the direction yielding the maximum decrease of the network error E. This direction is the opposite of the gradient of E.
ALGORITHM
Δwij = −η · ∂E/∂wij,   wij ← wij + Δwij
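A minimal sketch of the update rule (a behavioral illustration with a learning rate η and a numerically estimated gradient; the error function here is a toy example, not the network error above):

```python
def gradient_descent_step(weights, error_fn, eta=0.1, h=1e-6):
    """Apply delta_w = -eta * dE/dw to each weight (central-difference gradient)."""
    updated = []
    for i, w in enumerate(weights):
        bumped_up = weights[:i] + [w + h] + weights[i + 1:]
        bumped_dn = weights[:i] + [w - h] + weights[i + 1:]
        grad = (error_fn(bumped_up) - error_fn(bumped_dn)) / (2 * h)
        updated.append(w - eta * grad)  # step opposite the gradient of E
    return updated

# Toy error surface E(w) = (w0 - 1)^2 + (w1 + 2)^2, minimised at w = [1, -2]
error = lambda w: (w[0] - 1) ** 2 + (w[1] + 2) ** 2
w = [0.0, 0.0]
for _ in range(100):
    w = gradient_descent_step(w, error)
print(w)  # converges toward [1, -2]
```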
IMPLEMENTING ANN WITH VLSI
VLSI implementation offers:
- Reduced dimensions and power consumption.
- High parallelism of data processing.
- High speed of computation, compatible with the precision required.
- High error tolerance.
SYNAPSE DESIGN
A neural network learns by modifying its weights (synapses).
REQUIREMENTS:
- A storable synaptic weight.
- An alterable, non-volatile, on-chip analog memory.
- A sample-and-hold circuit.
- Capacitive storage.
- A new gate, the floating gate, is added between the original gate and the channel.
- The original gate is called the control gate.
- The floating gate is an isolated conducting island.
- Electrons placed on the floating gate stay there for a long time.

When electrons are present on the floating gate:
- They attract holes from the p-substrate.
- The accumulated holes change the conductivity of the channel.
- This state represents logic 0.

Programming is accomplished by:
i) FN tunnelling
ii) Channel hot-electron injection
WORKING PRINCIPLE
- A high voltage is applied at the control gate.
- Electrons are present at the floating gate.
- Holes accumulate at the channel region.
- The barrier is reduced and tunnelling is enhanced.
DISADVANTAGES OF FGMOS
- Defects in the tunnel oxide.
- Electrical stress due to programming.
- The oxide is difficult to scale down.

SOLUTION
- Data retention is not sensitive to oxide defects.
- Programming speed increases.
- FN tunnelling consumes less power.
- A thin tunnelling oxide is used.
- The concept is to produce a small change in weight.
- M1 and M2 operate in the triode region and are matched (equal Vth); I is proportional to (Vgs2 − Vgs1).
- Vgs1 < Vgs2 gives a positive weight; Vgs1 > Vgs2 gives a negative weight.
- An adaptive weight is achieved using the control unit.
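Behaviorally, the synapse's signed weight can be sketched as follows (an illustrative model only; the constant k lumps the device parameters and is not taken from the circuit):

```python
def synapse_current(v_gs1, v_gs2, k=1.0):
    """Behavioral model of the synapse: I proportional to (Vgs2 - Vgs1).
    k is a hypothetical lumped device constant."""
    return k * (v_gs2 - v_gs1)

# Vgs1 < Vgs2 -> positive weight; Vgs1 > Vgs2 -> negative weight
print(synapse_current(0.8, 1.0))  # positive current (positive weight)
print(synapse_current(1.0, 0.8))  # negative current (negative weight)
```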
PROGRAMMABLE SYNAPSE: DESIGN
CONTROL UNIT
Consists of:
- An 8-bit input (B0-B7).
- A 1:8 analog MUX with a 3-bit control code (S0, S1, S2).
- An enable terminal E to enable the MUX.
- The output current I is applied to the CMOS switch.
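The selection logic of the control unit can be sketched behaviorally (signal names follow the slide; modelling the analog MUX as a digital lookup, and the None-when-disabled convention, are assumptions):

```python
def analog_mux_1to8(inputs, s2, s1, s0, enable):
    """1:8 MUX: route one of the inputs B0-B7 according to the control code (S2, S1, S0).
    Returns None when the enable terminal E is low (illustrative convention)."""
    if not enable:
        return None
    select = (s2 << 2) | (s1 << 1) | s0  # 3-bit control code -> index 0..7
    return inputs[select]

levels = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # hypothetical weight levels on B0-B7
print(analog_mux_1to8(levels, 1, 0, 1, enable=True))  # code 101 -> index 5 -> 0.6
```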
SUMMING ELEMENT
- Designed in the weak-inversion region: M4 and M5 operate in the subthreshold region.
- The common-mode input range should be wide.
- Input voltage range of the p-channel pair:

Vss < Vcommon < (Vdd − Vsd(sat) − Vsg,p)
The circuit generates a current I proportional to the difference between the two Vds values of the differential pair, which realizes the tanh activation function.
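Behaviorally, the summing element plus activation amounts to the following (a sketch only; the gain parameter is a hypothetical lumped constant, not a value from the circuit):

```python
import math

def activation(weighted_sum, gain=1.0):
    """tanh activation realized by the subthreshold differential pair (behavioral model).
    gain is a hypothetical parameter lumping the pair's transconductance."""
    return math.tanh(gain * weighted_sum)

print(activation(0.0))    # 0.0: balanced differential pair, zero output
print(activation(10.0))   # saturates toward +1
print(activation(-10.0))  # saturates toward -1
```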
ADVANTAGES
- Massively parallel neural system.
- The full potential of silicon can be exploited.
- Fault tolerance.
- Low power.
- Real-world interface.
- Computational power derived from massive parallelism.
DISADVANTAGES
- Device mismatch.
- Non-ideal behaviour of the circuit.
APPLICATIONS
- Optical character recognition
- Image processing
CONCLUSION
In this paper, a programmable analog VLSI neural network has been introduced. The synaptic weight circuit was realized using four MOS transistors and the SR technique. Moreover, the summing element and the activation function were designed in the subthreshold region. The system was realized in a standard 0.8 µm CMOS technology and operated with a ±1 V power supply. It has been used for recognizing handwritten words and zip codes.
THANK YOU