A simple neuron
Takes the inputs, calculates the summation of the inputs, and compares the sum with the threshold set during the learning stage.
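The three steps above can be sketched as a short function. The threshold value used here is an illustrative assumption, not one taken from the text.

```python
# A minimal sketch of the simple neuron: sum the inputs and fire
# (output 1) only if the sum reaches the learned threshold.
def neuron(inputs, threshold=2):   # threshold=2 is an assumed value
    total = sum(inputs)            # summation of the inputs
    return 1 if total >= threshold else 0  # compare with the threshold

print(neuron([1, 0, 1]))  # 2 >= 2 -> 1
print(neuron([1, 0, 0]))  # 1 <  2 -> 0
```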
Firing Rules
A firing rule determines whether a neuron should fire for any given input pattern. Some input sets cause it to fire (the 1-taught set of patterns) and others prevent it from doing so (the 0-taught set).
Example
For example, a 3-input neuron is taught to output 1 when the input (X1,X2 and X3) is 111 or 101 and to output 0 when the input is 000 or 001.
X1:  0   0   0    0    1    1   1    1
X2:  0   0   1    1    0    0   1    1
X3:  0   1   0    1    0    1   0    1
OUT: 0   0   0/1  0/1  0/1  1   0/1  1
Example
Take the pattern 010. It differs from 000 in 1 element, from 001 in 2 elements, from 101 in 3 elements and from 111 in 2 elements. Therefore, the 'nearest' pattern is 000, which belongs to the 0-taught set. Thus the firing rule requires that the neuron should not fire when the input is 010. On the other hand, 011 is equally distant from two taught patterns that have different outputs, and thus the output stays undefined (0/1).
X1  X2  X3  OUT
0   0   0   0
0   0   1   0
0   1   0   0
0   1   1   0/1
1   0   0   0/1
1   0   1   1
1   1   0   1
1   1   1   1
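The nearest-pattern firing rule described above can be sketched directly: fire according to the taught pattern at the smallest Hamming distance, and leave the output undefined (0/1) on a tie.

```python
# Sketch of the firing rule: compare the input against the 1-taught and
# 0-taught pattern sets by Hamming distance.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def fire(pattern, taught_1, taught_0):
    d1 = min(hamming(pattern, t) for t in taught_1)
    d0 = min(hamming(pattern, t) for t in taught_0)
    if d1 < d0:
        return 1
    if d0 < d1:
        return 0
    return "0/1"   # equally distant -> output stays undefined

taught_1 = ["111", "101"]   # patterns taught to output 1
taught_0 = ["000", "001"]   # patterns taught to output 0
print(fire("010", taught_1, taught_0))  # nearest is 000 -> 0
print(fire("011", taught_1, taught_0))  # tie -> 0/1
```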
There are two main recall paradigms: nearest-neighbour recall, where the output pattern produced corresponds to the stored input pattern closest to the pattern presented; and interpolative recall, where the output pattern is a similarity-dependent interpolation of the stored patterns corresponding to the pattern presented. Yet another paradigm, a variant of associative mapping, is classification, i.e. when there is a fixed set of categories into which the input patterns are to be classified.
Supervised Learning
Supervised learning incorporates an external teacher, so that each output unit is told what its desired response to input signals ought to be. Global information may be required during the learning process. Paradigms of supervised learning include error-correction learning, reinforcement learning and stochastic learning. An important issue concerning supervised learning is the problem of error convergence, i.e. the minimisation of the error between the desired and computed unit values. The aim is to determine a set of weights which minimises the error. One well-known method, common to many learning paradigms, is least mean square (LMS) convergence.
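A minimal LMS update for a single linear unit illustrates the error-correction idea: nudge each weight in proportion to the error between the desired and computed output. The learning rate and training pair here are assumptions for illustration.

```python
# One LMS (least mean square) step for a linear unit.
def lms_step(weights, x, target, lr=0.1):   # lr=0.1 is an assumed rate
    y = sum(w * xi for w, xi in zip(weights, x))   # computed output
    error = target - y                             # desired minus computed
    return [w + lr * error * xi for w, xi in zip(weights, x)]

w = [0.0, 0.0]
for _ in range(50):                  # repeat on one training pair
    w = lms_step(w, [1.0, 1.0], 1.0)
print(w)   # weights converge so the output approaches the target 1.0
```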
Unsupervised Learning
Unsupervised learning uses no external teacher and is based upon only local information. It is also referred to as self-organisation, in the sense that it self-organises data presented to the network and detects their emergent collective properties. Another aspect of learning concerns whether or not there is a distinct phase during which the network is trained, followed by a subsequent operation phase. We say that a neural network learns off-line if the learning phase and the operation phase are distinct. A neural network learns on-line if it learns and operates at the same time. Usually, supervised learning is performed off-line, whereas unsupervised learning is performed on-line.
Back-propagation Algorithm
The back-propagation algorithm calculates how the error changes as each weight is increased or decreased slightly. It computes each EW (the rate at which the error changes as a weight is changed) by first computing the EA, the rate at which the error changes as the activity level of a unit is changed. For output units, the EA is simply the difference between the actual and the desired output.
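For output units these two quantities are easy to write down. Assuming a squared-error measure and a linear output unit (both assumptions, used here only to keep the sketch short), the EA is the actual-minus-desired difference and the EW is the EA times the input activity along that connection.

```python
# Error derivatives for an output unit, assuming squared error
# E = 0.5 * (actual - desired)**2 and a linear unit.
def output_EA(actual, desired):
    return actual - desired           # dE/dy: actual minus desired output

def EW(ea, input_activity):
    return ea * input_activity        # dE/dw = dE/dy * input activity

ea = output_EA(0.75, 1.0)   # actual 0.75, desired 1.0
print(ea)                   # -0.25
print(EW(ea, 0.5))          # -0.125
```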
Transfer Function
The behaviour of an ANN (Artificial Neural Network) depends on both the weights and the input-output function (transfer function) that is specified for the units. This function typically falls into one of three categories:
- linear (or ramp): the output activity is proportional to the total weighted input.
- threshold: the output is set at one of two levels, depending on whether the total input is greater than or less than some threshold value.
- sigmoid: the output varies continuously but not linearly as the input changes.
Sigmoid units bear a greater resemblance to real neurones than do linear or threshold units, but all three must be considered rough approximations.
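The three transfer functions can be written out in a few lines:

```python
import math

# The three unit types described above.
def linear(x):
    return x                               # output proportional to input

def threshold(x, theta=0.0):
    return 1.0 if x > theta else 0.0       # one of two levels

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))      # continuous but nonlinear

print(linear(0.5), threshold(0.5), sigmoid(0.5))
```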
Application
INTRODUCTION
Features of finger prints
Finger print recognition system
Why neural networks?
Goal of the system
Their patterns are permanent and unchanged for the whole life of a person. They are unique, and the probability that two fingerprints are alike is only 1 in 1.9x10^15. This uniqueness is used for identification of a person.
Image acquisition -> edge detection -> ridge extraction -> thinning -> feature extraction -> classification
Image acquisition: the acquired image is digitized into a 512x512 image, with each pixel assigned a particular gray-scale value (raster image). Edge detection and thinning: these preprocessing steps remove noise and enhance the image.
Preprocessing system
After the image is captured, noise is removed using edge detection, ridge extraction and thinning.
Edge detection: an edge in the image is defined where the gray-scale level changes sharply. The orientation of the ridges is also determined for each 32x32 block of pixels using the gray-scale gradient. Ridge extraction: ridges are extracted using the fact that the gray-scale value of pixels is maximal along the direction normal to the ridge orientation.
Thinning: the extracted ridges are converted into a skeletal structure in which ridges are only one pixel wide. Thinning should not: remove isolated or surrounded pixels; break connectedness; make the image shorter.
The input layer has nine perceptrons, the hidden layer has five perceptrons, and the output layer has one perceptron.
The network is trained to output 1 when the input window is centered at the minutiae and it outputs 0 when minutiae are not present.
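The 9-5-1 architecture above can be sketched as a small feedforward pass. The weights here are random placeholders and the sigmoid activation is an assumption; in the real system the weights would be trained so the output is 1 at a minutia and 0 otherwise.

```python
import math
import random

# Sketch of a 9-5-1 network: 9 inputs (a 3x3 pixel window),
# 5 hidden units, 1 output unit. Weights are random placeholders.
random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # One fully connected layer: weighted sum per unit, then sigmoid.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

W_hidden = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(5)]
W_out = [[random.uniform(-1, 1) for _ in range(5)]]

window = [0, 1, 0, 1, 1, 1, 0, 1, 0]   # assumed 3x3 input window values
hidden = layer(window, W_hidden)        # 5 hidden activations
output = layer(hidden, W_out)[0]        # single output in (0, 1)
print(0.0 < output < 1.0)               # True
```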
Classification
Fingerprints can be classified mainly into four classes, depending upon their general pattern: arch, tented arch, right loop, left loop.
Key Features
Neural network design, training, and simulation
Pattern recognition, clustering, and data-fitting tools
Supervised networks including feedforward, radial basis, LVQ, time delay, nonlinear autoregressive (NARX), and layer-recurrent
Unsupervised networks including self-organizing maps and competitive layers
Preprocessing and postprocessing for improving the efficiency of network training and assessing network performance
Modular network representation for managing and visualizing networks of arbitrary size
Routines for improving generalization to prevent overfitting
Simulink blocks for building and evaluating neural networks, and advanced blocks for control systems applications
Network Architectures
Neural Network Toolbox supports a variety of supervised and unsupervised network architectures. With the toolbox's modular approach to building networks, you can develop custom architectures for your specific problem. You can view the network architecture, including all inputs, layers, outputs, and interconnections.
Supervised Networks
Supervised neural networks are trained to produce desired outputs in response to sample inputs, making them particularly well-suited to modeling and controlling dynamic systems, classifying noisy data, and predicting future events. Neural Network Toolbox supports four types of supervised networks:

Feedforward networks have one-way connections from input to output layers. They are most commonly used for prediction, pattern recognition, and nonlinear function fitting. Supported feedforward networks include feedforward backpropagation, cascade-forward backpropagation, feedforward input-delay backpropagation, linear, and perceptron networks.

Radial basis networks provide an alternative, fast method for designing nonlinear feedforward networks. Supported variations include generalized regression and probabilistic neural networks.

Dynamic networks use memory and recurrent feedback connections to recognize spatial and temporal patterns in data. They are commonly used for time-series prediction, nonlinear dynamic system modeling, and control systems applications. Prebuilt dynamic networks in the toolbox include focused and distributed time-delay, nonlinear autoregressive (NARX), layer-recurrent, Elman, and Hopfield networks. The toolbox also supports dynamic training of custom networks with arbitrary connections.

Learning vector quantization (LVQ) is a powerful method for classifying patterns that are not linearly separable. LVQ lets you specify class boundaries and the granularity of classification.
Unsupervised Networks
Unsupervised neural networks are trained by letting the network continually adjust itself to new inputs. They find relationships within data and can automatically define classification schemes. Neural Network Toolbox supports two types of self-organizing, unsupervised networks:

Competitive layers recognize and group similar input vectors, enabling them to automatically sort inputs into categories. Competitive layers are commonly used for classification and pattern recognition.

Self-organizing maps learn to classify input vectors according to similarity. Like competitive layers, they are used for classification and pattern recognition tasks; however, they differ from competitive layers because they are able to preserve the topology of the input vectors, assigning nearby inputs to nearby categories.
Improving Generalization
Improving the network's ability to generalize helps prevent overfitting, a common problem in neural network design. Overfitting occurs when a network has memorized the training set but has not learned to generalize to new inputs. Overfitting produces a relatively small error on the training set but a much larger error when new data is presented to the network.
Neural Network Toolbox provides two solutions to improve generalization: Regularization modifies the network's performance function (the measure of error that the training process minimizes). By including the sizes of the weights and biases, regularization produces a network that performs well with the training data and exhibits smoother behavior when presented with new data. Early stopping uses two different data sets: the training set, to update the weights and biases, and the validation set, to stop training when the network begins to overfit the data.
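The early-stopping control flow can be sketched independently of any toolbox: keep training while the validation error improves, and stop once it has risen for a few consecutive checks. The error values and patience setting below are made-up illustrations.

```python
# Sketch of early stopping on a sequence of validation errors.
def early_stop(val_errors, patience=2):   # patience=2 is an assumed setting
    best, worse = float("inf"), 0
    for epoch, err in enumerate(val_errors):
        if err < best:
            best, worse = err, 0          # validation error still improving
        else:
            worse += 1
            if worse >= patience:
                return epoch              # stop: the network is overfitting
    return len(val_errors)                # never triggered: train to the end

errors = [0.9, 0.6, 0.4, 0.35, 0.37, 0.41, 0.48]   # made-up validation errors
print(early_stop(errors))  # stops at epoch 5
```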
Stock Market Prediction - The day-to-day business of the stock market is extremely complicated. Many factors weigh in whether a given stock will go up or down on any given day. Since neural networks can examine a lot of information quickly and sort it all out, they can be used to predict stock prices.
Traveling Salesman Problem - Interestingly enough, neural networks can solve the traveling salesman problem, but only to a certain degree of approximation.
Medicine, Electronic Nose, Security, and Loan Applications - These are applications still in their proof-of-concept stage, with the exception of a neural network that decides whether or not to grant a loan, which has already been used more successfully than many humans.
Miscellaneous Applications - These are some very interesting (albeit at times a little absurd) applications of neural networks.
Application principles
The solution of a problem must be kept simple.
If a problem can be solved with a small look-up table that can be easily calculated that is a more preferred solution than a complex neural network with many layers that learns with backpropagation.
Speed is crucial for computer game applications.
If possible, on-line neural network solutions should be avoided, because they consume a lot of time. Preferably, neural networks should be applied in an off-line fashion, so that the learning phase doesn't happen during game-playing time.
On-line neural network solutions should be very simple.
Many-layer neural networks should be avoided if possible. Complex learning algorithms should be avoided. If possible, a priori knowledge should be used to set the initial parameters, such that only very short training is needed for optimal performance.
All the available data should be collected about the problem.
Having redundant data is usually a smaller problem than not having the necessary data.
The neural network solution of a problem should be selected from a large enough pool of potential solutions.
Because of the nature of neural networks, it is likely that if a single solution is built, it will not be the optimal one.
If a pool of potential solutions is generated and trained, it is more likely that one which is close to the optimal one is found.
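The pool principle above can be sketched as follows: train several randomly initialised candidates and keep the one with the lowest validation error. The `train` function here is a made-up stand-in that returns a fake error score; in practice it would train a real network and report its validation error.

```python
import random

# Sketch of selecting the best network from a pool of candidates.
def train(seed):
    # Stand-in for real training: returns a pretend validation error.
    rng = random.Random(seed)
    return rng.uniform(0.1, 0.9)

pool = {seed: train(seed) for seed in range(10)}   # 10 candidate networks
best_seed = min(pool, key=pool.get)                # lowest validation error
print(best_seed, pool[best_seed])
```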
Problem
Problem analysis: variables, modularisation into sub-problems, objectives, data collection.
Example:
Training set: ~ 75% of the data Validation set: ~ 10% of the data Testing set: ~ 5% of the data
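A shuffled split along the proportions above can be sketched in a few lines (the remaining ~10% of the data is simply held back, matching the slide's figures; the dataset itself is a made-up placeholder).

```python
import random

# Sketch of a ~75% / ~10% / ~5% train/validation/test split.
random.seed(0)
data = list(range(100))    # placeholder dataset of 100 samples
random.shuffle(data)       # shuffle before splitting

train = data[:75]          # ~75% training set
valid = data[75:85]        # ~10% validation set
test  = data[85:90]        # ~5% testing set
print(len(train), len(valid), len(test))  # 75 10 5
```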
[Figure: example output plots for candidate Networks 4, 7 and 11]
Summary
Neural network solutions should be kept as simple as possible.
For the sake of the gaming speed neural networks should be applied preferably off-line.
A large data set should be collected and it should be divided into training, validation, and testing data.