Outline
UB CS ML
Neural Network Overview
Neural Network
Types of Neural Networks
Perceptron
Multi-Layer Perceptron (MLP)
Backpropagation
Learning is efficient if the weights are not very large.
A perceptron weights its attributes independently, so it can only learn lines and hyperplanes; it can be successfully used for functions such as AND and OR, but not XOR.
There is therefore a need for a network of perceptrons: a Multi-Layer Perceptron.
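This limitation can be demonstrated with a minimal sketch (the helper names `train_perceptron` and `predict` are illustrative, not from the slides): the same training rule that solves AND cannot fit XOR, because no line separates XOR's classes.

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Train a single perceptron with a bias term; returns the weights."""
    w = np.zeros(X.shape[1] + 1)                  # last entry is the bias
    Xb = np.hstack([X, np.ones((len(X), 1))])     # append bias input of 1
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (yi - pred) * xi            # perceptron update rule
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
AND = np.array([0, 0, 0, 1])
XOR = np.array([0, 1, 1, 0])

w_and = train_perceptron(X, AND)
print(predict(w_and, X))    # [0 0 0 1] — AND is linearly separable
w_xor = train_perceptron(X, XOR)
print(predict(w_xor, X))    # never matches XOR: no separating hyperplane
```

The AND run converges by the perceptron convergence theorem; the XOR run keeps cycling because no weight vector classifies all four patterns correctly.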
Input layer.
One or more hidden layers.
Output layer.
Hidden units must use non-linear activation functions; otherwise the whole network collapses to one without hidden units.
An MLP can learn any continuous mapping to any desired accuracy.
One hidden layer is sufficient for most applications.
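The need for non-linear hidden units can be checked directly: without an activation function, two stacked weight matrices compose into a single matrix, i.e. a network with no hidden layer at all. A small sketch (weights here are arbitrary random values for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 2))    # "hidden" layer weights, no activation
W2 = rng.normal(size=(1, 4))    # output layer weights
x = rng.normal(size=(2,))

deep = W2 @ (W1 @ x)            # two linear layers applied in sequence
shallow = (W2 @ W1) @ x         # one equivalent layer, hidden units gone
print(np.allclose(deep, shallow))   # True
```

Inserting a non-linearity such as the sigmoid between the two layers breaks this equivalence and gives the network its extra representational power.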
Backpropagation training involves:
The feed-forward of the input training patterns.
The calculation and backpropagation of the associated
error.
The adjustment of the weights.
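The three steps above can be sketched as a training loop for a one-hidden-layer sigmoid MLP on XOR. This is a minimal illustration, not the slides' own code; the layer sizes, learning rate, and seed are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR training patterns and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer
lr = 0.5

for _ in range(20000):
    # 1. Feed-forward of the input training patterns
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # 2. Calculation and backpropagation of the associated error
    d_out = (out - y) * out * (1 - out)      # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)       # delta at the hidden layer
    # 3. Adjustment of the weights
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # typically approaches [0, 1, 1, 0]
```

The `out * (1 - out)` and `h * (1 - h)` factors are the sigmoid's derivative evaluated at the activations, which is what makes the error deltas cheap to compute.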
Weights Update: Neurons in Output Layer
Weights Update: Neurons in Hidden Layer
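The update equations that accompanied these two slide titles did not survive extraction. As a sketch, for a sigmoid MLP trained by gradient descent on the squared error, the standard updates are (notation assumed here: learning rate $\eta$, activation $f$, target $t_k$, output $y_k$, hidden activation $z_j$, input $x_i$):

```latex
% Output-layer neuron k (weight w_{jk} from hidden unit j):
\delta_k = (t_k - y_k)\, f'(\mathrm{net}_k), \qquad
\Delta w_{jk} = \eta\, \delta_k\, z_j

% Hidden-layer neuron j (weight v_{ij} from input i):
\delta_j = f'(\mathrm{net}_j) \sum_k \delta_k\, w_{jk}, \qquad
\Delta v_{ij} = \eta\, \delta_j\, x_i
```

The hidden-layer delta reuses the output-layer deltas through the weights $w_{jk}$, which is the "backpropagation of the error" named above.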
Improving the Efficiency of Backpropagation Learning
If the weights are adjusted to very large values, the total input of a neuron can reach very high values, and because of the sigmoid activation function, the neuron will have an activation very close to zero or one.
Gradient descent or other optimization functions used can get stuck in local minima when deeper minima are close by.
Probabilistic methods can help to avoid this trap but can be slow.
The number of hidden units can be increased, leading to higher dimensionality of the error space and a smaller chance of getting trapped, but after some number of hidden units there is again a high chance of getting trapped in local minima.
Momentum can be factored into the weight update equation, such that when some data very different from the majority of training data is encountered, a small learning rate will be used, in order not to disrupt the progress.
For momentum to be used, the weight corrections from one or more previous training patterns must be preserved.
Large weight adjustments are made as long as the corrections are in the same general direction for several patterns.
The network with momentum proceeds in the direction of a combination of the current gradient and the previous direction of the weight correction, instead of only proceeding in the direction of the gradient.
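The momentum idea can be sketched in a few lines (the helper name `momentum_step` and the coefficients are illustrative, not from the slides): the update direction is a blend of the current gradient and the previous correction, so consistent gradients accumulate into larger steps.

```python
def momentum_step(w, grad, v, lr=0.1, mu=0.9):
    """One classical-momentum update: combine the current gradient
    with the preserved previous correction v."""
    v = mu * v - lr * grad     # blend previous direction and new gradient
    return w + v, v

w, v = 0.0, 0.0
steps = []
for _ in range(5):
    w, v = momentum_step(w, grad=1.0, v=v)   # same direction every time
    steps.append(v)
print(steps)   # step magnitudes grow: 0.1, 0.19, 0.271, ...
```

When successive gradients point the same way, each correction is larger than the last; when a single outlying pattern reverses the gradient, the accumulated velocity damps its effect, which is the "small learning rate" behaviour described above.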