
5/13/2012

NEURAL NETWORKS
Chapter 20.5

Outline:
- Introduction & Basics
- Perceptrons
- Perceptron Learning and PLR
- Beyond Perceptrons
- Two-Layered Feed-Forward Neural Networks
2001-2004 JAMES D. SKRENTNY FROM NOTES BY C. DYER, ET. AL.


WHAT IS A NEURAL NETWORK?

- Inspired by the biological nervous system.
- Composed of a large number of highly interconnected processing elements.
- It resembles the brain in two respects:
  - knowledge is acquired by the network through a learning process
  - interneuron connection strengths, known as synaptic weights, are used to store the knowledge

INTRODUCTION

Known as:
- Neural Networks (NNs)
- Artificial Neural Networks (ANNs)
- Connectionist Models
- Parallel Distributed Processing (PDP) Models

Neural Networks are a fine-grained, parallel, distributed computing model.


NNs are similar to the brain:
- knowledge is acquired experientially (learning)
- knowledge is stored in connections (weights)

The brain is composed of neuron cells:
- dendrites collect input from other neurons
- a single axon sends output to other neurons
- neurons are connected at synapses that have varying strength
- this model is greatly simplified

Constraints on human information processing:
- number of neurons: 10^11
- number of connections: 10^4 per neuron
- neuron death rate: 10^5 per day
- neuron birth rate: ~0
- connection birth rate: very slow
- performance: about 10^2 msec, i.e., about 100 sequential neuron firings for "many" tasks


Attractions of the NN approach:
- can be massively parallel
  - MIMD, optical computing, analog systems
  - interesting complex global behavior emerges from a large collection of simple processing elements
- can do complex tasks
  - pattern recognition (handwriting, facial expressions, etc.)
  - forecasting (stock prices, power grid demand)
  - adaptive control (autonomous vehicle control, robot control)
- is robust
  - can handle noisy and incomplete data, due to a fine-grained, distributed, and continuous knowledge representation

Attractions of the NN approach (continued):
- fault tolerant
  - tolerates faulty elements and bad connections
  - isn't dependent on a fixed set of elements and connections
- degrades gracefully
  - continues to function, at a lower level of performance, when portions of the network are faulty
- uses inductive learning
  - useful as a psychological model
  - useful for a wide variety of high-performance applications

BASICS OF NEURAL NETWORKS

Neural network composition:
- a large number of units
  - simple neuron-like processing elements (PEs)
- connected by a large number of links
  - directed from one unit to another
- with a weight associated with each link
  - positive or negative real values
  - the means of long-term storage
  - adjusted by learning
- and an activation level associated with each unit
  - the result of the unit's processing
  - the unit's output

Neural network configurations (represented as a graph, with nodes for units and edges for links):
- single-layered
- multi-layered
- feedback
- layer skipping
- fully connected (N^2 links)

Unit composition:
- a set of input links
  - from other units or from sensors of the environment
- a set of output links
  - to other units or to effectors of the environment
- and an activation function
  - computes the activation level based on local information: the inputs from neighbors and the weights
  - the activation is a simple function of the linear combination of its inputs

Given n inputs, the unit's activation is defined by:

  a = g((w1 * x1) + (w2 * x2) + ... + (wn * xn))

where the wi are the weights, the xi are the input values, and g() is a simple non-linear function. Let in be the weighted sum of the wi * xi over all i. Common choices for g():
- step: activation flips from 0 to 1 when in >= threshold t
- sign: activation flips from -1 to +1 when in >= 0
- sigmoid: activation transitions smoothly from 0 to 1 around in = 0; g(x) = 1/(1 + e^-x), where x is in
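These three activation functions can be written directly. A minimal sketch in Python (not part of the original slides; function names are mine):

```python
import math

def step(in_sum, threshold):
    """Flips from 0 to 1 when the weighted sum reaches the threshold."""
    return 1 if in_sum >= threshold else 0

def sign(in_sum):
    """Flips from -1 to +1 when the weighted sum reaches 0."""
    return 1 if in_sum >= 0 else -1

def sigmoid(in_sum):
    """Transitions smoothly from 0 to 1 around 0: 1/(1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-in_sum))

def activation(weights, inputs, g):
    """a = g(w1*x1 + w2*x2 + ... + wn*xn)."""
    return g(sum(w * x for w, x in zip(weights, inputs)))
```

The step function needs its threshold bound before use, e.g. `activation(w, x, lambda s: step(s, 0.75))`.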

PERCEPTRONS: LINEAR THRESHOLD UNITS (LTU)

LTUs were studied in the 1950s, mainly as single-layered nets, since an effective learning algorithm was known.

Perceptrons:
- a simple 1-layer network whose units act independently
- composed of linear threshold units (LTUs)
- a unit's inputs xi are weighted by wi and combined
- a step function computes the activation level a

5/13/2012

Threshold is just another weight (called the bias):


(w1 * x1) + (w2 * x2) + ... + (wn * xn) >= t is equivalent to (w1 * x1) + (w2 * x2) + ... + (wn * xn) + (t * -1) >= 0

PERCEPTRONS: LINEAR THRESHOLD UNITS (LTU)

-1 t x1 xn w1 a wn
15
2001-2004 JAMES D. SKRENTNY FROM NOTES BY C. DYER, ET. AL.

PERCEPTRONS: AND EXAMPLE

AND perceptron:
- inputs are 0 or 1; output is 1 when both x1 and x2 are 1
- weights: w1 = .5, w2 = .5, and a bias weight of .75 on a fixed -1 input
- .5*1 + .5*1 + .75*-1 = .25, so output = 1
- .5*1 + .5*0 + .75*-1 = -.25, so output = 0
- .5*0 + .5*0 + .75*-1 = -.75, so output = 0
- in the 2-D input space of the 4 possible data points, the threshold acts like a separating line

PERCEPTRONS: OR EXAMPLE

OR perceptron:
- inputs are 0 or 1; output is 1 when either x1 or x2 (or both) is 1
- weights: w1 = .5, w2 = .5, and a bias weight of .25 on a fixed -1 input
- .5*1 + .5*1 + .25*-1 = .75, so output = 1
- .5*1 + .5*0 + .25*-1 = .25, so output = 1
- .5*0 + .5*0 + .25*-1 = -.25, so output = 0
- again the threshold acts like a separating line in the 2-D input space
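The AND and OR examples above, with the threshold folded in as a bias weight on a fixed -1 input, can be checked directly. A sketch in Python (function names are mine):

```python
def ltu(weights, bias, inputs):
    # Output 1 when w1*x1 + ... + wn*xn + bias*(-1) >= 0.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias * -1
    return 1 if total >= 0 else 0

def and_unit(x1, x2):
    return ltu([0.5, 0.5], 0.75, [x1, x2])  # slide's AND weights

def or_unit(x1, x2):
    return ltu([0.5, 0.5], 0.25, [x1, x2])  # slide's OR weights

# Truth table over the 4 possible data points
table = [(x1, x2, and_unit(x1, x2), or_unit(x1, x2))
         for x1 in (0, 1) for x2 in (0, 1)]
```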

PERCEPTRON LEARNING

How might perceptrons learn?
- The programmer specifies:
  - the number of units in each layer
  - the connectivity between units
- So the only unknowns are the weights; perceptrons learn by changing their weights.
- Supervised learning is used:
  - the correct output is given for each training example
  - an example is a list of values for the input units; the correct output is a list of desired values for the output units

PERCEPTRON LEARNING: ALGORITHM

1. Initialize the weights in the network (usually with random values)
2. Repeat until all examples are correctly classified or some other stopping criterion is met:
   for each example e in the training set do
     a. O = neural_net_output(network, e)
     b. T = desired output, i.e., Target or Teacher's output
     c. update_weights(e, O, T)

Unlike some other learning techniques, perceptrons need to see all of the training examples multiple times. Each pass through all of the training examples is called an epoch.

PERCEPTRON LEARNING: THE RULE

How should the weights be updated? Determining this is a case of the credit assignment problem.

Perceptron Learning Rule:

  wi = wi + Dwi, where Dwi = a * xi * (T - O)

- xi is the value associated with the ith input unit
- a is a constant between 0.0 and 1.0, called the learning rate

Dwi = a * xi * (T - O)  (note that it doesn't depend on wi)

When won't the weight change (i.e., Dwi = 0)?
- correct output, i.e., T = O, gives a * xi * 0 = 0
- zero input, i.e., xi = 0, gives a * 0 * (T - O) = 0

What should happen to the weight if T = 1 and O = 0?
- Increase it, so that next time the weighted sum may exceed the threshold, causing the output to be 1.

What should happen to the weight if T = 0 and O = 1?
- Decrease it, so that next time the weighted sum may not exceed the threshold, causing the output to be 0.

PERCEPTRON LEARNING: EXAMPLE

In-class example: learning OR.
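The in-class OR example can be sketched end to end: the algorithm above with the rule Dwi = a * xi * (T - O), the bias folded in as a weight on a constant -1 input, trained until an epoch makes no mistakes. A Python sketch (names and the learning rate 0.2 are my choices):

```python
import random

def plr_train(examples, n_inputs, alpha=0.2, max_epochs=100, seed=0):
    """Perceptron learning rule: wi <- wi + alpha * xi * (T - O)."""
    rng = random.Random(seed)
    # One weight per input plus a bias weight on a fixed -1 input.
    w = [rng.uniform(-0.5, 0.5) for _ in range(n_inputs + 1)]
    for epoch in range(max_epochs):
        all_correct = True
        for xs, target in examples:
            xs_b = list(xs) + [-1]  # append the fixed bias input
            out = 1 if sum(wi * xi for wi, xi in zip(w, xs_b)) >= 0 else 0
            if out != target:
                all_correct = False
                for i in range(len(w)):
                    w[i] += alpha * xs_b[i] * (target - out)
        if all_correct:
            return w, epoch + 1  # epochs used
    return w, max_epochs

or_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, epochs = plr_train(or_examples, n_inputs=2)
```

Since OR is linearly separable, the convergence theorem on a later slide guarantees this loop terminates with a correct classifier.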

PERCEPTRON LEARNING RULE (PLR)

- PLR is also called the Delta Rule or the Widrow-Hoff Rule.
- PLR is a variant of the rule proposed by Rosenblatt in 1960.
- PLR is based on an idea of Hebb's:
  - the strength of a connection between two units should be adjusted in proportion to the product of their simultaneous activations
  - the product is used as a means of measuring the correlation between the values output by the two units

- PLR is a "local" learning rule: only local information in the network is needed to update a weight.
- PLR performs gradient descent in "weight space": the rule iteratively adjusts all of the weights so that for each training example the error is monotonically non-increasing, i.e., approximately decreasing.

- The Perceptron Convergence Theorem says that if a set of examples is learnable, then PLR will find the necessary weights:
  - in a finite number of steps
  - independent of the initial weights
- In other words, if a solution exists, PLR's gradient descent is guaranteed to find an optimal solution (i.e., 100% correct classification) for any 1-layer neural network.

LIMITS OF PERCEPTRON LEARNING

What are the limitations of perceptron learning?
- A single perceptron's output is determined by the separating hyperplane defined by (w1 * x1) + (w2 * x2) + ... + (wn * xn) = t.
- So perceptrons can only learn functions that are linearly separable (in input space).

PERCEPTRONS: XOR EXAMPLE

XOR perceptron?
- inputs are 0 or 1; output is 1 when x1 is 1 and x2 is 0, or x1 is 0 and x2 is 1
- in the 2-D input space with 4 possible data points, how do you separate the positives from the negatives using a straight line? You can't: no choice of weights works, so XOR is not linearly separable.

PERCEPTRON LEARNING SUMMARY

In general, the goal of learning in a perceptron is to adjust the separating hyperplane that divides an n-dimensional input space, where n is the number of input units, by modifying the weights (and biases) until all of the examples with target value 1 are on one side of the hyperplane and all of the examples with target value 0 are on the other side.

BEYOND PERCEPTRONS

Perceptrons are too weak as a computing model because they can only learn linearly separable functions. To enhance computational ability, general neural networks have multiple layers of units. The challenge is to find a learning rule that works for multi-layered networks.

A feed-forward multi-layered network computes a function of the inputs and the weights.
- Input units (on the left or bottom): activation is determined by the environment.
- Output units (on the right or top): activation is the result.
- Hidden units (between input and output units): cannot be observed directly.

Perceptrons have input units followed by one layer of output units, i.e., no hidden units.

- NNs with one hidden layer of a sufficient number of units can compute functions associated with convex classification regions in input space.
- NNs with two hidden layers are universal computing devices, although the complexity of the function is limited by the number of units:
  - if too few, the network will be unable to represent the function
  - if too many, the network will memorize examples and is subject to overfitting

TWO-LAYERED FEED-FORWARD NEURAL NETWORKS

(network figure: inputs I1..I6 with activations ak = Ik; weights Wk,j on links from input to hidden units; hidden unit activations aj; weights Wj,i on links from hidden to output units; output unit activations ai, with a1 = O1 and a2 = O2)

- Two-layered: count only the layers whose units compute an activation (here Layer 1 = hidden units, Layer 2 = output units).
- Feed-forward: each unit in a layer connects forward to all of the units in the next layer; there are no cycles:
  - no links within the same layer
  - no links to prior layers
  - no skipping of layers

NEURAL NETWORKS
Chapter 20.5 (continued)

Outline:
- Two-Layered Feed-Forward Neural Networks
- Solving XOR
- Learning in Multi-Layered Feed-Forward NNs
- Back-Propagation
- Computing the Change for Weights
- Other Issues & Applications

CONQUERING XOR

An XOR network:
- inputs are 0 or 1; output is 1 when I1 is 1 and I2 is 0, or I1 is 0 and I2 is 1
- each unit in the hidden layer acts like a perceptron learning a separating line:
  - the top hidden unit acts like an OR perceptron
  - the bottom hidden unit acts like an AND perceptron

(network figure: hidden OR unit with weights .5, .5 and bias .25; hidden AND unit with weights .5, .5 and bias .75; output unit O combining the OR unit with weight .5 and the AND unit with weight -.5)

The output unit combines these separating lines by intersecting the "half-planes" they define: when the OR unit outputs 1 and the AND unit outputs 0, the output O is 1.
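This two-layer construction can be checked by a direct forward pass. A Python sketch using the figure's weights; note the output unit's bias is not fully legible in the slide, so the value .25 used here is my assumption (any bias in (0, .5] makes the net compute XOR):

```python
def ltu(weighted_sum, bias):
    # Step activation with the threshold written as a bias.
    return 1 if weighted_sum - bias >= 0 else 0

def xor_net(i1, i2):
    h_or  = ltu(0.5 * i1 + 0.5 * i2, 0.25)      # OR separating line
    h_and = ltu(0.5 * i1 + 0.5 * i2, 0.75)      # AND separating line
    return ltu(0.5 * h_or - 0.5 * h_and, 0.25)  # fires when OR and not AND

truth_table = [(i1, i2, xor_net(i1, i2))
               for i1 in (0, 1) for i2 in (0, 1)]
```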

LEARNING IN MULTI-LAYERED FEED-FORWARD NNs

PLR doesn't work in multi-layered feed-forward nets, since the desired values for the hidden units aren't known. We must again solve the credit assignment problem:
- determine which weights to credit/blame for the output error of the network
- determine which weights in the network should be updated, and how to update them

Back-propagation:
- a method for learning the weights in these networks
- generalizes PLR
- (re)discovered by Rumelhart, Hinton, and Williams in 1986

The back-propagation approach:
- a gradient-descent algorithm that minimizes the error on the training data
- errors are propagated through the network, starting at the output units and working backwards towards the input units

BACK-PROPAGATION ALGORITHM

1. Initialize the weights in the network (usually with random values, as in PLR)
2. Repeat until all examples are correctly classified or another stopping criterion is met:
   for each example e in the training set do
     a. forward pass: Oi = neural_net_output(network, e)
     b. Ti = desired output, i.e., Target or Teacher's output
     c. calculate the error (Ti - Oi) at the output units
     d. backward pass:
        i.  compute Dwj,i for all weights from the hidden layer to the output layer
        ii. compute Dwk,j for all weights from the inputs to the hidden layer

COMPUTING THE CHANGE FOR WEIGHTS

Back-propagation performs a gradient descent search in weight space to learn the network weights. Given a network with n weights:
- each configuration of weights is a vector, W, of length n that defines an instance of the network
- W can be considered a point in an n-dimensional weight space, where each dimension is associated with one of the connections in the network

Given a training set of m examples:
- each network defined by the vector W has an associated total error, E, on all of the training data
- E, the sum of squared errors (SSE), is defined as:

    E = E1 + E2 + ... + Em

  where each Ei is the squared error of the network on the ith training example

Given n output units in the network:

    Ei = ((T1 - O1)^2 + (T2 - O2)^2 + ... + (Tn - On)^2) / 2

where Ti is the target value for the ith output unit and Oi is the network's output value for the ith output unit.
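The two error definitions above translate directly into code. A small Python sketch (function names are mine):

```python
def example_error(targets, outputs):
    """Ei = ((T1-O1)^2 + ... + (Tn-On)^2) / 2 for one example."""
    return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / 2.0

def total_error(per_example):
    """E = E1 + E2 + ... + Em over (targets, outputs) pairs."""
    return sum(example_error(t, o) for t, o in per_example)

# Two examples, two output units each
e = total_error([([1, 0], [0.8, 0.3]),
                 ([0, 1], [0.2, 0.9])])
```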

The search can be visualized as an error surface in weight space:
- each point in the w1-w2 plane is a weight configuration
- each point has a total error E
- the 2-D surface represents the errors for all weight configurations
- the goal is to find a lower point on the error surface (a local minimum)
- gradient descent follows the direction of steepest descent, i.e., where E decreases the most

The gradient is defined as:

    Gradient_E = [dE/dw1, dE/dw2, ..., dE/dwn]

Then change the ith weight by:

    Dwi = -a * dE/dwi

Computing the derivatives for the gradient direction requires an activation function that is continuous, differentiable, non-decreasing, and easily computed:
- can't use the step function as in LTUs
- instead use the sigmoid function 1/(1 + e^-x), where x is in, the weighted sum of the inputs
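One reason the sigmoid is "easily computed" is that its derivative has the closed form g'(x) = g(x) * (1 - g(x)), which the update rules on the next slides exploit. A quick Python check of that identity against a numerical derivative:

```python
import math

def g(x):
    """Sigmoid activation: 1/(1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

h = 1e-6
for x in (-2.0, 0.0, 1.5):
    numeric = (g(x + h) - g(x - h)) / (2 * h)  # central difference
    closed = g(x) * (1 - g(x))                 # closed-form derivative
    assert abs(numeric - closed) < 1e-6
```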

COMPUTING THE CHANGE FOR WEIGHTS: TWO-LAYER NEURAL NETWORK

For weights between hidden and output units, the generalized PLR for sigmoid activation is:

    Dwj,i = -a * dE/dwj,i
          = -a * (-aj * (Ti - Oi) * g'(ini))
          = a * aj * (Ti - Oi) * Oi * (1 - Oi)

where:
- a is the learning rate parameter
- wj,i is the weight on the link from hidden unit j to output unit i
- aj is the activation (i.e., output) of hidden unit j
- Ti is the teacher output for output unit i
- Oi is the actual output of output unit i
- g' is the derivative of the activation function, which for the sigmoid is g' = g(1 - g)

Example: for the weight w1,2 from hidden unit 1 to output unit 2,

    Dw1,2 = a * a1 * (T2 - O2) * O2 * (1 - O2)

i.e., the product of the learning rate a, the activation a1 along the link, the error (T2 - O2), and O2 * (1 - O2) = g'(in2).

For weights between inputs and hidden units:
- we don't have teacher-supplied correct output values for the hidden units
- so we infer the error at these units by "back-propagating":
  - the error at an output unit is "distributed" back to each of the hidden units in proportion to the weight of the connection between them
  - the total error is distributed to all of the hidden units that contributed to that error
- each hidden unit accumulates some error from each of the output units to which it is connected
For weights between inputs and hidden units, the update is:

    Dwk,j = -a * dE/dwk,j
          = -a * (-Ik * g'(inj) * S( wj,i * (Ti - Oi) * g'(ini) ))
          = a * Ik * aj * (1 - aj) * S( wj,i * (Ti - Oi) * Oi * (1 - Oi) )

where:
- a is the learning rate parameter
- Ik is input value k
- wk,j is the weight on the link from input k to hidden unit j
- wj,i is the weight on the link from hidden unit j to output unit i
- aj is the activation (i.e., output) of hidden unit j
- Ti is the teacher output for output unit i
- Oi is the actual output of output unit i
- S() sums over the output units i

Example: for the weight w1,2 from input 1 to hidden unit 2,

    Dw1,2 = a * I1 * a2 * (1 - a2) * S( w2,i * (Ti - Oi) * Oi * (1 - Oi) )

i.e., the product of the learning rate a, the activation I1 along the link, a2 * (1 - a2) = g'(in2), and the error distributed back from the outputs, where

    S( w2,i * (Ti - Oi) * Oi * (1 - Oi) )
      = w2,1 * (T1 - O1) * O1 * (1 - O1) + w2,2 * (T2 - O2) * O2 * (1 - O2)

EXAMPLE: COMMENTS ABOUT A4

1. Initialize the weights in the network (usually with random values)
2. Repeat until all examples are correctly classified or another stopping criterion is met:
   for each example e in the training set do
     a. forward pass: Oi = neural_net_output(network, e) (compute the weighted sum, then the sigmoid activation)
     b. Ti = desired output, i.e., Target or Teacher's output
     c. calculate the error (Ti - Oi) at the output units
     d. backward pass:
        i.  compute Dwj,i = a * aj * (Ti - Oi) * Oi * (1 - Oi)
        ii. compute Dwk,j = a * Ik * aj * (1 - aj) * S( wj,i * (Ti - Oi) * Oi * (1 - Oi) )
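The full algorithm can be sketched in Python using exactly the two update formulas above, with biases folded in as weights on a constant -1 input as earlier in the slides. The network shape, learning rate, and epoch count here are my choices; trained on XOR, which a single perceptron cannot learn:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_train(examples, n_in, n_hid, n_out, alpha=0.5, epochs=5000, seed=1):
    """Two-layer feed-forward net trained with the slide's update rules."""
    rng = random.Random(seed)
    # Weight matrices, each row ending with a bias weight on a fixed -1 input.
    w_kj = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
    w_ji = [[rng.uniform(-1, 1) for _ in range(n_hid + 1)] for _ in range(n_out)]

    def forward(xs):
        xs_b = list(xs) + [-1]
        a_j = [sigmoid(sum(w * x for w, x in zip(ws, xs_b))) for ws in w_kj]
        a_jb = a_j + [-1]
        o_i = [sigmoid(sum(w * a for w, a in zip(ws, a_jb))) for ws in w_ji]
        return xs_b, a_jb, o_i

    for _ in range(epochs):
        for xs, ts in examples:
            xs_b, a_jb, o_i = forward(xs)
            # Output-unit error terms: (Ti - Oi) * Oi * (1 - Oi)
            d_i = [(t - o) * o * (1 - o) for t, o in zip(ts, o_i)]
            # ii. Dwk,j = a * Ik * aj*(1-aj) * S(wj,i * (Ti-Oi) * Oi*(1-Oi))
            #     (uses the pre-update w_ji, so compute it first)
            for j in range(n_hid):
                back = sum(w_ji[i][j] * d_i[i] for i in range(n_out))
                gprime = a_jb[j] * (1 - a_jb[j])
                for k in range(n_in + 1):
                    w_kj[j][k] += alpha * xs_b[k] * gprime * back
            # i. Dwj,i = a * aj * (Ti - Oi) * Oi * (1 - Oi)
            for i in range(n_out):
                for j in range(n_hid + 1):
                    w_ji[i][j] += alpha * a_jb[j] * d_i[i]

    return lambda xs: forward(xs)[2]

xor = [((0, 0), [0]), ((0, 1), [1]), ((1, 0), [1]), ((1, 1), [0])]
net = backprop_train(xor, n_in=2, n_hid=4, n_out=1)
```

After training, the single sigmoid output should land above 0.5 exactly on the two XOR-positive inputs.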

OTHER ISSUES

How should a network's error rate be estimated?
- Report the average error rate obtained by running an evaluation method, such as cross-validation, multiple times with different random initial weights.

How should the learning rate parameter be set?
- Use a tuning set or cross-validation to train with several candidate values for alpha, and then select the value that gives the lowest error.

How many hidden layers should be used?
- Usually just one hidden layer is used.

How many hidden units should be in a layer?
- too few, and the concept can't be learned
- too many: examples are just memorized ("overfitting", poor generalization)
- use a tuning set or cross-validation to determine experimentally the number of units that minimizes error

How many examples should be in the training set?
- The larger the better, but training takes longer.
- To obtain 1 - e correct classification on the test set:
  - the training set should be of size approximately n/e, where n is the number of weights in the network and e is the test-set error fraction, between 0 and 1
  - train to classify 1 - e/2 of the training set correctly
  - e.g., if n = 80 and e = 0.1 (i.e., 10% error on the test set), the training set size should be 800
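The worked example above is simple arithmetic, sketched here (function name is mine):

```python
def training_set_size(n_weights, e):
    """Approximate training-set size needed for 1 - e test accuracy: n/e."""
    return n_weights / e

# The slide's example: 80 weights, targeting 10% test error
size = training_set_size(80, 0.1)  # about 800 examples
train_accuracy_goal = 1 - 0.1 / 2  # train until 95% of the training set is correct
```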

When should training stop?
- too soon, and the concept isn't learned
- too late: "overfitting", poor generalization; the error rate on the test set will go up
- Train the network until the error rate on a tuning set begins increasing, rather than training until the error (i.e., SSE) is minimized.

APPLICATIONS

- NETtalk (Sejnowski & Rosenberg, 1987): learns to say text by mapping character strings to phonemes
- Neurogammon (Tesauro & Sejnowski, 1989): learns to play backgammon
- Speech recognition (Waibel, 1989): learns to convert spoken words to text
- Character recognition (Le Cun et al., 1989): learns to convert page images to text

APPLICATIONS: ALVINN

ALVINN (Pomerleau, 1988) learns to control vehicle steering to stay in the middle of its lane.
- Topology: a two-layered feed-forward network using back-propagation learning.
- Input:
  - a 480*512 pixel image, 15 times per second
  - the color image is preprocessed to obtain a 30*32 pixel image
  - each pixel is one byte, an integer from 0 to 255 corresponding to the brightness of the image

- Output:
  - one of 30 discrete steering positions: output unit 1 means sharp left, output unit 30 means sharp right
  - the target output is a set of 30 values forming a Gaussian distribution with a variance of 10, centered on the desired steering direction d: Oi = e^[-(i-d)^2/10]
  - the actual steering output is determined by computing a least-squares best fit of the output units' values to a Gaussian distribution with a variance of 10; the peak of this distribution is taken as the steering direction
  - the error for learning is: target output - actual output
- Hidden: (topology shown in the slide's figure)
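ALVINN's target vector Oi = e^[-(i-d)^2/10] is easy to reproduce. A Python sketch (function and parameter names are mine):

```python
import math

def target_vector(d, n_units=30, variance=10.0):
    """ALVINN-style targets: a Gaussian bump over the 30 steering units,
    Oi = e^(-(i - d)^2 / variance), centered on desired direction d."""
    return [math.exp(-((i - d) ** 2) / variance)
            for i in range(1, n_units + 1)]

targets = target_vector(d=15)  # bump centered on unit 15
```

The unit at i = d gets target 1.0, its neighbors get smoothly decreasing targets, and far-away units get targets near 0, which gives the network a graded error signal instead of a single hot unit.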

Learning:
- continuously learns on the fly by observing:
  - a human driver (takes ~5 minutes from random initial weights)
  - itself (doing an epoch of training every 2 seconds thereafter)
- problems with using real, continuous data:
  - there aren't negative examples
  - the network may overfit the data in recent images (e.g., a straight road) at the expense of past images (e.g., a road with curves)
- solutions:
  - generate negative examples by synthesizing views of the road that are incorrect for the current steering
  - maintain a buffer of 200 real and synthesized images that keeps some images for many different steering directions

Results:
- has driven at speeds up to 70 mph
- has driven continuously for distances up to 90 miles
- has driven across the continent, during different times of the day and with different traffic conditions
- can drive on:
  - single-lane roads and highways
  - multi-lane highways
  - paved bike paths
  - dirt roads
- see for yourself (video)

SUMMARY

Advantages:
- parallel processing architecture
- robust with respect to node failure
- fine-grained, distributed representation of knowledge
- robust with respect to noisy data
- incremental algorithm (i.e., learn as you go)
- simple computations
- empirically shown to work well for many problem domains

Disadvantages:
- slow training (i.e., takes many epochs)
- poor interpretability (i.e., difficult to extract rules)
- ad hoc network topologies (i.e., layouts)
- hard to debug, because distributed representations preclude content checking
- may converge to a local, not global, minimum of error
- may be hard to describe a problem in terms of features with numerical values
- not known how to model higher-level cognitive mechanisms with the NN model