
Hebbian Learning Rule

When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes
part in firing it, some growth process or metabolic change takes place in one or both cells,
such that A's efficiency, as one of the cells firing B, is increased.
Algorithm:

z = wT x ,  y = f ( z )    (1)

wk+1 = wk + η yk xk    (2)

and, for neuron i and input j,

Δwij = η yi xj    (3)
Notes:

w0 is initialised with small random values.

Purely feedforward, unsupervised learning.

Frequent input patterns will have most influence on the weights.

Potential for unconstrained growth in w (so w may need to be artificially constrained, e.g. normalised).

Correlation-based update rule.
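
As a concrete illustration of equations (1) and (2), here is a minimal sketch of the basic Hebbian update in Python; the identity activation and the value of η used are illustrative choices, not part of the rule itself:

    import numpy as np

    def hebb_step(w, x, f=lambda z: z, eta=1.0):
        """One Hebbian step: y = f(w.x), then w <- w + eta*y*x (eqs (1)-(2))."""
        y = f(w @ x)            # neuron output for this input
        return w + eta * y * x  # correlation-based weight change

    rng = np.random.default_rng(0)
    w = rng.uniform(-0.1, 0.1, size=4)    # w0: small random values
    xa = np.array([1.0, -2.0, 1.5, 0.0])  # a frequent input pattern
    xb = np.array([0.0, 1.0, -1.0, 1.5])  # a rare input pattern
    for x in [xa, xa, xa, xb]:            # xa is presented more often...
        w = hebb_step(w, x)
    print(w)  # ...so it dominates w; note the weights can grow without bound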

Two worked examples of this Hebbian learning process follow:

Threshold Neuron with Hebbian Learning.

Continuous Neuron with Hebbian Learning.

Threshold Neuron with Hebbian Learning


Perceptron with a hard-limiting (sign) activation:

f ( wT x ) = sgn ( wT x ) ,  z = wT x

Let w0 = [ 1.0 -1.0 0.0 0.5 ]T, x0 = [ 1.0 -2.0 1.5 0.0 ]T, x1 = [ 1.0 -0.5 -2.0 -1.5 ]T
and x2 = [ 0.0 1.0 -1.0 1.5 ]T. Then:

z0 = w0T x0 = [ 1.0 -1.0 0.0 0.5 ] [ 1.0 -2.0 1.5 0.0 ]T = 3

The update is:

wk+1 = wk + sgn ( zk ) xk    (4)

giving:

w1 = w0 + sgn ( z0 ) x0 = w0 + x0 = [ 2.0 -3.0 1.5 0.5 ]T
continuing:

z1 = w1T x1 = -0.25 ,  w2 = w1 + sgn ( z1 ) x1 = w1 - x1 = [ 1.0 -2.5 3.5 2.0 ]T

z2 = w2T x2 = -3.0 ,  w3 = w2 + sgn ( z2 ) x2 = w2 - x2 = [ 1.0 -3.5 4.5 0.5 ]T

Hebbian learning with a threshold function results in adding or subtracting the input
pattern from the weight vector.
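
These three steps can be checked with a few lines of code; this sketch uses Python/NumPy (rather than the MATLAB of the later examples) purely for illustration:

    import numpy as np

    w = np.array([1.0, -1.0, 0.0, 0.5])       # w0
    xs = [np.array([1.0, -2.0, 1.5, 0.0]),    # x0
          np.array([1.0, -0.5, -2.0, -1.5]),  # x1
          np.array([0.0, 1.0, -1.0, 1.5])]    # x2

    for k, x in enumerate(xs):
        z = w @ x                  # zk = wkT xk
        w = w + np.sign(z) * x     # update (4): add or subtract xk
        print(f"z{k} = {z:+.2f}, w{k+1} = {w}")
    # prints z0 = +3.00, z1 = -0.25, z2 = -3.00, matching the values above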

Continuous Neuron with Hebbian Learning

Perceptron with a continuous (bipolar sigmoid) activation with steepness β = 1:

f ( wT x ) = 2 / ( 1 + exp( -β wT x ) ) - 1 ,  z = wT x

Let w0 = [ 1.0 -1.0 0.0 0.5 ]T and x0, x1, x2 as before. Then:

z0 = w0T x0 = [ 1.0 -1.0 0.0 0.5 ] [ 1.0 -2.0 1.5 0.0 ]T = 3

The update is:

wk+1 = wk + f ( zk ) xk ,  y = f ( z )    (5)

as η = 1, thus giving:

w1 = w0 + f ( z0 ) x0 = w0 + 0.905 x0 = [ 1.905 -2.810 1.358 0.500 ]T

continuing:

z1 = w1T x1 = -0.155 ,  y1 = f ( z1 ) = -0.077 ,  w2 = w1 - 0.077 x1 = [ 1.828 -2.772 1.512 0.616 ]T

z2 = w2T x2 = -3.360 ,  y2 = f ( z2 ) = -0.932 ,  w3 = w2 - 0.932 x2 = [ 1.828 -3.704 2.444 -0.782 ]T

Hebbian learning with a continuous activation function results in adding or subtracting a
fraction of the input pattern from the weight vector: the fraction is the output y = f ( z ),
whose magnitude is governed by the steepness β.
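
The same loop as before, with the bipolar sigmoid in place of sgn, reproduces these numbers; again a Python sketch for illustration:

    import numpy as np

    def f(z, beta=1.0):
        """Bipolar sigmoid with steepness beta (here beta = 1)."""
        return 2.0 / (1.0 + np.exp(-beta * z)) - 1.0

    w = np.array([1.0, -1.0, 0.0, 0.5])       # w0, as before
    xs = [np.array([1.0, -2.0, 1.5, 0.0]),
          np.array([1.0, -0.5, -2.0, -1.5]),
          np.array([0.0, 1.0, -1.0, 1.5])]

    for k, x in enumerate(xs):
        y = f(w @ x)        # yk = f(zk)
        w = w + y * x       # update (5), with eta = 1
        print(f"y{k} = {y:+.3f}, w{k+1} = {np.round(w, 3)}")
    # prints y0 = +0.905, y1 = -0.077, y2 = -0.932, matching the values above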

Application to Common Boolean Functions


The MATLAB code for the following two examples can be obtained by clicking on code.
Please read the enclosed readme file for details of running the examples.
Implementation of function 'AND'
Consider a two-input AND gate:
Solution details:

2 inputs + 1 bias => 3 weights (graphically, a single neuron with inputs x1 and x2 plus a fixed bias input)

w(0) = [0.1 0.1 0.1] are given as initial values for the weights

Train for 8 epochs with a learning rate of 0.1

Decision surface is given by: w1 x1 + w2 x2 + w3 = 0

Pattern set, P = [ -1 -1  1  1
                   -1  1 -1  1
                    1  1  1  1 ]

(one column per pattern: [ x1 x2 bias ]T)

Desired output, T = [ -1 -1 -1 1 ]
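
The exact update used by the downloadable MATLAB code is not reproduced here; a common supervised variant of the Hebb rule replaces the actual output y by the desired output, Δw = η T x, and with the settings above it does learn the gate. A Python sketch under that assumption:

    import numpy as np

    P = np.array([[-1, -1,  1, 1],   # x1 (one column per pattern)
                  [-1,  1, -1, 1],   # x2
                  [ 1,  1,  1, 1]])  # bias input
    T = np.array([-1, -1, -1, 1])    # desired 'AND' outputs

    w = np.array([0.1, 0.1, 0.1])    # w(0)
    eta = 0.1                        # learning rate

    for epoch in range(8):           # train for 8 epochs
        for x, t in zip(P.T, T):
            w = w + eta * t * x      # supervised Hebbian update (assumed form)

    print(w)                         # final weights: here [1.7, 1.7, -1.5]
    print(np.sign(P.T @ w))          # reproduces T: [-1, -1, -1, 1]

Setting T = [ -1 1 1 1 ] instead trains the 'OR' function of the next subsection with no other change.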

Explanation of Graphs

Implementation of function 'OR'


In this case the equation for the decision surface is obtained in exactly the same way as for
the 'AND' function. This time we notice that, for the same inputs and bias, the desired
output has changed to correspond to that of an 'OR' function.

Solution details:

2 inputs + 1 bias => 3 weights , w(0) = [0.1 0.1 0.1]

Train for 8 epochs with a learning rate of 0.1

Decision surface is given by: w1 x1 + w2 x2 + w3 = 0

P = [ -1 -1  1  1
      -1  1 -1  1
       1  1  1  1 ]

T = [ -1 1 1 1 ]

The understanding of the graphs is also identical; the only difference is that the desired
output now contains one -1 and three +1's.
