

Hopfield: an example
Suppose a Hopfield net is to be trained to recall
vectors (1,-1,-1,1) and (1, 1, -1, -1)
Laurene Fausett, Fundamentals of Neural Networks, Prentice Hall
[Figure: a four-unit Hopfield network (units 1-4), fully interconnected with weights w11, w12, w13, w14, ..., w43, w44.]
Hopfield: an example (cont)
Step 1: Calculate the weight matrix, W = XᵀX (with w_ii = 0)

The two exemplars form the rows of X:

X = | +1 -1 -1 +1 |
    | +1 +1 -1 -1 |

    | w11 w12 w13 w14 |   | +1 +1 |
W = | w21 w22 w23 w24 | = | -1 +1 | . | +1 -1 -1 +1 |
    | w31 w32 w33 w34 |   | -1 -1 |   | +1 +1 -1 -1 |
    | w41 w42 w43 w44 |   | +1 -1 |

With the diagonal zeroed (w_ii = 0):

W = |  0   0  -2   0 |
    |  0   0   0  -2 |
    | -2   0   0   0 |
    |  0  -2   0   0 |
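As a minimal NumPy sketch of this outer-product construction (not from the original slides; the array names are mine):

```python
import numpy as np

# Stored exemplars, one per row.
X = np.array([[1, -1, -1, 1],
              [1,  1, -1, -1]])

# Hebbian outer-product rule: W = X^T X, then force w_ii = 0.
W = X.T @ X
np.fill_diagonal(W, 0)

print(W)
# [[ 0  0 -2  0]
#  [ 0  0  0 -2]
#  [-2  0  0  0]
#  [ 0 -2  0  0]]
```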
Hopfield: an example (cont)
Step 2: For the unknown input pattern
X(0) = (1, -1, 1, 1),
assign the output
Y(0) = (1, -1, 1, 1)
Step 3: Iterate (update outputs) until convergence
Assume unit 3 is randomly selected to be updated:

y3(1) = F( [w31 w32 w33 w34] . (x1, x2, x3, x4)ᵀ )
      = F( [-2 0 0 0] . (1, -1, 1, 1)ᵀ )
      = F(-2) = -1
Hopfield: an example (cont)
Step 3: New X(1) = Y(1) = (1, -1, -1, 1)
Assume unit 1 is randomly selected to be updated:

y1(2) = F( [w11 w12 w13 w14] . (x1, x2, x3, x4)ᵀ )
      = F( [0 0 -2 0] . (1, -1, -1, 1)ᵀ )
      = F(2) = +1
Hopfield: an example (cont)
Step 3: New X(2) = Y(2) = (1, -1, -1, 1)
Assume unit 2 is randomly selected to be updated:

y2(3) = F( [w21 w22 w23 w24] . (x1, x2, x3, x4)ᵀ )
      = F( [0 0 0 -2] . (1, -1, -1, 1)ᵀ )
      = F(-2) = -1
Hopfield: an example (cont)
Step 3: New X(3) = Y(3) = (1, -1, -1, 1)
Assume unit 4 is randomly selected to be updated:

y4(4) = F( [w41 w42 w43 w44] . (x1, x2, x3, x4)ᵀ )
      = F( [0 -2 0 0] . (1, -1, -1, 1)ᵀ )
      = F(2) = +1

Repeat until convergence:
X(n) = Y(n) = (1, -1, -1, 1) <----> the pattern is perfectly recalled
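The whole asynchronous recall procedure can be sketched as below (a hypothetical helper, not from the slides; it reuses W from the earlier sketch and assumes, like the slides' F, that a unit keeps its previous state when its net input is exactly zero):

```python
import numpy as np

def hopfield_recall(W, x, order):
    """Asynchronous recall: sweep the units in `order` until no state changes."""
    y = np.array(x)
    changed = True
    while changed:
        changed = False
        for j in order:
            s = W[j] @ y                                    # net input to unit j
            new = y[j] if s == 0 else (1 if s > 0 else -1)  # F with "hold" at zero
            if new != y[j]:
                y[j], changed = new, True
    return y

# The slides visit units 3, 1, 2, 4 (0-indexed: 2, 0, 1, 3):
print(hopfield_recall(W, [1, -1, 1, 1], order=[2, 0, 1, 3]))   # [ 1 -1 -1  1]
```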
Hopfield: an example (cont)
Step 2: For the unknown input pattern
X(0) = (-1, 1, -1, -1),
assign the output
Y(0) = (-1, 1, -1, -1)
Step 3: Iterate (update outputs) until convergence
Assume unit 2 is randomly selected to be updated:

y2(1) = F( [w21 w22 w23 w24] . (x1, x2, x3, x4)ᵀ )
      = F( [0 0 0 -2] . (-1, 1, -1, -1)ᵀ )
      = F(2) = +1
Hopfield: an example (cont)
Step 3: New X(1) = Y(1) = (-1, 1, -1, -1)
Assume unit 1 is randomly selected to be updated:

y1(2) = F( [w11 w12 w13 w14] . (x1, x2, x3, x4)ᵀ )
      = F( [0 0 -2 0] . (-1, 1, -1, -1)ᵀ )
      = F(2) = +1
Hopfield: an example (cont)
Step 3: New X(2) = Y(2) = (1, 1, -1, -1)
Assume unit 4 is randomly selected to be updated:

y4(3) = F( [w41 w42 w43 w44] . (x1, x2, x3, x4)ᵀ )
      = F( [0 -2 0 0] . (1, 1, -1, -1)ᵀ )
      = F(-2) = -1
Hopfield: an example (cont)
Step 3: New X(3) = Y(3) = (1, 1, -1, -1)
Assume unit 3 is randomly selected to be updated:

y3(4) = F( [w31 w32 w33 w34] . (x1, x2, x3, x4)ᵀ )
      = F( [-2 0 0 0] . (1, 1, -1, -1)ᵀ )
      = F(-2) = -1

Repeat until convergence:
X(n) = Y(n) = (1, 1, -1, -1) <----> the pattern is perfectly recalled
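The same sketch reproduces this second trajectory:

```python
# The slides visit units 2, 1, 4, 3 (0-indexed: 1, 0, 3, 2):
print(hopfield_recall(W, [-1, 1, -1, -1], order=[1, 0, 3, 2]))  # [ 1  1 -1 -1]
```

Note that other visit orders can settle on the complement of a stored pattern instead, which is also a fixed point of W; that is why the sketch takes the update order explicitly.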
Hamming Networks
Hamming Nets
A minimum-error classifier for binary vectors, where error is measured by Hamming distance.
Consider the following exemplars:

Exemplar#   Components
1           +1 +1 +1 +1 +1 +1
2           +1 +1 +1 -1 -1 -1
3           -1 -1 -1 +1 -1 +1
4           -1 -1 -1 +1 +1 +1

For example, given the input vector (1 1 1 1 -1 1), the Hamming distances from the four exemplars above are 1, 2, 3, and 4 respectively. The input vector is therefore assigned to exemplar #1, since it gives the smallest Hamming distance.
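A minimal sketch of this nearest-exemplar rule (Python; the helper name is mine):

```python
# The four exemplars from the table above.
exemplars = [(+1, +1, +1, +1, +1, +1),
             (+1, +1, +1, -1, -1, -1),
             (-1, -1, -1, +1, -1, +1),
             (-1, -1, -1, +1, +1, +1)]

def hamming(a, b):
    """Number of positions at which two vectors differ."""
    return sum(ai != bi for ai, bi in zip(a, b))

x = (1, 1, 1, 1, -1, 1)
dists = [hamming(x, e) for e in exemplars]
print(dists)                        # [1, 2, 3, 4]
print(1 + dists.index(min(dists)))  # 1 -> assigned to exemplar #1
```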
Hamming Net - Architecture
Hamming Net - Feature Layer
n inputs fully connected to m processing elements (one per exemplar).
Each processing element calculates the number of bits at which the input vector and its exemplar agree.
The weights are set in the one-shot learning phase as follows:
Let X_p = (x_p1, x_p2, x_p3, ..., x_pn), p = 1..m, be the m exemplar vectors. If x_pi takes on the values -1 or +1, then the learning phase consists of setting the weights to

w_ji = 0.5 * x_ji    for j = 1..m and i = 1..n
w_j0 = 0.5 * n       for j = 1..m
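As a sketch, this one-shot learning phase is just a scaling of the exemplar matrix (hypothetical helper name; assumes bipolar exemplars):

```python
import numpy as np

def hamming_feature_weights(exemplars):
    """One-shot learning: w_ji = 0.5 * x_ji, with bias w_j0 = 0.5 * n."""
    X = np.asarray(exemplars, dtype=float)  # shape (m, n), entries +/- 1
    m, n = X.shape
    return 0.5 * X, np.full(m, 0.5 * n)     # weight matrix (m x n), biases (m)
```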
Hamming Net - Feature Layer
Analysis
During recall, an input vector is processed through each processing element as follows:

S_j = Σ_{i=0..n} w_ji * x_i                    for j = 1..m
    = 0.5 * ( Σ_{i=1..n} x_ji * x_i + n )      for j = 1..m

Since x_ji and x_i take on the values -1 or +1, let n_aj be the number of bits at which x_j and x agree, and n_dj the number of bits at which they disagree. Then

S_j = 0.5 * (n_aj - n_dj + n)                  for j = 1..m

But n = n_aj + n_dj, so

S_j = 0.5 * (n_aj - n_dj + n_aj + n_dj) = n_aj

Therefore the output S_j of each processing element equals the number of bits at which the input vector and exemplar j agree!
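A quick numeric check of this identity, using an arbitrary bipolar exemplar and input with n = 6 (the values are mine, not from the slides):

```python
import numpy as np

e = np.array([1, -1, 1, 1, -1, 1])   # exemplar row x_j
x = np.array([1, 1, 1, -1, -1, 1])   # input vector
S = 0.5 * (e @ x + len(e))           # feature-layer output S_j
print(S, (e == x).sum())             # 4.0 4 -> S_j equals the agreement count
```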
Hamming Net - Category Layer
The processing element with the largest initial state (smallest Hamming distance to the input vector) wins out.
Competitive learning through lateral connections:
Each node j is laterally connected to every other node k in the layer through a connection of fixed strength w_kj, where

w_kj = 1     for k = j
w_kj = -ε    for k ≠ j, with 0 < ε < 1/m
Hamming Net - Category Layer
Competition through lateral inhibition.
Initialize the network with the unknown input pattern:

y_j(0) = s_j = Σ_{i=0..n} w_ji * x_i    for j = 1..m

After initialization of the category layer, the stimulus from the input layer is removed and the category layer is left to iterate until it stabilizes. At each iteration, the output of the j-th processing element is

y_j(t+1) = F_t( y_j(t) - ε * Σ_{k=1..m, k≠j} y_k(t) )

where y_j(t) is the output of node j at time t, and

F_t(s) = s    if s > 0
       = 0    if s ≤ 0

At convergence of the competition in the category layer, only the winner is active in the output layer.
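A sketch of this competition loop (often called MAXNET), assuming F_t(s) = max(s, 0) as defined above; the function name and stopping test are mine:

```python
import numpy as np

def maxnet(s, eps, max_iters=100):
    """Winner-take-all by lateral inhibition: y_j <- F(y_j - eps * sum of the others)."""
    y = np.array(s, dtype=float)
    for _ in range(max_iters):
        y = np.maximum(y - eps * (y.sum() - y), 0.0)  # F_t(s) = max(s, 0)
        if np.count_nonzero(y) <= 1:                  # only the winner is still active
            break
    return y
```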
Hamming Net: an example
Suppose a Hamming net is to be trained to
recognize vectors (1,-1,-1,1) and (1, 1, -1, -1)
[Figure: Hamming net with inputs x1..x4 (plus bias input x0 = 1), feature-layer nodes 1-2, and category-layer nodes 1-2.]
Hamming Net: an example
Feature Layer: (1,-1,-1,1) and (1, 1, -1, -1)
W = | w10  w11  w12  w13  w14 |
    | w20  w21  w22  w23  w24 |

  = | 0.5*4   0.5*1   0.5*(-1)   0.5*(-1)   0.5*1    |
    | 0.5*4   0.5*1   0.5*1      0.5*(-1)   0.5*(-1) |

  = | 2   0.5  -0.5  -0.5   0.5 |
    | 2   0.5   0.5  -0.5  -0.5 |
Hamming Net: an example
Feature Layer: For unknown input pattern (1,-1,1,1)
S = | s1 | = | w10  w11  w12  w13  w14 | . (x0, x1, x2, x3, x4)ᵀ
    | s2 |   | w20  w21  w22  w23  w24 |

  = | 2   0.5  -0.5  -0.5   0.5 | . (1, 1, -1, 1, 1)ᵀ
    | 2   0.5   0.5  -0.5  -0.5 |

  = | 3 |
    | 1 |
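Using the hypothetical helper from the feature-layer sketch, these numbers check out:

```python
W, b = hamming_feature_weights([(1, -1, -1, 1), (1, 1, -1, -1)])
print(W)                                 # [[ 0.5 -0.5 -0.5  0.5]
                                         #  [ 0.5  0.5 -0.5 -0.5]]
print(b)                                 # [2. 2.]
print(W @ np.array([1, -1, 1, 1]) + b)   # [3. 1.] -> s1 = 3, s2 = 1
```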
Hamming Net: an example
Category layer: software implementation
Since s1 = 3 and s2 = 1,
s1 = winner
Hamming Net: an example
Category layer: hardware implementation
At t = 0:
y1(0) = 3
y2(0) = 1
Let ε = 1/2. Then at t = 1:
y1(1) = F_t( y1(0) - ε*y2(0) ) = F_t( 3 - (1/2)*1 ) = 2.5
y2(1) = F_t( y2(0) - ε*y1(0) ) = F_t( 1 - (1/2)*3 ) = 0
Since y1(1) is the only positive output,
y1 = winner
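The maxnet sketch from the category-layer section reproduces this:

```python
print(maxnet([3, 1], eps=0.5))   # [2.5 0. ] -> node 1 is the only active output
```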
Hamming Net VS Hopfield Net
Lippmann (1987): a Hopfield net cannot do any better than a Hamming net when used to classify binary vectors optimally.
A Hopfield network with n input nodes has n*(n-1) connections.
A Hopfield net has limited capacity: it can store only about 0.15*n exemplars.
The capacity of a Hamming net does not depend on the dimension n of the input vector; it equals the number of elements m in its category layer, which is independent of n.
The number of connections in a Hamming network equals m*(m+n).
Hamming Net VS Hopfield Net
Example:
A Hopfield network with 100 inputs might hold 10
exemplars and requires close to 10,000 connections.
The equivalent Hamming net requires only
10*(10+100) = 1,100 connections.
A Hamming net with 10,000 connections and 100 input
components would be able to hold approximately 62
exemplars!
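The 62 comes from solving m*(m+n) = 10,000 with n = 100, i.e. the positive root of m² + 100m - 10,000 = 0:

```python
import math

n, C = 100, 10_000
m = (-n + math.sqrt(n**2 + 4 * C)) / 2  # positive root of m^2 + n*m - C = 0
print(m)                                # 61.80... -> about 62 exemplars
```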