
Recurrent Neural Networks

Bi-directional Associative Memory

S. Vivekanandan

Cabin: TT 319A
E-Mail: svivekanandan@vit.ac.in
Mobile: 8124274447
BAM
• Kosko developed BAM in 1988.
• It is a heteroassociative recurrent neural network consisting of two layers.
• It iterates by sending signals back and forth between the two layers until each neuron's activation remains constant.
• Here the layers are referred to as the X-layer and the Y-layer rather than the input and output layers, and the weights are bidirectional.
• BAM comes in three forms, but the architecture remains the same:
1. Binary
2. Bipolar
3. Continuous



Architecture of BAM

[Figure: two-layer BAM with bidirectional connections; X-layer to Y-layer weights W, Y-layer to X-layer weights Wᵀ.]



Architecture contd..

• The BAM has 'n' units in the X-layer and 'm' units in the Y-layer.
• Connections between the layers are bidirectional, i.e. if the weight matrix for signals sent from the X-layer to the Y-layer is W, then the weight matrix for signals from the Y-layer to the X-layer is Wᵀ (illustrated in the sketch after this list).
• It resembles a single-layer feedforward network.
• The training process is based on the Hebb rule.
• It is a fully interconnected network in which the inputs and the outputs are different.
• There exist two types of BAM:
1. Discrete BAM
2. Continuous BAM
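
As a minimal sketch of the bidirectional wiring described above (the shapes, values, and variable names here are assumptions for illustration, not taken from the slides):

```python
import numpy as np

n, m = 4, 2                       # n units in the X-layer, m units in the Y-layer
W = np.random.randn(n, m)         # hypothetical weight matrix for X -> Y signals

x = np.random.choice([-1, 1], n)  # bipolar activations of the X-layer
y_in = x @ W                      # net input to the Y-layer, sent via W
x_in = y_in @ W.T                 # net input back to the X-layer, sent via W^T
```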



Discrete BAM
• The binary and bipolar forms of BAM are closely related. In each case the weights are found from the sum of the Hebb outer products of the bipolar form of the training vector pairs.
• The activation function used is a step function with zero threshold.
• The weight matrix to store a set of input and target vector pairs s(p) : t(p), where
  $s(p) = (s_1(p), \ldots, s_i(p), \ldots, s_n(p))$,
  can be determined with the help of the Hebb rule.
• For binary input vectors the weight matrix is determined by
  $w_{ij} = \sum_{p} \left(2 s_i(p) - 1\right)\left(2 t_j(p) - 1\right)$
• For bipolar input vectors it is
  $w_{ij} = \sum_{p=1}^{P} s_i(p)\, t_j(p)$
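
A short sketch of this weight computation (the training pairs below are made-up bipolar vectors, purely illustrative):

```python
import numpy as np

# Two assumed bipolar training pairs s(p) : t(p), with n = 4 and m = 2.
S = np.array([[ 1, -1,  1, -1],   # s(1)
              [ 1,  1, -1, -1]])  # s(2)
T = np.array([[ 1, -1],           # t(1)
              [-1,  1]])          # t(2)

# w_ij = sum_p s_i(p) * t_j(p): the sum of Hebb outer products, shape (n, m).
W = S.T @ T

# For binary vectors, convert to bipolar form first: 2*v - 1 maps {0, 1} to {-1, 1}.
```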



Continuous BAM

• A continuous BAM has the capacity to transfer the inputs smoothly and continuously into the corresponding outputs in the range [0, 1].
• It uses the logistic sigmoid function as the activation function for all units.
• For binary input and target vectors s(p) : t(p), the weights are determined by
  $w_{ij} = \sum_{p} \left(2 s_i(p) - 1\right)\left(2 t_j(p) - 1\right)$

Y-layer
• The logistic sigmoid activation function is given by
  $f(y_{in_j}) = \dfrac{1}{1 + e^{-y_{in_j}}}$
• If a bias is included in calculating the net input, then
  $y_{in_j} = b_j + \sum_{i} x_i w_{ij}$
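
A minimal sketch of one X-to-Y update in a continuous BAM, assuming the logistic sigmoid above (the weights, bias, and inputs are placeholder values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # f(y_in) = 1 / (1 + e^(-y_in))

n, m = 4, 2
W = np.random.randn(n, m)   # assumed weights from the X-layer to the Y-layer
b_y = np.zeros(m)           # bias terms b_j on the Y-layer units
x = np.random.rand(n)       # continuous X-layer activations in [0, 1]

y_in = b_y + x @ W          # y_in_j = b_j + sum_i x_i w_ij
y = sigmoid(y_in)           # smooth outputs in (0, 1)
```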



X-layer
• The logistic sigmoid activation function is given by
  $f(x_{in_i}) = \dfrac{1}{1 + e^{-x_{in_i}}}$
• If a bias is included in calculating the net input, then
  $x_{in_i} = b_i + \sum_{j} y_j w_{ij}$
• The memory (storage) capacity is min(m, n), where
  n = number of units in the X-layer
  m = number of units in the Y-layer



Application Algorithm
Step 1: Initialize the weights to store a set of P vector pairs. Initialize all activations to 0.
Step 2: For each training input/target output vector pair (s : t), perform Steps 3-8.
Step 3: Set the activations of the X-layer to the current input pattern x.
Step 4: Present the input pattern y to the Y-layer.
Step 5: While the activations have not converged, repeat Steps 6-8.
Step 6: Update the activations of the units in the Y-layer. The net input is computed as
  $y_{in_j} = \sum_{i} x_i w_{ij}$
  Compute the activations $y_j = f(y_{in_j})$ and send the signals to the X-layer.
Step 7: Update the activations of the units in the X-layer. The net input and the activations are computed in the same way, and the signals are sent to the Y-layer.
Step 8: Test for convergence.
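
A hedged sketch of this recall loop, using the bipolar step activation with zero threshold (a unit keeps its previous activation when its net input is exactly zero); all names are illustrative:

```python
import numpy as np

def bam_recall(W, x, y=None, max_iters=100):
    """Iterate signals between the layers until the activations stop changing."""
    step = lambda net, prev: np.where(net > 0, 1, np.where(net < 0, -1, prev))
    if y is None:
        y = np.zeros(W.shape[1], dtype=int)   # Step 1: activations start at 0
    for _ in range(max_iters):                # Step 5: loop until convergence
        y_new = step(x @ W, y)                # Step 6: y_j = f(sum_i x_i w_ij)
        x_new = step(y_new @ W.T, x)          # Step 7: update the X-layer
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break                             # Step 8: activations converged
        x, y = x_new, y_new
    return x, y
```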



Example: storing the letter patterns 'E' and 'H'
a) To find the weight matrix, '*' is coded as 1 and '.' as -1. To store E, the target is (-1, 1); to store H, the target is (1, 1).
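
A sketch of this encoding and the resulting weight matrix; the 5x3 pixel grids below are assumed stand-ins for the letter patterns shown on the original slides:

```python
import numpy as np

def encode(rows):
    # '*' -> 1 and '.' -> -1, flattened row by row into a bipolar vector
    return np.array([1 if ch == '*' else -1 for row in rows for ch in row])

E = encode(["***",
            "*..",
            "***",
            "*..",
            "***"])              # assumed 5x3 grid for the letter E
H = encode(["*.*",
            "*.*",
            "***",
            "*.*",
            "*.*"])              # assumed 5x3 grid for the letter H

# Targets from the slide: E -> (-1, 1), H -> (1, 1).
W = np.outer(E, [-1, 1]) + np.outer(H, [1, 1])   # 15x2 Hebb outer-product sum
```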



