

close all;
clear all;

% init constants
Ni = 20;                 % number of input units
No = 10;                 % number of output units
lr = .1;                 % learning rate
neig_width = ceil(No/2); % initial size of the neighborhood
nartime = 50;            % interval between narrowings of the neighborhood
simtime = 2500;          % total simulation time

% init layer
y = zeros(No,simtime);

% init weights with random values
% uniformly distributed between 0 and 1
W = rand(No,Ni);

% create the input dataset:
% each column will be a single
% input to the network.
% We define Ni different inputs,
% each of them a vector with
% a gaussian distribution of values
% centered on a different value.

x = linspace(-1,1,Ni); % create a row vector of Ni elements
                       % in a sequence going from -1 to 1.
X = repmat(x,Ni,1);    % transform the x vector into a matrix
                       % made of Ni copies of the x row vector.
                       % Each column vector is now filled with
                       % the repetition of a number defining
                       % its position in the -1:1 sequence;
                       % the -1:1 sequence is distributed between
                       % columns.
                       % Try imagesc(X) to see the aspect of X.
k = linspace(-1,1,Ni);
K = repmat(k',1,Ni);   % same as the first step, but this time
                       % we transform k into a matrix made of
                       % Ni copies of the k column vector.
                       % Now the -1:1 sequence is distributed
                       % between rows.
                       % Try imagesc(K) to see the aspect of K.

X = exp(-5*(X-K).^2); % we pass the values of the X matrix
                      % through a gaussian function, with mean
                      % defined by K for each value of x.
                      % Now the Ni columns of X are
                      % gaussian distributions, each centered
                      % on a different row;
                      % try again imagesc(X) to see the
                      % aspect of X now.
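
% An illustrative alternative (an addition, not part of the original
% script): the same dataset can be built with meshgrid, which makes the
% row/column roles explicit. Xg, Kg and Xalt are names introduced here.
[Xg,Kg] = meshgrid(linspace(-1,1,Ni)); % Xg varies along columns,
                                       % Kg varies along rows
Xalt = exp(-5*(Xg-Kg).^2);             % identical to the X above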

% training
for t = 1:simtime

    % we randomly choose the
    % index of the input to be
    % presented at each timestep t.
    q = ceil(rand*Ni); % rand*Ni gives a real
                       % number in the (0,Ni) interval.
                       % ceil gives the smallest integer
                       % not below that real value
                       % (e.g. 0.6 -> 1)
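    % Note (an addition; randi is available in modern MATLAB versions):
    % the same uniform draw over 1..Ni can be written more idiomatically as
    %   q = randi(Ni);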

    %%%%%%%%%%%%%%%
    %% SPREADING %%
    %%%%%%%%%%%%%%%

    % update the potential
    y(:,t) = W*X(:,q); % weighted sum of the input (matrix X vector)

    % find the maximum activation of y.
    [ymax ymaxind] = max(y(:,t)); % the function max() returns two values.
                                  % The first, ymax, is the value of the
                                  % maximum element of y.
                                  % The second, ymaxind, is the index of
                                  % the maximum value in y, so that
                                  % y(ymaxind) == ymax

    % calculate the neighborhood
    % of the maximum activation.
    fn = max(1,ymaxind - neig_width);  % this is the index of the first
                                       % (lowest) neighbour of ymax in
                                       % the y vector.
    ln = min(No,ymaxind + neig_width); % this is the index of the last
                                       % (highest) neighbour of ymax in
                                       % the y vector.
    %
    % example (neig_width==3 and ymaxind==5):
    %
    %   1   2   3   4   5   6   7   8   9   10
    %       ^           ^           ^
    %       |           |           |
    %       fn       ymaxind        ln
    %
    % The initial value of neig_width must be
    % such that the initial neighbourhood covers
    % all the output elements (see the initialization
    % of neig_width at the top of the script).
    % In that case the network can find the
    % topology between the categories of input.

    NEIGH = zeros(No,Ni); % create a matrix
                          % to mask all weights that
                          % do not reach any of the
                          % neighbours of ymax

    NEIGH( (1:No)>=fn & (1:No)<=ln , : ) = 1; % Each of the elements of NEIGH
                                              % is set to 1 if it belongs to
                                              % a NEIGH row corresponding to a
                                              % W row that weights the input
                                              % to an output unit belonging to
                                              % the neighbourhood of ymax.
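
    % A worked illustration (an addition, using the numbers from the
    % example above): with No==10, fn==2 and ln==8, rows 2..8 of NEIGH
    % become ones while rows 1, 9 and 10 stay zero, so only the weights
    % of output units 2..8 will be updated below.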

    %%%%%%%%%%%%%%
    %% LEARNING %%
    %%%%%%%%%%%%%%

    % update the weights of connections
    % projecting to the units of the
    % neighborhood. We use the general
    % Kohonen learning rule:
    %   delta_w = lr*( x - w ).
    % Element-per-element multiplication
    % allows us to switch off all
    % increments of weights not reaching
    % the neighborhood. The matrix made
    % by repmat(X(:,q)',No,1) simply
    % repeats the input for each W row.
    W = W + lr*(NEIGH.*(repmat(X(:,q)',No,1)-W));
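
    % Equivalent form (an addition; assumes MATLAB R2016b+ where implicit
    % expansion broadcasts the 1-by-Ni row X(:,q)' against the No-by-Ni
    % matrices):
    %   W = W + lr*(NEIGH.*(X(:,q)' - W));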

    % We narrow the neighbourhood width
    % every 'nartime' timesteps.
    if mod(t,nartime)==0 % the function mod(n,d) gives the
                         % remainder of the division of n by d.
        neig_width = max(0,neig_width-1); % ensure that the
                                          % neighbourhood width
                                          % is >= 0
        % we decrease the learning rate with
        % the narrowing of the neighbourhood
        % so as to refine learning.
        lr = lr*.99;
    end
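
    % Schedule note (an addition, derived from the constants above):
    % neig_width starts at ceil(No/2)==5 and reaches 0 at t==250, while
    % lr decays geometrically to about .1*.99^50 ~= 0.06 by the end of
    % the 2500-step run.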

    % plotting
    figure(1); % we set 1 as the figure to be updated;
               % if it does not exist it is created.
    % we divide the plot into a 2x3 grid
    %
    %  __________________
    % |__1__|___2__|__3__|
    % |__4__|___5__|__6__|
    %
    % then we call three subplots
    % defining different groups of cells:

    % subplot(2,3,1:2)
    %  __________________
    % |__1______2__|__3__|
    % |__4__|___5__|__6__|
    subplot(2,3,1:2);
    bar(1:Ni,X(:,q));
    title('input'); % title is set after bar, since bar
                    % resets the axes and would erase it

    % subplot(2,3,4:5)
    %  __________________
    % |__1__|___2__|__3__|
    % |__4______5__|__6__|

    subplot(2,3,4:5);
    bar(1:No,y(:,t));
    title('output');
    colormap gray;
    % subplot(2,3,3)
    %  __________________
    % |__1__|___2__|_-3-_|
    % |__4______5__|__6__|

    subplot(2,3,3);
    imagesc(W);
    title('weights');
    colormap gray;

    % a little bit of pause
    % to allow the complete
    % redrawing of the figure
    % before starting the next
    % iteration
    pause(.1);
end
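
% An optional post-training check (an addition, not part of the original
% exercise; the ~ output syntax requires MATLAB R2009b+): if the map has
% learned the input topology, the preferred input position of the output
% units should vary smoothly from unit to unit.
[~, pref] = max(W,[],2); % preferred input index of each output unit
figure(2);
plot(1:No, pref, 'o-');
xlabel('output unit');
ylabel('preferred input index');
title('topographic order of the learned map');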
