
One Week Faculty Development Program on

Artificial Intelligence and Soft Computing Techniques


Under TEQIP Phase II
(28th April to 02nd May 2015)

LAB MANUAL
Organized by

Department of Computer Science & Engineering


University Institute of Technology
Rajiv Gandhi Proudyogiki Vishwavidyalaya
(State Technological University of Madhya Pradesh)
Airport Road, Bhopal 462033
Website: www.uitrgpv.ac.in

INDEX

LIST OF EXPERIMENTS

1. WAP to implement Artificial Neural Network
2. WAP to implement Activation Functions
3. WAP to implement Adaptive Prediction in ADALINE NN
4. WAP to implement LMS and Perceptron Learning Rule
5. WAP to implement ART NN
6. WAP to implement BAM Network
7. WAP to implement Full CPN with input pair
8. WAP to implement Discrete Hopfield Network
9. WAP to implement Hebb Network
10. WAP to implement Hetero associative neural net for mapping input vectors to output vectors
11. WAP to implement Delta Learning Rule
12. WAP to implement XOR function in MADALINE NN
13. WAP to implement AND function in Perceptron NN
14. WAP to implement Perceptron Network
15. WAP to implement Feed Forward Network
16. WAP to implement Instar Learning Rule
17. WAP to implement Weight Vector Matrix

Experiment No. 1
AIM: WAP to implement Artificial Neural Network in MATLAB
CODE:
%Autoassociative net to store the vector
clc;
clear;
x=[1 1 -1 -1];
w=zeros(4,4);
w=x'*x;
yin=x*w;
for i=1:4
if yin(i)>0
y(i)=1;
else
y(i)=-1;
end
end
disp('weight matrix');
disp(w);
if x==y
disp('The vector is a known vector');
else
disp('The vector is an unknown vector');
end

OUTPUT:
weight matrix
     1     1    -1    -1
     1     1    -1    -1
    -1    -1     1     1
    -1    -1     1     1
The vector is a known vector
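As an optional check (a hedged addition, not part of the original listing), the stored pattern can also be recalled from a corrupted probe; flipping one element of x still retrieves the stored vector because the recall is thresholded.

%Recall from a noisy probe (sketch, reuses the weight matrix w from above)
xn=[1 -1 -1 -1];            % stored vector with the second element flipped
yin=xn*w;                   % net input = [2 2 -2 -2]
for i=1:4
    if yin(i)>0
        yr(i)=1;
    else
        yr(i)=-1;
    end
end
disp('recalled vector');
disp(yr);                   % [1 1 -1 -1], the stored pattern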

Experiment No. 2
AIM: WAP to implement Activation Function in MATLAB
CODE:
% Illustration of various activation functions used in NNs
x=-10:0.1:10;
tmp=exp(-x);
y1=1./(1+tmp);
y2=(1-tmp)./(1+tmp);
y3=x;
subplot(231); plot(x,y1); grid on;
axis([min(x) max(x) -2 2]);
title('Logistic Function');
xlabel('(a)');
axis('square')
subplot(232);plot(x,y2); grid on;
axis([min(x) max(x) -2 2]);
title('Hyperbolic Tangent Function');
xlabel('(b)');
axis('square');
subplot(233);plot(x,y3); grid on;
axis([min(x) max(x) min(x) max(x)]);
title('Identity Function');
xlabel('(c)');
axis('square');

OUTPUT:
(Figure: three panels over x = -10 to 10, (a) Logistic Function, (b) Hyperbolic Tangent Function, (c) Identity Function.)
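Two more activation functions can be drawn on the same figure in the same style; this is a hedged extension of the listing above, not part of the original program.

% Sketch: add ReLU and binary step panels to the same subplot grid
y4=max(0,x);                        % ReLU
y5=double(x>=0);                    % binary step
subplot(234); plot(x,y4); grid on;
axis([min(x) max(x) -2 2]);
title('ReLU Function'); xlabel('(d)'); axis('square');
subplot(235); plot(x,y5); grid on;
axis([min(x) max(x) -2 2]);
title('Binary Step Function'); xlabel('(e)'); axis('square');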

Experiment No. 3
AIM: WAP to implement Adaptive Prediction in ADALINE Network
CODE:
% Adaptive Prediction with Adaline
clear;
clc;
%Input signal x(t)
f1=2; %kHz
ts=1/(40*f1); % 12.5 usec -- sampling time
N=100;
t1=(0:N)*4*ts;
t2=(0:2*N)*ts+4*(N+1)*ts;
t=[t1 t2]; % time axis, roughly 0 to 7.5 ms for f1 = 2 kHz
N=size(t,2); % N = 302
xt=[sin(2*pi*f1*t1) sin(2*pi*2*f1*t2)];
plot(t, xt), grid, title('Signal to be predicted')
p=4; % Number of synapses
% formation of the input matrix X of size p by N
%use the convolution matrix. Try convmtx(1:8, 5)
X = convmtx(xt, p) ; X=X(:,1:N);
d=xt; % The target signal is equal to the input signal
y=zeros(size(d)); % memory allocation for y
eps=zeros(size(d)); % memory allocation for eps
eta=0.4 ; %learning rate/gain
w=rand(1, p) ; % Initialisation of weight vector
for n=1:N % learning loop
y(n)=w*X(:,n); %predicted output signal
eps(n)=d(n)-y(n); %error signal
w=w+eta*eps(n)*X(:,n)';
end
figure(1)
plot(t, d, 'b',t,y, '-r'), grid, ...
title('target and predicted signals'), xlabel('time[sec]')
figure(2)
plot(t, eps), grid, title('prediction error'), xlabel('time[sec]')

OUTPUT:
(Figure 1: target and predicted signals against time[sec]; Figure 2: prediction error against time[sec], shrinking toward zero as the Adaline adapts.)
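A quick numerical summary of how well the Adaline has adapted can be printed after the learning loop; a hedged sketch that reuses eps and N from the listing above.

% Sketch: mean-squared prediction error early and late in the run
mse_start=mean(eps(1:50).^2);       % first 50 samples
mse_end=mean(eps(N-49:N).^2);       % last 50 samples
fprintf('MSE over first 50 samples: %g\n',mse_start);
fprintf('MSE over last 50 samples : %g\n',mse_end);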

Experiment No. 4
AIM: WAP to implement LMS and Perceptron Learning rule
CODE:
%For the following 2-class problem determine the decision boundaries
%obtained by LMS and perceptron learning laws.
% Class C1 : [-2 2]', [-2 3]', [-1 1]', [-1 4]', [0 0]', [0 1]', [0 2]',
%            [0 3]' and [1 1]'
% Class C2 : [1 0]', [2 1]', [3 -1]', [3 1]', [3 2]', [4 -2]', [4 1]',
%            [5 -1]' and [5 0]'
clear;
inp=[-2 -2 -1 -1 0 0 0 0 1 1 2 3 3 3 4 4 5 5;2 3 1 4 0 1 2 3 1 0 1 -1 1 2 -2 1 -1 0];
out=[1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0];
choice=input('1: Perceptron Learning Law\n2: LMS Learning Law\n Enter your choice :');
switch choice
case 1
network=newp([-2 5;-2 4],1);
network=init(network);
y=sim(network,inp);
figure,plot(inp,out,inp,y,'o'),title('Before Training');
axis([-10 20 -2.0 2.0]);
network.trainParam.epochs = 20;
network=train(network,inp,out);
y=sim(network,inp);
figure,plot(inp,out,inp,y,'o'),title('After Training');
axis([-10 20 -2.0 2.0]);
display('Final weight vector and bias values : \n');
Weights=network.iw{1};
Bias=network.b{1};
Weights
Bias
Actual_Desired=[y' out'];
Actual_Desired
case 2
network=newlin([-2 5;-2 4],1);
network=init(network);
y=sim(network,inp);
network=adapt(network,inp,out);
y=sim(network,inp);
display('Final weight vector and bias values : \n');
Weights=network.iw{1};
Bias=network.b{1};

Weights
Bias
Actual_Desired=[y' out'];
Actual_Desired
otherwise
error('Wrong Choice');
end
OUTPUT:
1: Perceptron Learning Law
2: LMS Learning Law
Enter your choice :1

Final weight vector and bias values : \n

Weights =

     (final 1x2 weight vector; numeric values lost in extraction)

Bias =

     (final bias value)

Actual_Desired =

     (18x2 matrix of actual network outputs against the desired class labels)

(Figures: "Before Training" and "After Training" plots of the targets and network outputs over the range [-10 20] x [-2 2], followed by the training performance plot reporting "Best Training Performance is 0.5 at epoch 0", Mean Absolute Error (mae), over 2 epochs.)

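For the perceptron case the learned decision boundary w(1)*x1 + w(2)*x2 + b = 0 can be drawn over the two classes; a hedged sketch that reuses inp, out, Weights and Bias from the run above and assumes Weights(2) is nonzero.

% Sketch: plot both classes and the learned separating line
figure; hold on; grid on;
plot(inp(1,out==1),inp(2,out==1),'b+');        % class C1
plot(inp(1,out==0),inp(2,out==0),'ro');        % class C2
x1=-3:0.1:6;
x2=-(Weights(1)*x1+Bias)/Weights(2);           % w(1)*x1 + w(2)*x2 + b = 0
plot(x1,x2,'k-'); title('Perceptron decision boundary');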

Experiment No. 5
AIM: WAP to implement ART Neural Network
CODE:
%ART Neural Net
clc;
clear;
b=[0.57 0.0 0.3;0.0 0.0 0.3;0.0 0.57 0.3;0.0 0.47 0.3];
t=[1 1 0 0;1 0 0 1;1 1 1 1];
vp=0.4;
L=2;
x=[1 0 1 1];
s=x;
ns=sum(s);
y=x*b;
con=1;
while con
for i=1:3
if y(i)==max(y)
J=i;
end
end
x=s.*t(J,:);
nx=sum(x);
if nx/ns >= vp
b(:,J)=L*x(:)/(L-1+nx);
t(J,:)=x(1,:);
con=0;
else
y(J)=-1;
con=1;
end
if y+1==0
con=0;
end
end
disp('Top Down Weights');
disp(t);
disp('Bottom up Weights');
disp(b);


OUTPUT:
Top Down Weights
1 1 0 0
1 0 0 1
1 1 1 1
Bottom up Weights
    0.5700    0.6667    0.3000
         0         0    0.3000
         0         0    0.3000
         0    0.6667    0.3000
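The single update above can be confirmed by hand: the net input y = x*b is [0.57 1.04 0.90], so cluster unit 2 wins, and the match ratio 2/3 passes the vigilance test (0.4). A hedged sketch that re-derives those quantities from the initial weights:

% Sketch: re-check the first competition and the vigilance test
b0=[0.57 0.0 0.3;0.0 0.0 0.3;0.0 0.57 0.3;0.0 0.47 0.3];  % initial bottom-up weights
t0=[1 1 0 0;1 0 0 1;1 1 1 1];                             % initial top-down weights
s=[1 0 1 1];
y0=s*b0                              % = [0.57 1.04 0.90], unit 2 wins
match=sum(s.*t0(2,:))/sum(s)         % = 2/3 >= vigilance 0.4, so unit 2 is updated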


Experiment No. 6
AIM: WAP to implement BAM Network
CODE:
%Bidirectional Associative Memory neural net
clc;
clear;
s=[1 1 0;1 0 1];
t=[1 0;0 1];
x=2*s-1
y=2*t-1
w=zeros(3,2);
for i=1:2
w=w+x(i,:)'*y(i,:);
end
disp('the calculated weight matrix');
disp(w);
OUTPUT:
x =
     1     1    -1
     1    -1     1

y=
1 -1
-1 1
the calculated weight matrix
0 0
2 -2
-2 2
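A forward recall check (a hedged addition that reuses x, y and w from the run above) confirms that each stored input pattern retrieves its paired output:

% Sketch: forward recall through the BAM weight matrix
yr=sign(x*w)                 % reproduces y = [1 -1; -1 1]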


Experiment No. 7
AIM: WAP to implement Full Counter Propagation Network with input pair
CODE:
%Full Counter Propagation Network for given input pair
clc;
clear;
%set initial weights
v=[0.6 0.2;0.6 0.2;0.2 0.6;0.2 0.6];
w=[0.4 0.3;0.4 0.3;];
x=[0 1 1 0];
y=[1 0];
alpha=0.3;
for j=1:2
D(j)=0;
for i=1:4
D(j)=D(j)+(x(i)-v(i,j))^2;
end
for k=1:2
D(j)=D(j)+(y(k)-w(k,j))^2;
end
end
for j=1:2
if D(j)==min(D)
J=j;
end
end
disp('After one step the weight matrices are');
v(:,J)=v(:,J)+alpha*(x'-v(:,J))
w(:,J)=w(:,J)+alpha*(y'-w(:,J))


OUTPUT:
After one step the weight matrices are
v =
    0.4200    0.2000
    0.7200    0.2000
    0.4400    0.6000
    0.1400    0.6000

w =
    0.5800    0.3000
    0.2800    0.3000
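It can also help to print the squared distances that decide the winning unit; for this input pair D = [1.72 1.78], so J = 1. A hedged addition to the listing above:

% Sketch: show the distances and the winning cluster unit
disp('Squared distances D and winning unit J');
disp(D);                     % [1.7200 1.7800]
disp(J);                     % 1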


Experiment No. 8
AIM: WAP to implement Discrete Hopfield Network
CODE:
% discrete hopfield net
clc;
clear;
x=[1 1 1 0];
tx=[0 0 1 0];
w=(2*x'-1)*(2*x-1);
for i=1:4
w(i,i)=0;
end
con=1;
y=[0 0 1 0]
while con
up=[4 2 1 3];
for i= 1:4
yin(up(i))=tx(up(i))+y*w(1:4,up(i));
if yin(up(i))>0
y(up(i))=1;
end
end
if y==x
disp('convergence has been obtained');
disp('the convergence output');
disp(y);
con=0;
end
end
OUTPUT:
y =
     0     0     1     0

convergence has been obtained
the convergence output
     1     1     1     0
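As an extra check (a hedged addition reusing y, w and tx from above), the Hopfield energy of the converged state can be evaluated; asynchronous updates never increase this quantity, so the stored pattern sits in a local minimum:

% Sketch: energy of the converged state y with external input tx
E=-0.5*y*w*y'-tx*y'          % = -4 for this run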


Experiment No. 9
AIM: WAP to implement Hebb Network
CODE:
%Hebb Net to classify two dimensional inputs patterns
clear;
clc;
%Input Patterns
E=[1 1 1 1 1 -1 -1 -1 1 1 1 1 1 -1 -1 -1 1 1 1 1];
F=[1 1 1 1 1 -1 -1 -1 1 1 1 1 1 -1 -1 -1 1 -1 -1 -1];
x(1,1:20)=E;
x(2,1:20)=F;
w(1:20)=0;
t=[1 -1];
b=0;
for i=1:2
w=w+x(i,1:20)*t(i);
b=b+t(i);
end
disp('Weight matrix');
disp(w);
disp('Bias');
disp(b);
OUTPUT:
Weight matrix
  Columns 1 through 12
     0     0     0     0     0     0     0     0     0     0     0     0
  Columns 13 through 20
     0     0     0     0     0     2     2     2

Bias
     0
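The learned Hebb weights can be used to classify the two training patterns; a hedged sketch reusing x, w and b from the listing above (pattern E gives a positive net input, pattern F a negative one):

% Sketch: recall with the trained Hebb weights
for i=1:2
    yin=b+x(i,1:20)*w';
    if yin>0
        disp('pattern classified as +1 (E)');
    else
        disp('pattern classified as -1 (F)');
    end
end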


Experiment No.10
AIM: WAP to implement Hetero associative neural net for mapping input vectors to output vectors
CODE:
%Hetero associative neural net for mapping input vectors to output vectors
clc;
clear;
x=[1 1 0 0;1 0 1 0;1 1 1 0;0 1 1 0];
t=[1 0;1 0;0 1;0 1];
w=zeros(4,2);
for i=1:4
w=w+x(i,1:4)'*t(i,1:2);
end
disp('weight matrix');
disp(w);
OUTPUT:
weight matrix
     2     1
     1     2
     1     2
     0     0

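The weight matrix is simply the sum of the outer products of the input/target pairs; as a hedged illustration, the contribution of the first pair alone can be displayed:

% Sketch: contribution of the first training pair to the weight matrix
x(1,1:4)'*t(1,1:2)           % = [1 0; 1 0; 0 0; 0 0]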

Experiment No. 11
AIM: WAP to implement Delta Learning Rule
CODE:
% Determine the weights of a network with 4 input and 2 output units using
% Delta Learning Law with f(x)=1/(1+exp(-x)) for the following input-output
% pairs:
%
% Input: [1100]' [1001]' [0011]' [0110]'
% output: [11]' [10]' [01]' [00]'
% Discuss your results for different choices of the learning rate parameters.
% Use suitable values for the initial weights.
in=[1 1 0 0 -1;1 0 0 1 -1; 0 0 1 1 -1; 0 1 1 0 -1];
out=[1 1; 1 0; 0 1; 0 0];
eta=input('Enter the learning rate value = ');
it=input('Enter the number of iterations required = ');
wgt=input('Enter the weights,2 by 5 matrix(including weight for bias):\n');
for x=1:it
for i=1:4
s1=0;
s2=0;
for j=1:5
s1=s1+in(i,j)*wgt(1,j);
s2=s2+in(i,j)*wgt(2,j);
end
wi=eta*(out(i,1)-logsig(s1))*dlogsig(s1,logsig(s1))*in(i,:);
wgt(1,:)=wgt(1,:)+wi;
wi=eta*(out(i,2)-logsig(s2))*dlogsig(s2,logsig(s2))*in(i,:);
wgt(2,:)=wgt(2,:)+wi;
end
end
wgt


OUTPUT:
Enter the learning rate value = 0.6
Enter the number of iterations required = 1
Enter the weights,2 by 5 matrix(including weight for bias):
[1 2 1 3 1;1 0 1 0 2]

wgt =

    1.0088    1.9508    0.9177    2.9757    1.0736
    1.0476    0.0418    1.0420    0.0478    1.9104
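With the trained weights, the outputs of the two units for all four patterns can be computed through the logistic activation; a hedged sketch reusing in and wgt from the run above:

% Sketch: outputs of the trained network for each training pattern
y=logsig(in*wgt')            % each row is [output1 output2] for one pattern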


Experiment No. 12
AIM: WAP to implement XOR function for MADALINE NN
CODE:
%Madaline for XOR function
clc;
clear;
%Input and Target
x=[1 1 -1 -1;1 -1 1 -1];
t=[-1 1 1 -1];
%Assume initial weight matrix and bias
w=[0.05 0.1;0.2 0.2];
b1=[0.3 0.15];
v=[0.5 0.5];
b2=0.5;
con=1;
alpha=0.5;
epoch=0;
while con
con=0;
for i=1:4
for j=1:2
zin(j)=b1(j)+x(1,i)*w(1,j)+x(2,i)*w(2,j);
if zin(j)>=0
z(j)=1;
else
z(j)=-1;
end
end
yin=b2+z(1)*v(1)+z(2)*v(2);
if yin>=0
y=1;
else
y=-1;
end
if y~=t(i)
con=1;
if t(i)==1
if abs(zin(1))>abs(zin(2))
k=2;
else
k=1;
end
b1(k)=b1(k)+alpha*(1-zin(k));
w(1:2,k)=w(1:2,k)+alpha*(1-zin(k))*x(1:2,i);
else
for k=1:2
if zin(k)>0;
b1(k)=b1(k)+alpha*(-1-zin(k));
w(1:2,k)=w(1:2,k)+alpha*(-1-zin(k))*x(1:2,i);
end
end
end
end
end
epoch=epoch+1;
end
disp('weight matrix of hidden layer');
disp(w);
disp('Bias of hidden layer');
disp(b1);
disp('Total Epoch');
disp(epoch);
OUTPUT:
weight matrix of hidden layer
    0.2812   -2.1031
   -0.6937    0.9719

Bias of hidden layer
   -1.3562   -1.6406

Total Epoch
     3
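The trained MADALINE can be replayed on the four patterns to confirm the XOR mapping; a hedged sketch reusing x, t, w, b1, v and b2 from the run above:

% Sketch: recall through the trained MADALINE
for i=1:4
    zin=b1+x(:,i)'*w;                   % hidden-layer net inputs
    z=2*(zin>=0)-1;                     % bipolar hidden outputs
    y=2*((b2+z*v')>=0)-1;               % output unit
    fprintf('x=[%2d %2d]  y=%2d  target=%2d\n',x(1,i),x(2,i),y,t(i));
end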


Experiment No. 13
AIM: WAP to implement AND function in Perceptron NN
CODE:
%Perceptron for AND function
clear;
clc;
x=[1 1 -1 -1;1 -1 1 -1];
t=[1 -1 -1 -1];
w=[0 0];
b=0;
alpha=input('Enter Learning rate=');
theta=input('Enter Threshold value=');
con=1;
epoch=0;
while con
con=0;
for i=1:4
yin=b+x(1,i)*w(1)+x(2,i)*w(2);
if yin>theta;
y=1;
end
if yin<=theta & yin>=-theta
y=0;
end
if yin<-theta
y=-1;
end
if y~=t(i)
con=1;
for j=1:2
w(j)=w(j)+alpha*t(i)*x(j,i);
end
b=b+alpha*t(i);
end
end
epoch=epoch+1;
end
disp('Perceptron for AND function');
disp('Final weight matrix');
disp(w);
disp('Final Bias');
disp(b);


OUTPUT:
Enter Learning rate=0.6
Enter Threshold value=0.8
Perceptron for AND function
Final weight matrix
    1.2000    1.2000

Final Bias
   -1.2000
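With alpha = 0.6 and theta = 0.8 the trained weights reproduce the AND truth table; a hedged recall sketch reusing x, t, w, b and theta from the run above:

% Sketch: recall with the trained perceptron weights
for i=1:4
    yin=b+w*x(:,i);
    y=(yin>theta)-(yin<-theta);         % +1, 0 or -1
    fprintf('x=[%2d %2d]  y=%2d  target=%2d\n',x(1,i),x(2,i),y,t(i));
end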


Experiment No. 14
AIM: WAP to implement Perceptron Network
CODE:
clear;
clc;
p1=[1 1]';p2=[1 2]';
p3=[-2 -1]';p4=[2 -2]';
p5=[-1 2]';p6=[-2 -1]';
p7=[-1 -1]';p8=[-2 -2]';
% define the input matrix , which is also a target matrix for auto
% association
P=[p1 p2 p3 p4 p5 p6 p7 p8];
%we will initialize the network to zero initial weights
net= newlin([min(min(P)) max(max(P)); min(min(P)) max(max(P))],2);
weights = net.iw{1,1}
%set training goal (zero error)
net.trainParam.goal=0.0;
%number of epochs
net.trainParam.epochs=400;
[net, tr]= train(net,P,P);
%target matrix T=P
%default training function is Widrow-Hoff Learning for newlin defined
%weights and bias after the training
W=net.iw{1,1}
B=net.b{1}
Y=sim(net,P);
%Haming like distance criterion
criterion=sum(sum(abs(P-Y)')')
%calculate and plot the errors
rs=Y-P; legend(['criterion=' num2str(criterion)])
figure
plot(rs(1,:),rs(2,:),'k*')
test=P+rand(size(P))/10;
%let's add some noise to the input and test the network again
Ytest=sim(net,test);
criteriontest=sum(sum(abs(P-Ytest)')')
figure
output=Ytest-P
%plot errors in the output
plot(output(1,:),output(2,:),'k*')


OUTPUT:
weights =

     0     0
     0     0

W =

    1.0000   -0.0000
   -0.0000    1.0000

B =

   1.0e-12 *

   -0.1682
   -0.0100

criterion =

   1.2085e-12

Warning: Plot empty.
> In legend at 287

criteriontest =

    0.9751

output =

  Columns 1 through 7

    0.0815    0.0127    0.0632    0.0278    0.0958    0.0158    0.0957
    0.0906    0.0913    0.0098    0.0547    0.0965    0.0971    0.0485

  Column 8

    0.0800
    0.0142

(Figures: scatter plots of the output errors; for the noise-free inputs the errors are on the order of 1e-13, while for the noisy test inputs they lie roughly between 0.01 and 0.1.)

(Figure: training performance plot, "Best Training Performance is 1.3086e-26 at epoch 400", Mean Squared Error (mse) over 400 epochs.)

(Figure: training state plot, "Validation Checks = 0, at epoch 400".)


(Figures: regression plots, Training: R=0.97897, Validation: R=0.92226, Test: R=0.90828, All: R=0.96404.)
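Because the autoassociation task is linear and the training error is essentially zero, the learned weight matrix should be close to the 2x2 identity; a hedged one-line check using W from the run above:

% Sketch: the trained linear autoassociator approximates the identity map
identity_gap=norm(W-eye(2))             % expected to be very small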


Experiment No.15
AIM: WAP to implement feed forward network
CODE:
% a)Design and Train a feedforward network for the following problem:
% Parity: Consider a 4-input and 1-output problem, where the output should be
% 'one' if there are odd number of 1s in the input pattern and 'zero'
% other-wise.
clear
inp=[0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1;0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1;...
0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1;0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1];
out=[0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0];
network=newff([0 1;0 1; 0 1; 0 1],[6 1],{'logsig','logsig'});
network=init(network);
y=sim(network,inp);
figure,plot(inp,out,inp,y,'o'),title('Before Training');
axis([-5 5 -2.0 2.0]);
network.trainParam.epochs = 500;
network=train(network,inp,out);
y=sim(network,inp);
figure,plot(inp,out,inp,y,'o'),title('After Training');
axis([-5 5 -2.0 2.0]);
Layer1_Weights=network.iw{1};
Layer1_Bias=network.b{1};
Layer2_Weights=network.lw{2};
Layer2_Bias=network.b{2};
Layer1_Weights
Layer1_Bias
Layer2_Weights
Layer2_Bias
Actual_Desired=[y' out'];
Actual_Desired
OUTPUT:
Layer1_Weights =

     1.0765     2.1119     2.6920     2.3388
   -10.4592   -10.9392    10.0824    10.9071
     6.0739     9.4600    -5.3666    -5.9492
    -6.0494   -18.5892    -5.9393     5.6923
    -2.5863    -1.7445   -11.6903     3.7168
    10.7251   -10.5659     9.8250   -10.4745

Layer1_Bias =

  -16.0634
    5.4848
    9.5144
    9.6231
    7.4340
    5.7091

Layer2_Weights =

   -2.5967  -23.3294   15.7618   23.4261  -22.5208  -23.3569

Layer2_Bias =

   18.4268


Actual_Desired =

   (16x2 matrix of actual network outputs against the parity targets; after training the outputs match the 0/1 targets to within about 1e-4, e.g. 0.0000 against 0 and 0.9999 against 1)


(Figures: "Before Training" and "After Training" plots of the targets and network outputs over the range [-5 5] x [-2 2].)


(Figure: training performance plot, "Best Training Performance is 3.3016e-09 at epoch 26", Mean Squared Error (mse) over 26 epochs.)


(Figures: training state plot, Gradient = 1.9837e-08, Mu = 1e-13 and Validation Checks = 0 at epoch 26, and the regression plot Training: R=1 with Output ~= 1*Target + 1.6e-05.)
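Since the parity targets are 0/1, the trained network can also be scored by thresholding its outputs at 0.5; a hedged sketch reusing network, inp and out from the run above:

% Sketch: count correctly classified parity patterns
y=sim(network,inp);
correct=sum(round(y)==out);
fprintf('%d of 16 parity patterns classified correctly\n',correct);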


Experiment No. 16
AIM: WAP to implement Instar learning Rule
CODE:
% Using the Instar learning law, group all the sixteen possible binary
% vectors of length 4 into four different groups. Use suitable values for
% the initial weights and for the learning rate parameter. Use a 4-unit
% input and 4-unit output network. Select random initial weights in the
% range [0,1]
in=[0 0 0 0;0 0 0 1;0 0 1 0;0 0 1 1;0 1 0 0;0 1 0 1;0 1 1 0;0 1 1 1; ...
    1 0 0 0;1 0 0 1;1 0 1 0;1 0 1 1;1 1 0 0;1 1 0 1;1 1 1 0;1 1 1 1];
wgt=[0.4 0.1 0.2 0.7; 0.9 0.7 0.4 0.7; 0.1 0.2 0.9 0.8 ; 0.5 0.6 0.7 0.6];
eta=0.5;
it=3000;
for t=1:it
for i=1:16
for j=1:4
w(j)=in(i,:)*wgt(j,:)';
end
[v c]=max(w);
wgt(c,:)=wgt(c,:)+eta*(in(i,:)-wgt(c,:));
k=power(wgt(c,:),2);
f=sqrt(sum(k));
wgt(c,:)=wgt(c,:)/f;
end
end
for i=1:16
for j=1:4
w(j)=in(i,:)*wgt(j,:)';
end
[v c]=max(w);
if(v==0)
c=4;
end
s=['Input= ' int2str(in(i,:)) ' Group= ' int2str(c)];
display(s);
end
wgt


OUTPUT:
s=

Input= 0 0 0 0 Group= 4

s=

Input= 0 0 0 1 Group= 1

s=

Input= 0 0 1 0 Group= 3

s=

Input= 0 0 1 1 Group= 3

s=

Input= 0 1 0 0 Group= 2


s=

Input= 0 1 0 1 Group= 4

s=

Input= 0 1 1 0 Group= 2

s=

Input= 0 1 1 1 Group= 4

s=

Input= 1 0 0 0 Group= 1

s=

Input= 1 0 0 1 Group= 1


s=

Input= 1 0 1 0 Group= 3

s=

Input= 1 0 1 1 Group= 3

s=

Input= 1 1 0 0 Group= 2

s=

Input= 1 1 0 1 Group= 1

s=

Input= 1 1 1 0 Group= 2


s=

Input= 1 1 1 1 Group= 4

wgt =

    0.6548    0.4318    0.0000    0.6203
    0.5646    0.6819    0.4651    0.0000
    0.5646    0.0000    0.6819    0.4651
    0.3877    0.5322    0.5322    0.5322
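Because every winning weight vector is renormalised after its update, each updated row of the final weight matrix should have unit length (all four rows here do); a hedged one-line check using wgt from the run above:

% Sketch: row norms of the instar weight matrix, each expected to be about 1
sqrt(sum(wgt.^2,2))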


Experiment No.17
AIM: WAP to implement weight vector matrix
CODE:
clc;
clear;
x= [-1 -1 -1 -1; -1 -1 1 1 ];
t= [1 1 1 1];
w= zeros(4,4);
for i=1:2
w= w+x(i,1:4)'*x(i,1:4);
end
yin= t*w;
for i= 1:4
if yin(i)>0
y(i)=1;
else
y(i)= -1;
end
end
disp('The Calculated Weight Matrix');
disp(w);
if x(1,1:4)==y(1:4)| x(2,1:4)==y(1:4)
disp('the vector is a known vector');
else
disp('the vector is an unknown vector');
end

OUTPUT:
The Calculated Weight Matrix
     2     2     0     0
     2     2     0     0
     0     0     2     2
     0     0     2     2

the vector is an unknown vector
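Probing with one of the stored bipolar vectors instead of t recalls that vector exactly; a hedged sketch reusing x and w from the run above:

% Sketch: recall of the first stored vector through the weight matrix
yin=x(1,1:4)*w;              % = [-4 -4 -4 -4]
yr=2*(yin>0)-1               % = [-1 -1 -1 -1], the stored vector itself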


