
CSC 367 2.0 Mathematical Computing

Assignment 3
Radial Basis Functions

M.K.H. Gunasekara
AS2010377
Special Part 1
Department of Computer Science
University of Sri Jayewardenepura


Table of Contents

- Introduction
- Methodology
- Implementation
- Results
- Discussion
- Appendices


Introduction
Neural networks offer a powerful framework for representing nonlinear mappings from several inputs to one or more outputs. An important application of neural networks is regression: instead of mapping the inputs to a discrete class label, the network maps the input variables to continuous values. A major class of neural networks is the radial basis function (RBF) neural network. We will look at the architecture of RBF neural networks, followed by their applications in both regression and classification.

In this report the radial basis function network is discussed with its centres positioned by an unsupervised learning algorithm (clustering). The network is simulated to separate the three flower species in the Iris data set, available at http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data.


Methodology
Radial Basis Function

Figure 01 : One hidden layer with Radial Basis Activation Functions


Radial basis function (RBF) networks typically have three layers:
1. an input layer
2. a hidden layer with a non-linear RBF activation function
3. an output layer

The network output is a weighted sum of the hidden-node activations,

    y(x) = sum_{i=1..N} w_i * phi(||x - c_i||)

where N is the number of neurons in the hidden layer, c_i is the center vector for neuron i, and w_i is the weight of neuron i in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance, and the radial basis function is commonly taken to be the Gaussian function

    phi(x) = exp(-||x - c_i||^2 / (2*sigma^2))    ------ (1)

There are some other radial basis functions:

Logistic basis function
    phi(x) = 1 / (1 + exp(||x - c_i||^2 / sigma^2))

Multi-quadratics
    phi(x) = sqrt(||x - c_i||^2 + sigma^2)
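As a quick illustration, these basis functions can be sketched in plain Python (a minimal sketch; the function names are mine, r stands for the distance ||x - c_i||, and the logistic and multi-quadratic forms are reconstructed from the garbled originals, so they may differ in detail from the author's):

```python
import math

def gaussian(r, sigma):
    # Equation (1): phi(r) = exp(-r^2 / (2 sigma^2))
    return math.exp(-r ** 2 / (2 * sigma ** 2))

def logistic(r, sigma):
    # Logistic basis: phi(r) = 1 / (1 + exp(r^2 / sigma^2))
    return 1.0 / (1.0 + math.exp(r ** 2 / sigma ** 2))

def multiquadric(r, sigma):
    # Multi-quadratic: phi(r) = sqrt(r^2 + sigma^2)
    return math.sqrt(r ** 2 + sigma ** 2)
```

Note that the Gaussian and logistic activations decrease with distance from the centre, while the multi-quadratic grows with it.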


Input nodes are connected by weights to a set of RBF neurons, each of which fires according to the distance between the input and the neuron's position in weight space.

The activations of these nodes are used as inputs to the second layer. The second layer (output layer) is treated as a simple perceptron network.
Training the RBF Network
Training is done in two stages: first position the RBF nodes, then use the activations of the RBF nodes to train the linear outputs.
Positioning the RBF nodes can be done in two ways. The first method randomly picks some of the data points to act as basis-function centres. The second method tries to position the nodes so that they are representative of typical inputs, for example by using the k-means clustering algorithm.
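The k-means positioning step can be sketched as follows (a minimal pure-Python version of Lloyd's algorithm; the function name and toy interface are my own, not the report's MATLAB call):

```python
import math, random

def kmeans(points, k, iters=20, seed=0):
    # Lloyd's algorithm: alternate nearest-centre assignment and centroid update.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign p to the nearest current centre (squared Euclidean distance)
            j = min(range(k), key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[j].append(p)
        # recompute each centre as the mean of its cluster (keep old centre if empty)
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers
```

The returned centres would then serve as the RBF node positions.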
The activation function has a width (standard deviation) parameter sigma.
One option is to give all nodes the same width, testing lots of different widths using a validation set and selecting the one that works best. Alternatively we can select the width of the RBF nodes so that the whole space is covered by the receptive fields. Then the width of the Gaussian is set according to the maximum distance between the locations of the hidden nodes (d) and the number of hidden nodes (M):

    sigma = d / sqrt(2*M)    ------ (2)
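Equation (2) can be computed directly from the centre positions (a small Python sketch; `rbf_width` is a name of my choosing):

```python
import math, itertools

def rbf_width(centers, M=None):
    # sigma = d / sqrt(2 M), where d is the maximum pairwise distance
    # between the M hidden-node centres -- equation (2).
    M = M or len(centers)
    d = max(math.dist(a, b) for a, b in itertools.combinations(centers, 2))
    return d / math.sqrt(2 * M)
```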

We can also use the normalized Gaussian function:

    phi_j(x) = exp(-||x - c_j||^2 / (2*sigma^2)) / sum_{k=1..N} exp(-||x - c_k||^2 / (2*sigma^2))    ------ (3)
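A sketch of the hidden-layer computation with optional normalization as in equation (3) (Python; the interface is an assumption for illustration):

```python
import math

def rbf_activations(x, centers, sigma, normalized=False):
    # Gaussian activations of the hidden layer; with normalized=True each
    # activation is divided by the sum over all nodes, as in equation (3).
    g = [math.exp(-math.dist(x, c) ** 2 / (2 * sigma ** 2)) for c in centers]
    if normalized:
        s = sum(g)
        g = [v / s for v in g]
    return g
```

With normalization the activations sum to one, so they behave like soft cluster memberships.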

Outputs of the RBF network: the vector of hidden-node activations, h(x) = (phi_1(x), ..., phi_N(x)), which becomes the input of the second layer.

Training the Perceptron Network
The perceptron network is trained by a supervised learning method: the MLP network is trained against the target labels, using the RBF-layer activations as its inputs.
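The supervised second-layer training can be illustrated with a simple delta-rule (LMS) sketch in Python; this is a stand-in for the MATLAB MLP training used in the report, not the report's actual code:

```python
def train_linear_layer(H, T, lr=0.1, epochs=200):
    # Delta-rule (LMS) training of a single linear output unit on the
    # RBF activations H (rows of features) against scalar targets T.
    w = [0.0] * len(H[0])
    b = 0.0
    for _ in range(epochs):
        for h, t in zip(H, T):
            y = sum(wi * hi for wi, hi in zip(w, h)) + b  # linear output
            err = t - y                                   # target minus prediction
            w = [wi + lr * err * hi for wi, hi in zip(w, h)]
            b += lr * err
    return w, b
```

Because the RBF layer is fixed at this point, only this linear layer needs training, which is what makes RBF networks comparatively fast to fit.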


Implementation
The implementation was done in MATLAB 7.10 (R2010a), following these steps:
1. Locate the RBF nodes at the centres
2. Calculate sigma for the Gaussian function
3. Calculate the outputs of the RBF layer (unsupervised training)
4. Build a perceptron network for the second layer (an MLP network without a hidden layer was used)
5. Train the MLP network on the targets, with the RBF-layer outputs as its inputs (supervised training)
6. Simulate the network
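The steps above can be sketched end-to-end on toy data (Python; the data, the centres, and the nearest-activation decision standing in for steps 4-6 are assumptions for illustration only):

```python
import math

def rbf_features(X, centers, sigma):
    # Step 3: Gaussian activations of the RBF layer for every sample.
    return [[math.exp(-math.dist(x, c) ** 2 / (2 * sigma ** 2)) for c in centers]
            for x in X]

# Toy two-cluster data standing in for the iris measurements (assumed values).
X = [(0.0, 0.0), (0.2, 0.1), (3.0, 3.0), (3.1, 2.9)]
T = [1, 1, 2, 2]

centers = [(0.1, 0.05), (3.05, 2.95)]      # step 1: one centre per class (class means)
d = math.dist(centers[0], centers[1])      # maximum centre-to-centre distance
sigma = d / math.sqrt(2 * len(centers))    # step 2: equation (2)
H = rbf_features(X, centers, sigma)        # step 3: RBF-layer outputs

# Steps 4-6 collapsed: a nearest-activation decision stands in for the trained MLP.
pred = [1 + max(range(len(centers)), key=lambda j: h[j]) for h in H]
```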

I have implemented the RBF network with different strategies so the results can be compared:
- randomly selected centres
- k-means cluster centres
- the non-normalized Gaussian function
- the normalized Gaussian function
- an SVM for the second layer


Results
Classification of the 150 Iris samples (FALSE marks a misclassified sample):

sepal length  sepal width  petal length  petal width  Expected Target   Actual Output
5.1  3.5  1.4  0.2  Iris-setosa  Iris-setosa
4.9  3    1.4  0.2  Iris-setosa  Iris-setosa
4.7  3.2  1.3  0.2  Iris-setosa  Iris-setosa
4.6  3.1  1.5  0.2  Iris-setosa  Iris-setosa
5    3.6  1.4  0.2  Iris-setosa  Iris-setosa
5.4  3.9  1.7  0.4  Iris-setosa  Iris-setosa
4.6  3.4  1.4  0.3  Iris-setosa  Iris-setosa
5    3.4  1.5  0.2  Iris-setosa  Iris-setosa
4.4  2.9  1.4  0.2  Iris-setosa  Iris-setosa
4.9  3.1  1.5  0.1  Iris-setosa  Iris-setosa
5.4  3.7  1.5  0.2  Iris-setosa  Iris-setosa
4.8  3.4  1.6  0.2  Iris-setosa  Iris-setosa
4.8  3    1.4  0.1  Iris-setosa  Iris-setosa
4.3  3    1.1  0.1  Iris-setosa  Iris-setosa
5.8  4    1.2  0.2  Iris-setosa  Iris-setosa
5.7  4.4  1.5  0.4  Iris-setosa  Iris-setosa
5.4  3.9  1.3  0.4  Iris-setosa  Iris-setosa
5.1  3.5  1.4  0.3  Iris-setosa  Iris-setosa
5.7  3.8  1.7  0.3  Iris-setosa  Iris-setosa
5.1  3.8  1.5  0.3  Iris-setosa  Iris-setosa
5.4  3.4  1.7  0.2  Iris-setosa  Iris-setosa
5.1  3.7  1.5  0.4  Iris-setosa  Iris-setosa
4.6  3.6  1    0.2  Iris-setosa  Iris-setosa
5.1  3.3  1.7  0.5  Iris-setosa  Iris-setosa
4.8  3.4  1.9  0.2  Iris-setosa  Iris-setosa
5    3    1.6  0.2  Iris-setosa  Iris-setosa
5    3.4  1.6  0.4  Iris-setosa  Iris-setosa
5.2  3.5  1.5  0.2  Iris-setosa  Iris-setosa
5.2  3.4  1.4  0.2  Iris-setosa  Iris-setosa
4.7  3.2  1.6  0.2  Iris-setosa  Iris-setosa
4.8  3.1  1.6  0.2  Iris-setosa  Iris-setosa
5.4  3.4  1.5  0.4  Iris-setosa  Iris-setosa
5.2  4.1  1.5  0.1  Iris-setosa  Iris-setosa
5.5  4.2  1.4  0.2  Iris-setosa  Iris-setosa
4.9  3.1  1.5  0.1  Iris-setosa  Iris-setosa
5    3.2  1.2  0.2  Iris-setosa  Iris-setosa
5.5  3.5  1.3  0.2  Iris-setosa  Iris-setosa
4.9  3.1  1.5  0.1  Iris-setosa  Iris-setosa
4.4  3    1.3  0.2  Iris-setosa  Iris-setosa
5.1  3.4  1.5  0.2  Iris-setosa  Iris-setosa
5    3.5  1.3  0.3  Iris-setosa  Iris-setosa
4.5  2.3  1.3  0.3  Iris-setosa  Iris-setosa
4.4  3.2  1.3  0.2  Iris-setosa  Iris-setosa
5    3.5  1.6  0.6  Iris-setosa  Iris-setosa
5.1  3.8  1.9  0.4  Iris-setosa  Iris-setosa
4.8  3    1.4  0.3  Iris-setosa  Iris-setosa
5.1  3.8  1.6  0.2  Iris-setosa  Iris-setosa
4.6  3.2  1.4  0.2  Iris-setosa  Iris-setosa
5.3  3.7  1.5  0.2  Iris-setosa  Iris-setosa
5    3.3  1.4  0.2  Iris-setosa  Iris-setosa
7    3.2  4.7  1.4  Iris-versicolor  FALSE
6.4  3.2  4.5  1.5  Iris-versicolor  Iris-versicolor
6.9  3.1  4.9  1.5  Iris-versicolor  FALSE
5.5  2.3  4    1.3  Iris-versicolor  Iris-versicolor
6.5  2.8  4.6  1.5  Iris-versicolor  Iris-versicolor
5.7  2.8  4.5  1.3  Iris-versicolor  Iris-versicolor
6.3  3.3  4.7  1.6  Iris-versicolor  Iris-versicolor
4.9  2.4  3.3  1    Iris-versicolor  Iris-versicolor
6.6  2.9  4.6  1.3  Iris-versicolor  Iris-versicolor
5.2  2.7  3.9  1.4  Iris-versicolor  Iris-versicolor
5    2    3.5  1    Iris-versicolor  Iris-versicolor
5.9  3    4.2  1.5  Iris-versicolor  Iris-versicolor
6    2.2  4    1    Iris-versicolor  Iris-versicolor
6.1  2.9  4.7  1.4  Iris-versicolor  Iris-versicolor
5.6  2.9  3.6  1.3  Iris-versicolor  Iris-versicolor
6.7  3.1  4.4  1.4  Iris-versicolor  Iris-versicolor
5.6  3    4.5  1.5  Iris-versicolor  Iris-versicolor
5.8  2.7  4.1  1    Iris-versicolor  Iris-versicolor
6.2  2.2  4.5  1.5  Iris-versicolor  Iris-versicolor
5.6  2.5  3.9  1.1  Iris-versicolor  Iris-versicolor
5.9  3.2  4.8  1.8  Iris-versicolor  Iris-versicolor
6.1  2.8  4    1.3  Iris-versicolor  Iris-versicolor
6.3  2.5  4.9  1.5  Iris-versicolor  FALSE
6.1  2.8  4.7  1.2  Iris-versicolor  Iris-versicolor
6.4  2.9  4.3  1.3  Iris-versicolor  Iris-versicolor
6.6  3    4.4  1.4  Iris-versicolor  Iris-versicolor
6.8  2.8  4.8  1.4  Iris-versicolor  FALSE
6.7  3    5    1.7  Iris-versicolor  FALSE
6    2.9  4.5  1.5  Iris-versicolor  Iris-versicolor
5.7  2.6  3.5  1    Iris-versicolor  Iris-versicolor
5.5  2.4  3.8  1.1  Iris-versicolor  Iris-versicolor
5.5  2.4  3.7  1    Iris-versicolor  Iris-versicolor
5.8  2.7  3.9  1.2  Iris-versicolor  Iris-versicolor
6    2.7  5.1  1.6  Iris-versicolor  FALSE
5.4  3    4.5  1.5  Iris-versicolor  Iris-versicolor
6    3.4  4.5  1.6  Iris-versicolor  Iris-versicolor
6.7  3.1  4.7  1.5  Iris-versicolor  FALSE
6.3  2.3  4.4  1.3  Iris-versicolor  Iris-versicolor
5.6  3    4.1  1.3  Iris-versicolor  Iris-versicolor
5.5  2.5  4    1.3  Iris-versicolor  Iris-versicolor
5.5  2.6  4.4  1.2  Iris-versicolor  Iris-versicolor
6.1  3    4.6  1.4  Iris-versicolor  Iris-versicolor
5.8  2.6  4    1.2  Iris-versicolor  Iris-versicolor
5    2.3  3.3  1    Iris-versicolor  Iris-versicolor
5.6  2.7  4.2  1.3  Iris-versicolor  Iris-versicolor
5.7  3    4.2  1.2  Iris-versicolor  Iris-versicolor
5.7  2.9  4.2  1.3  Iris-versicolor  Iris-versicolor
6.2  2.9  4.3  1.3  Iris-versicolor  Iris-versicolor
5.1  2.5  3    1.1  Iris-versicolor  Iris-versicolor
5.7  2.8  4.1  1.3  Iris-versicolor  Iris-versicolor
6.3  3.3  6    2.5  Iris-virginica  Iris-virginica
5.8  2.7  5.1  1.9  Iris-virginica  Iris-virginica
7.1  3    5.9  2.1  Iris-virginica  Iris-virginica
6.3  2.9  5.6  1.8  Iris-virginica  Iris-virginica
6.5  3    5.8  2.2  Iris-virginica  Iris-virginica
7.6  3    6.6  2.1  Iris-virginica  Iris-virginica
4.9  2.5  4.5  1.7  Iris-virginica  FALSE
7.3  2.9  6.3  1.8  Iris-virginica  Iris-virginica
6.7  2.5  5.8  1.8  Iris-virginica  Iris-virginica
7.2  3.6  6.1  2.5  Iris-virginica  Iris-virginica
6.5  3.2  5.1  2    Iris-virginica  Iris-virginica
6.4  2.7  5.3  1.9  Iris-virginica  Iris-virginica
6.8  3    5.5  2.1  Iris-virginica  Iris-virginica
5.7  2.5  5    2    Iris-virginica  Iris-virginica
5.8  2.8  5.1  2.4  Iris-virginica  Iris-virginica
6.4  3.2  5.3  2.3  Iris-virginica  Iris-virginica
6.5  3    5.5  1.8  Iris-virginica  Iris-virginica
7.7  3.8  6.7  2.2  Iris-virginica  Iris-virginica
7.7  2.6  6.9  2.3  Iris-virginica  Iris-virginica
6    2.2  5    1.5  Iris-virginica  Iris-virginica
6.9  3.2  5.7  2.3  Iris-virginica  Iris-virginica
5.6  2.8  4.9  2    Iris-virginica  Iris-virginica
7.7  2.8  6.7  2    Iris-virginica  Iris-virginica
6.3  2.7  4.9  1.8  Iris-virginica  Iris-virginica
6.7  3.3  5.7  2.1  Iris-virginica  Iris-virginica
7.2  3.2  6    1.8  Iris-virginica  Iris-virginica
6.2  2.8  4.8  1.8  Iris-virginica  Iris-virginica
6.1  3    4.9  1.8  Iris-virginica  Iris-virginica
6.4  2.8  5.6  2.1  Iris-virginica  Iris-virginica
7.2  3    5.8  1.6  Iris-virginica  Iris-virginica
7.4  2.8  6.1  1.9  Iris-virginica  Iris-virginica
7.9  3.8  6.4  2    Iris-virginica  Iris-virginica
6.4  2.8  5.6  2.2  Iris-virginica  Iris-virginica
6.3  2.8  5.1  1.5  Iris-virginica  Iris-virginica
6.1  2.6  5.6  1.4  Iris-virginica  Iris-virginica
7.7  3    6.1  2.3  Iris-virginica  Iris-virginica
6.3  3.4  5.6  2.4  Iris-virginica  Iris-virginica
6.4  3.1  5.5  1.8  Iris-virginica  Iris-virginica
6    3    4.8  1.8  Iris-virginica  FALSE
6.9  3.1  5.4  2.1  Iris-virginica  Iris-virginica
6.7  3.1  5.6  2.4  Iris-virginica  Iris-virginica
6.9  3.1  5.1  2.3  Iris-virginica  Iris-virginica
5.8  2.7  5.1  1.9  Iris-virginica  Iris-virginica
6.8  3.2  5.9  2.3  Iris-virginica  Iris-virginica
6.7  3.3  5.7  2.5  Iris-virginica  Iris-virginica
6.7  3    5.2  2.3  Iris-virginica  Iris-virginica
6.3  2.5  5    1.9  Iris-virginica  Iris-virginica
6.5  3    5.2  2    Iris-virginica  Iris-virginica
6.2  3.4  5.4  2.3  Iris-virginica  Iris-virginica
5.9  3    5.1  1.8  Iris-virginica  Iris-virginica

The best RBF-network result used the non-normalized Gaussian activation function, with 9 mismatches out of 150 samples. For comparison, a plain MLP network gave 4 mismatches.

MLP network as second layer (mismatches out of 150):

                                   Random centres   K-means centres
Non-normalized Gaussian function          9                9
Normalized Gaussian function             11               11

Support vector machine as second layer (mismatches out of 150):

                                   Random centres   K-means centres
Non-normalized Gaussian function         14               10
Normalized Gaussian function             14               17


Discussion
1. Unsupervised centre selection has drawbacks in radial basis function networks: randomly chosen centres need not be representative of the data, so the quality of the fit can vary between runs.
2. An SVM can be used for the second layer instead of a perceptron, but a standard SVM is a binary classifier, so it is not efficient for classification with more than two classes; the binary decisions must be combined through a scheme such as one-vs-rest.
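The one-vs-rest combination mentioned in point 2 can be sketched as follows (Python; the centroid scorer is a hypothetical stand-in for a binary SVM, used only to show the wiring):

```python
def one_vs_rest(train_binary, X, labels, classes):
    # Train one binary detector per class (class vs. everything else),
    # then classify by the detector with the largest score.
    models = {c: train_binary(X, [1 if l == c else 0 for l in labels]) for c in classes}
    return lambda x: max(classes, key=lambda c: models[c](x))

# Stand-in binary learner (centroid scorer), purely to illustrate the wiring;
# in the report the binary learner would be an SVM.
def centroid_scorer(X, y):
    pos = [x for x, t in zip(X, y) if t == 1]
    m = sum(pos) / len(pos)
    return lambda x: -abs(x - m)

predict = one_vs_rest(centroid_scorer, [0, 1, 10, 11, 20, 21],
                      ['a', 'a', 'b', 'b', 'c', 'c'], ['a', 'b', 'c'])
```

Scoring all detectors and taking the maximum avoids the ambiguity of the "first detector that fires" rule used in the appendix SVM script.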


Appendices
MATLAB Source Code for the RBF Network with an MLP Second Layer
clc
clear all
% M.K.H. Gunasekara
% AS2010377
% Machine Learning
% Radial Basis Function
[arr tx] = xlsread('data.xls');

% Centres taken as the mean of each known cluster
Centers = zeros(3,4);
Centers(1,:) = mean(arr(1:50,:));
Centers(2,:) = mean(arr(51:100,:));
Centers(3,:) = mean(arr(101:150,:));
Centers

% OR use the k-means algorithm to calculate the cluster centres
k = 3;                 % number of clusters
[IDX, C] = kmeans(arr, k);
C                      % RBF centres
% Uncomment the following line to use the k-means centres
%Centers = C;

% Distances between hidden nodes
dist1 = norm(Centers(1,:) - Centers(2,:));   % node 1 and node 2
dist2 = norm(Centers(1,:) - Centers(3,:));   % node 1 and node 3
dist3 = norm(Centers(3,:) - Centers(2,:));   % node 3 and node 2

% Maximum distance between the centres
maxdist = max([dist1 dist2 dist3]);

% Width of the Gaussian, sigma = d/sqrt(2M) with M = 3 hidden nodes (eq. 2)
sigma = maxdist/sqrt(2*3);

% Outputs of the RBF layer
RBFoutput = zeros(150,3);
d = zeros(1,3);

% Non-normalized Gaussian activations (equation (1)) -- the default
for i = 1:150
    for j = 1:3
        d(1,j) = sum((arr(i,:) - Centers(j,:)).^2);
        RBFoutput(i,j) = exp(-d(1,j)/(2*sigma^2));
    end
end


% Normalized Gaussian activations (equation (3)) -- uncomment this block
% (and comment out the loop above) to use the normalized method
% RBFNormSum = zeros(150,1);
% for i = 1:150
%     for j = 1:3
%         d(1,j) = sum((arr(i,:) - Centers(j,:)).^2);
%         RBFNormSum(i,1) = RBFNormSum(i,1) + exp(-d(1,j)/(2*sigma^2));
%     end
% end
% for i = 1:150
%     for j = 1:3
%         d(1,j) = sum((arr(i,:) - Centers(j,:)).^2);
%         RBFoutput(i,j) = exp(-d(1,j)/(2*sigma^2))/RBFNormSum(i,1);
%     end
% end

RBFoutput
RBFo = RBFoutput.';

% Second layer: MLP network trained on the RBF-layer outputs
T = [ones(1,50) 2*ones(1,50) 3*ones(1,50)];   % class labels 1..3
S = [3 1];
R = [0 1; 0 1; 0 1];   % input ranges (legacy newff syntax, unused)

% Feedforward neural network used as the MLP [3 1]
MLPnet = newff(RBFo, S);
MLPnet.trainParam.epochs = 500;
MLPnet.trainParam.lr = 0.1;
MLPnet.trainParam.mc = 0.9;
MLPnet.trainParam.show = 40;


MLPnet.trainParam.perf = 'mse';
MLPnet.trainParam.goal = 0.001;
MLPnet.trainParam.min_grad = 0.00001;
MLPnet.trainParam.max_fail=4;

MLPnet = train(MLPnet,RBFo,T);
%simulating neural network
y=sim(MLPnet,RBFo);
output=round(y.');
Target=T.';
compare= [T.' output]
count=0;
for i=1:150
if(output(i)~=Target(i))
count=count+1;
end
end
Unmatched=count

MATLAB Source code for RBF Network with SVM


clc
clear all
% M.K.H. Gunasekara
% AS2010377
% Machine Learning
% Radial Basis Function with Support Vector Machine
[arr tx] = xlsread('data.xls');

% Centres taken as the mean of each known cluster
Centers = zeros(3,4);
Centers(1,:) = mean(arr(1:50,:));
Centers(2,:) = mean(arr(51:100,:));
Centers(3,:) = mean(arr(101:150,:));
Centers

% k-means is used to calculate the cluster centres in this script
k = 3;                 % number of clusters
[IDX, C] = kmeans(arr, k);
C                      % RBF centres
Centers = C;

% Distances between hidden nodes
dist1 = norm(Centers(1,:) - Centers(2,:));   % node 1 and node 2
dist2 = norm(Centers(1,:) - Centers(3,:));   % node 1 and node 3
dist3 = norm(Centers(3,:) - Centers(2,:));   % node 3 and node 2

% Maximum distance between the centres
maxdist = max([dist1 dist2 dist3]);

% Width of the Gaussian, sigma = d/sqrt(2M) with M = 3 hidden nodes (eq. 2)
sigma = maxdist/sqrt(2*3);


% Outputs of the RBF layer
RBFoutput = zeros(150,3);
d = zeros(1,3);

% Non-normalized Gaussian activations (equation (1)) -- the default
for i = 1:150
    for j = 1:3
        d(1,j) = sum((arr(i,:) - Centers(j,:)).^2);
        RBFoutput(i,j) = exp(-d(1,j)/(2*sigma^2));
    end
end

% Normalized Gaussian activations (equation (3)) -- uncomment this block
% (and comment out the loop above) to use the normalized method
% RBFNormSum = zeros(150,1);
% for i = 1:150
%     for j = 1:3
%         d(1,j) = sum((arr(i,:) - Centers(j,:)).^2);
%         RBFNormSum(i,1) = RBFNormSum(i,1) + exp(-d(1,j)/(2*sigma^2));
%     end
% end
% for i = 1:150
%     for j = 1:3
%         d(1,j) = sum((arr(i,:) - Centers(j,:)).^2);
%         RBFoutput(i,j) = exp(-d(1,j)/(2*sigma^2))/RBFNormSum(i,1);
%     end
% end

RBFoutput
RBFo = RBFoutput.';
% making SVM network


group = cell(3,1);
for n = 1:150
    tclass(n,1) = tx(n,5);
end
% One binary grouping per class (one-vs-rest)
group{1,1} = ismember(tclass, 'Iris-setosa');
group{2,1} = ismember(tclass, 'Iris-versicolor');
group{3,1} = ismember(tclass, 'Iris-virginica');

% [train, test] = crossvalind('holdOut', group{1,1});   % unused here
% cp = classperf(group{1,1});                           % unused here

% Train one binary SVM per class on the RBF-layer outputs
for i = 1:3
    % svmStruct(i) = svmtrain(RBFoutput(train,:), group{i,1}(train), 'showplot', true);
    svmStruct(i) = svmtrain(RBFoutput, group{i,1}, 'showplot', true);
end

% Classify: take the first class whose detector fires (k stays 3 otherwise)
for j = 1:size(RBFoutput,1)
    for k = 1:3
        if (svmclassify(svmStruct(k), RBFoutput(j,:)))
            break;
        end
    end
    result(j) = k;
end

T = [ones(1,50) 2*ones(1,50) 3*ones(1,50)];   % class labels 1..3
compare = [T.' result.']
Target = T.';
output = result.';
count = 0;
for i = 1:150
    if (output(i) ~= Target(i))
        count = count + 1;
    end
end
Unmatched = count

MATLAB Source Code for the MLP Network


clc
clear all
% M.K.H. Gunasekara
% AS2010377
% Machine Learning
% MLP Network
[arr tx] = xlsread('data.xls');
inputs = arr.';
T = [ones(1,50) 2*ones(1,50) 3*ones(1,50)];   % class labels 1..3
% Multilayer network with layer sizes [4 3 1] (hidden layer of 3 nodes)
MLPnet = newff(inputs, [4 3 1]);
MLPnet.trainParam.epochs = 500;
MLPnet.trainParam.lr = 0.1;
MLPnet.trainParam.mc = 0.9;
MLPnet.trainParam.show = 40;
MLPnet.trainParam.perf = 'mse';
MLPnet.trainParam.goal = 0.001;
MLPnet.trainParam.min_grad = 0.00001;
MLPnet.trainParam.max_fail=4;

MLPnet = train(MLPnet,inputs,T);
%simulating neural network
y=sim(MLPnet,inputs);
output=round(y.');
Target=T.';
compare= [T.' output]
count=0;
for i=1:150
if(output(i)~=Target(i))
count=count+1;
end
end
Unmatched=count

