
Yu Hen Hu

5/9/16

Last (family) name: _________________________
First (given) name: _________________________
Student I.D. #: _____________________________
Department of Electrical and Computer Engineering
ECE/CS/ME 539 Introduction to Artificial Neural Network and Fuzzy Systems

Take Home Final Examination
(From noon, Monday 5/2/2016 till noon, Monday 5/9/2016)

This is a take-home final examination. You are to submit your answers
electronically to the Moodle assignment box before the deadline. If you
prefer hand-writing rather than typing, you should scan the hand-written
pages and include them in the electronic submission.
You CANNOT discuss either the questions or the answers with anyone
except the instructor.
Many of the problems require programming. You are required to attach
copies of your code as part of the submission so that the grader can
run the code to verify the answer.
You cannot copy code from elsewhere EXCEPT the code posted on the
course website.
ABSOLUTELY NO EXTENSION REQUEST WILL BE GRANTED. Be on time.
Any academic misconduct will be pursued to the full extent
according to University rules. You must sign below and scan this
page as part of your submission to receive credit for the final examination.

I, _____________________ (print your name) promise that I will not commit
academic plagiarism. I understand such an offense will result in failure of
this course for both the person who copies another's answer and the person
who lets their answer be copied by others.

IMPORTANT!

During his regular office hours (1-2 PM Tuesday, 2-3 PM Wednesday,
11 AM-noon Thursday) or by appointment, Prof. Hu will be available
to answer questions related to this final examination. Questions may also
be submitted via email.


Problem    Max. pts
   1          15
   2          10
   3          15
   4          30
   5          15
   6          15
 Total       100

1. (15 points) Linear Classification

Let X = [x1, x2, …, xN] be an m × N matrix consisting of N m-dimensional
feature vectors {xn; 1 ≤ n ≤ N}. Suppose there is a hyperplane
H: {x; g(x) = w^T x − c = 0} that separates these feature vectors into two
subgroups, one labeled with −1 and the other labeled with +1. We define a
1 × N vector y = sgn(w^T X − c[1 1 … 1]) as the binary label vector, where
"sgn" is the sign function such that sgn(x) = +1 if x > 0 and −1 if x < 0.
The projection of each column of X along the direction of w, w^T X, may
then be divided into two groups, one having values greater than c and the
other smaller than c.

A realization of X and the corresponding y vector are stored in the data
file s16p1dat_1.txt. The first m rows belong to the X matrix while the
last row is the y vector.

Find a hyperplane in the form of H: {x; g(x) = w^T x − c = 0} such that
yn = sgn(g(xn)) for 1 ≤ n ≤ N, where y = [y1, y2, …, yN]. Note that this
is a binary classification problem using a linear classifier. You may use
any one or more methods you have learned in this class, namely:
(i) perceptron learning, (ii) linear discriminant analysis, (iii) MLP,
(iv) SVM, or (v) maximum likelihood classifier using a uni-variate Normal
distribution. In this particular example, the training data and testing
data are the same set of X and y. The data set given is linearly separable
using a particular value of w and c, as shown in the figure below. Hence,
the answer should provide w, c, and the corresponding confusion matrix
Cmat. If you obtain an adequate choice of w and c, the Cmat should be a
diagonal matrix (100% correct classification).

[Figure: scatter plot of the two linearly separable classes]

The Matlab code to load the data and set up parameters is:

   load s16p1dat_1.txt;
   x = s16p1dat_1(1:end-1,:);          % m x N feature matrix
   y = s16p1dat_1(end,:);              % 1 x N label vector
   clear s16p1dat_1;
   [m,N] = size(x);
   idx1 = find(y==-1);  N1 = length(idx1);  D1 = x(:,idx1);
   idx2 = find(y==+1);  N2 = length(idx2);  D2 = x(:,idx2);

The Matlab code that computes the Cmat matrix for the given X, y, w, and c
is as follows. Here c is computed to be in the middle of the gap between
the two projected groups; its value may be adjusted to maximize the
probability of correct classification.

   yb  = w'*x;                         % 1 x N projection
   yb1 = yb(y==-1);                    % projections of the -1 class
   yb2 = yb(y==+1);                    % projections of the +1 class
   if mean(yb1) < mean(yb2)            % -1 class is projected to negative values
      sides = 0;                       % -1 side is the negative projection value
      gp = min(yb2) - max(yb1);
      c  = 0.5*(min(yb2) + max(yb1));
   else                                % +1 class is projected to negative values
      sides = 1;                       % +1 side is the negative projection value
      gp = min(yb1) - max(yb2);
      c  = 0.5*(min(yb1) + max(yb2));
   end
   if gp < 0
      disp('*** projection not separated linearly! ***');
   end
   yest = sign(w'*x - c*ones(1,N) + eps);   % 1 x N estimated labels
   % converting output from scalar (+1, -1) to (1, 0) encoding
   ygrnd = double([[y' > 0] [y' < 0]]);     % N x 2, ground truth
   if sides == 0
      ye = double([[yest' > 0] [yest' < 0]]);   % N x 2
   elseif sides == 1
      ye = double([[yest' < 0] [yest' > 0]]);   % N x 2
   end
   Cmat = ygrnd'*ye;
   disp('Cmat = '); disp(Cmat)
   disp(['Prob. Classification = ' num2str(round(100*sum(diag(Cmat))/N)) '%'])
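Of the methods listed above, linear discriminant analysis has a closed-form solution. The following is an illustrative sketch only, in Python/NumPy rather than Matlab, using synthetic two-class data as a stand-in for s16p1dat_1.txt (the data generation and variable names here are my own, not part of the exam): it computes the Fisher direction w = Sw^{-1}(m2 − m1), places c in the middle of the projection gap, and forms the confusion matrix.

```python
import numpy as np

# Synthetic stand-in for s16p1dat_1.txt: two linearly separable 2-D classes.
rng = np.random.default_rng(0)
D1 = rng.normal(loc=[-4.0, 0.0], scale=0.5, size=(50, 2))  # class -1
D2 = rng.normal(loc=[+4.0, 0.0], scale=0.5, size=(50, 2))  # class +1

# Fisher linear discriminant direction: w = Sw^{-1} (m2 - m1)
m1, m2 = D1.mean(axis=0), D2.mean(axis=0)
Sw = np.cov(D1.T) * (len(D1) - 1) + np.cov(D2.T) * (len(D2) - 1)  # within-class scatter
w = np.linalg.solve(Sw, m2 - m1)

# Project each point onto w; place c in the middle of the gap.
# Since w is proportional to Sw^{-1}(m2 - m1), class -1 projects below class +1.
p1, p2 = D1 @ w, D2 @ w
c = 0.5 * (p1.max() + p2.min())
yest = np.sign(np.concatenate([p1, p2]) - c)
y = np.concatenate([-np.ones(len(D1)), np.ones(len(D2))])

# 2x2 confusion matrix Cmat (rows: true class, columns: estimated class);
# a diagonal Cmat means 100% correct classification.
Cmat = np.array([[(yest[y == s] == t).sum() for t in (-1, 1)] for s in (-1, 1)])
print(Cmat)
```

With well-separated data the printed Cmat is diagonal, matching the expected outcome described in the problem statement.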

2. (10 points) SVM

A hypothetical SVM model has the following values of support vectors (SV),
corresponding target values d, and Lagrange multipliers λ:

   SV      x1     x2      d        λ
    1     -23     12      1     0.0286
    2      -2      3     -1     0.0082
    3       4     -4     -1     0.0243
    4      16     -8      1     0.0124
    5      16      6     -1     0.0016
    6      -3      8     -1     0.0027
    7      -1      4      1     0.0615
    8      -5      9     -1     0.8703
    9       1    -15      1     0.0085

Suppose that the polynomial kernel is used. Recall that for the polynomial
kernel, K(x, y) = (1 + x^T y)^2. Hence K(x, xi) = (1 + x^T xi)^2 where xi
is a support vector. Compute the output of this SVM model when the input
feature vector is (13, 6).

3. (15 points) Clustering: DBSCAN, kmeans, SOM

Download the data files s16p3a.txt and s16p3b.txt.

(a) (5 points) Apply the DBSCAN clustering algorithm
(http://www.mathworks.com/matlabcentral/fileexchange/52905-dbscan-clusteringalgorithm)
to cluster s16p3a.txt. Discuss which values of Eps and Minpts should be
used to yield two clusters. Plot the clustering results using different
colors for different clusters.

(b) (5 points) Apply the SOM algorithm to s16p3a.txt using a linear array
of indices of the neurons. Report results with a total number of neurons
N = 10 and 20, respectively. Plot the clustering results as illustrated
in somdemo.m.

(c) (5 points) Apply the kmeans algorithm to s16p3b.txt using 8 clusters.
Plot the clustering results as demonstrated in clusterdemo.m.
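For Problem 2, the output of a kernel SVM is the weighted kernel sum f(x) = Σi λi di K(xi, x), plus a bias term b if one is given (none appears in the problem statement, so b defaults to 0 below). The following sketch is in Python/NumPy and uses a small hypothetical two-support-vector model, not the table above; the support vectors, targets, and multipliers here are made up purely for illustration.

```python
import numpy as np

def poly_kernel(x, y):
    """Polynomial kernel K(x, y) = (1 + x^T y)^2, as in the problem statement."""
    return (1.0 + x @ y) ** 2

def svm_output(x, sv, d, lam, b=0.0):
    """Kernel-SVM decision value f(x) = sum_i lam_i * d_i * K(sv_i, x) + b."""
    return sum(l * t * poly_kernel(s, x) for s, t, l in zip(sv, d, lam)) + b

# Hypothetical two-support-vector model (NOT the table above; values made up)
sv  = np.array([[1.0, 0.0], [0.0, 2.0]])
d   = np.array([+1.0, -1.0])
lam = np.array([0.5, 0.25])

x = np.array([1.0, 1.0])
f = svm_output(x, sv, d, lam)
# K(sv1, x) = (1 + 1)^2 = 4, K(sv2, x) = (1 + 2)^2 = 9
# f = 0.5*(+1)*4 + 0.25*(-1)*9 = 2.0 - 2.25 = -0.25
print(f, np.sign(f))
```

Substituting the exam's nine support vectors, targets, and λ values into the same function evaluates the requested output at (13, 6).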
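For Problem 3(c), any standard kmeans implementation will do. Below is a minimal Lloyd-iteration sketch in Python/NumPy, with synthetic 8-blob data standing in for s16p3b.txt (the data generation, seeds, and function name are assumptions for illustration only).

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: alternate nearest-center assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # initialize from data points
    for _ in range(iters):
        # assign each point to its nearest center
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # recompute each center as the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):  # converged
            break
        centers = new_centers
    return labels, centers

# Synthetic stand-in for s16p3b.txt: 8 well-separated 2-D blobs, 30 points each
rng = np.random.default_rng(1)
means = np.array([[i * 10.0, j * 10.0] for i in range(4) for j in range(2)])
X = np.vstack([m + rng.normal(scale=0.3, size=(30, 2)) for m in means])

labels, centers = kmeans(X, k=8)
# one color per cluster when plotting, e.g. plt.scatter(X[:,0], X[:,1], c=labels)
```

The returned labels array is exactly what the required plot needs: one color index per data point.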