The mean image of the training set of n images is computed as

\mu = \frac{1}{n} \sum_{i=1}^{n} X_i    (1)

Then the mean image is subtracted from each image of the set, so as to center the data:

K_i = X_i - \mu    (2)
Then a matrix is formed by concatenating all mean-centered images:

F = [K_1, K_2, \ldots, K_n]    (3)
A covariance matrix U = FF^T is formed, having dimensions N^2 \times N^2, from which eigenvectors and eigenvalues are obtained. Since computing the eigenvectors of this large matrix directly is expensive, the eigenvectors J_i of the smaller n \times n matrix F^T F are computed instead:

F^T F J_i = \lambda_i J_i    (4)

Pre-multiplying both sides by F, this can also be written as

F F^T F J_i = F(\lambda_i J_i)    (5)

F F^T (F J_i) = \lambda_i (F J_i)    (6)

Thus F J_i is an eigenvector of F F^T, denoted by U_i, and \lambda_i is the corresponding eigenvalue. The U_i represent hazy-looking faces and are called eigenfaces [5]. The eigenfaces with the largest eigenvalues account for most of the variance of the data set.
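The small-matrix trick above can be sketched in NumPy as follows. This is a minimal illustration with synthetic data, not the authors' implementation; all variable names and sizes are assumed for the example.

```python
import numpy as np

# Hypothetical training set: n flattened face images of N x N pixels each
# (random data here, purely to illustrate the shapes involved).
rng = np.random.default_rng(0)
n, N2 = 10, 64 * 64            # 10 images, 64x64 pixels flattened
X = rng.random((N2, n))        # each column X[:, i] is one image

mu = X.mean(axis=1, keepdims=True)   # mean image (eq. 1)
F = X - mu                           # mean-centered columns K_i (eqs. 2-3)

# Eigenvectors J_i of the small n x n matrix F^T F (eq. 4) ...
vals, J = np.linalg.eigh(F.T @ F)
# ... yield eigenvectors U_i = F J_i of the large N^2 x N^2 covariance
# matrix F F^T (eq. 6); these columns are the eigenfaces.
U = F @ J
U /= np.linalg.norm(U, axis=0)       # normalize each eigenface

# Order the eigenfaces by decreasing eigenvalue (most variance first).
order = np.argsort(vals)[::-1]
U = U[:, order]
```

The key point is that `eigh` runs on a 10 x 10 matrix rather than a 4096 x 4096 one, while the resulting columns of `U` still satisfy eq. (6).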
Each face image is now projected onto this face space using

\Omega_r = U^T (X_r - \mu),  r = 1, 2, \ldots, n    (7)

where U is the matrix whose columns are the selected eigenfaces and (X_r - \mu) is the mean-centered image; the above equation therefore gives the projection of each training image.
A. RECOGNITION
During the recognition process the test image X is projected onto the face space to obtain the vector

\Omega = U^T (X - \mu)    (8)
The Euclidean distance classifier is used to calculate the distance between the projected test image and each of the projected training images:

\epsilon_r = \| \Omega - \Omega_r \|^2,  r = 1, 2, \ldots, n    (9)

A threshold \theta is set up to classify the face:

\theta = \max_{t,r} \| \Omega_t - \Omega_r \|,  t, r = 1, 2, \ldots, n    (10)
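The projection and nearest-neighbour steps of eqs. (7)-(9) can be sketched as below. This is an illustrative example with a stand-in orthonormal basis in place of real eigenfaces; the data and dimensions are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
N2, n, f = 64 * 64, 10, 6
# Stand-in orthonormal "eigenface" basis (columns) for the sketch.
U = np.linalg.qr(rng.random((N2, f)))[0]
X_train = rng.random((N2, n))
mu = X_train.mean(axis=1, keepdims=True)

# Project every training image onto the face space (eq. 7).
Omega_train = U.T @ (X_train - mu)          # shape (f, n)

def classify(x_test):
    """Nearest neighbour in face space by Euclidean distance (eqs. 8-9)."""
    omega = U.T @ (x_test - mu.ravel())     # eq. 8
    dists = np.linalg.norm(Omega_train - omega[:, None], axis=0)
    return int(np.argmin(dists))            # index of closest training image

# A training image should be closest to its own projection.
print(classify(X_train[:, 3]))              # -> 3
```

In a full system the minimum distance would additionally be compared against the threshold of eq. (10) to reject non-matching faces.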
III. LDA
LDA [8][9] is a supervised learning method, which
utilizes the category information associated with each
sample. The goal of LDA is to maximize the between-
class scatter while minimizing the within-class scatter.
Mathematically speaking, the within-class scatter matrix S_w and the between-class scatter matrix S_b are defined as

S_w = \sum_{j=1}^{c} \sum_{x_i \in C_j} (x_i - \mu_j)(x_i - \mu_j)^T    (11)

S_b = \sum_{j=1}^{c} N_j (\mu_j - \mu)(\mu_j - \mu)^T    (12)

where \mu_j is the mean of class j, \mu is the mean image of all classes, c is the number of classes, and N_j is the number of samples of class j.
One way to select W_lda is to maximize the ratio det|S_b| / det|S_w|. If S_w is a nonsingular matrix, this ratio is maximized when the transformation matrix W_lda consists of the g generalized eigenvectors corresponding to the g largest eigenvalues of S_w^{-1} S_b [1][6][8][9]. Note that there are at most c-1 nonzero generalized eigenvalues, so an upper bound on g is c-1. In this paper we consider seven kinds of facial expressions, so the dimension of LDA is at most 6.
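The scatter matrices of eqs. (11)-(12) and the eigenvector selection can be sketched as follows, using small synthetic feature vectors; the class counts and dimensions are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
c, per_class, f = 3, 20, 4                  # 3 classes, 20 samples each, 4-D features
X = [rng.normal(loc=j, size=(per_class, f)) for j in range(c)]

mu_all = np.mean(np.vstack(X), axis=0)      # overall mean
Sw = np.zeros((f, f))                       # within-class scatter (eq. 11)
Sb = np.zeros((f, f))                       # between-class scatter (eq. 12)
for Xj in X:
    mu_j = Xj.mean(axis=0)
    D = Xj - mu_j
    Sw += D.T @ D
    d = (mu_j - mu_all)[:, None]
    Sb += len(Xj) * (d @ d.T)               # N_j weighted outer product

# Generalized eigenvectors of Sw^{-1} Sb; at most c-1 = 2 nonzero eigenvalues.
vals, vecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
order = np.argsort(vals.real)[::-1]
W_lda = vecs.real[:, order[: c - 1]]        # keep the top c-1 directions
```

Because S_b is built from only c class means around the overall mean, its rank, and hence the number of usable discriminant directions, is at most c-1, matching the bound stated above.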
IV. PCA+LDA
In order to guarantee that S_w does not become singular, at least t + c samples are required. In practice it is difficult to obtain so many samples when the feature dimension is very high. To solve this problem, a two-phase PCA-plus-LDA framework was proposed in [1][6][8][18]. PCA first maps the original t-dimensional feature x_i to an f-dimensional feature y_i in an intermediate space, and LDA then projects the PCA output to a new g-dimensional feature vector z_i. More formally, it is given by

z_i = W_{lda}^T W_{pca}^T x_i,  i = 1, 2, \ldots, N    (13)
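The two-phase mapping of eq. (13) can be sketched end to end as below. This is a minimal illustration on random data; the dimensions t, f, and the label layout are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
t, f, c = 20, 10, 7                  # raw dim t, PCA dim f, c = 7 expressions
N = 70                               # hypothetical number of samples
X = rng.random((N, t))
y = np.repeat(np.arange(c), N // c)  # hypothetical expression labels

# Phase 1: PCA maps the t-dimensional x_i to an f-dimensional y_i.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
W_pca = Vt[:f].T                     # (t, f) projection matrix
Y = (X - mu) @ W_pca

# Phase 2: LDA on the PCA output, keeping g = c - 1 = 6 directions.
Sw = np.zeros((f, f))
Sb = np.zeros((f, f))
mu_all = Y.mean(axis=0)
for j in range(c):
    Yj = Y[y == j]
    mu_j = Yj.mean(axis=0)
    Sw += (Yj - mu_j).T @ (Yj - mu_j)        # within-class scatter
    d = (mu_j - mu_all)[:, None]
    Sb += len(Yj) * (d @ d.T)                # between-class scatter
vals, vecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
W_lda = vecs.real[:, np.argsort(vals.real)[::-1][: c - 1]]

# Combined two-phase mapping of eq. (13): z_i = W_lda^T W_pca^T (x_i - mu).
Z = Y @ W_lda                        # shape (N, c - 1)
```

The PCA phase reduces the dimension to f so that S_w, now an f x f matrix estimated from N samples, is far less likely to be singular when LDA runs in the second phase.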
To compare the performance of PCA with PCA+LDA, recognition results using the PCA feature and the PCA+LDA feature respectively will be reported in Section V.
V. EXPERIMENTS AND RESULTS
Fig. 1 Output of various facial expressions (Disgust, Neutral, Happy, Sad, Anger)
Database
(1) Number of individuals: 1
(2) Total number of training images: 50
(3) Total number of testing images: 40
(4) Very little variation in image lighting
TABLE I. Recognition Rates of Various Facial Expressions

Facial Expression    Recognition Rate using PCA (%)
Happy                75
Disgust              87.5
Neutral              87.5
Sad                  87.5
Anger                75
3
Fig. 2 Bar graph showing recognition rates of various facial expressions
TABLE II. System Performance Results for Testing 40 Images Using the PCA Method

Target                Happy [8]  Disgust [8]  Neutral [8]  Sad [8]  Anger [8]  Average
Happy                 6          2            0            0        1
Disgust               2          7            0            1        0
Neutral               0          0            7            0        0
Sad                   0          0            1            7        1
Anger                 0          0            0            0        6
Recognition Rate (%)  75         87.5         87.5         87.5     75         82.5
VI. CONCLUSION
In this paper a facial expression recognition system has been implemented. The experiments show that a recognition rate of 82.5% has been obtained using Principal Component Analysis and Linear Discriminant Analysis.
VII. REFERENCES
[1] Bartlett, M.S., Donato, G., Ekman, P., Hager, J.C., Sejnowski, T.J., 1999, "Classifying Facial Actions", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 21, No. 10, pp. 974-989.
[2] Yang, J., Zhang, D., 2004, "Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 26, No. 1, pp. 131-137.
[3] Yi, J., Qiuqi, R., et al., 2008, "Gabor-Based Orthogonal Locality Sensitive Discriminant Analysis for Face Recognition", Proc. 9th International Conference on Signal Processing (ICSP 2008).
[4] Rajapakse, M., Tan, J., Rajapakse, J., 2004, "Color Channel Encoding With NMF for Face Recognition", Proc. International Conference on Image Processing (ICIP), pp. 2007-2010.
[5] Cohn, J.F., Kanade, T., Lien, J.J., 1998, "Automated Facial Expression Recognition Based on FACS Action Units", Proc. Third IEEE Int. Conf. Automatic Face and Gesture Recognition, pp. 390-395.
[6] Turk, M. and Pentland, A., 1991, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, Vol. 3, No. 1.
[7] Ching-Chih, T., You-Zhu, C., et al., 2009, "Interactive Emotion Recognition Using Support Vector Machine for Human-Robot Interaction", Proc. IEEE International Conference on Systems, Man and Cybernetics (SMC 2009).
[8] Pantic, M. and Rothkrantz, L., 2000, "Automatic Analysis of Facial Expressions: The State of the Art", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 22, No. 12, pp. 1424-1445.
[9] Jain, A.K., Duin, R.P.W., Mao, J., 2000, "Statistical Pattern Recognition: A Review", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 22, No. 1, pp. 4-37.
[10] Pantic, M. and Rothkrantz, L., 2000, "Automatic Analysis of Facial Expressions: The State of the Art", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 22, No. 12, pp. 1424-1445.