Fatma Güney
Boğaziçi University
Computer Engineering
Istanbul, Turkey 34342
Email: guneyftm@gmail.com
I. INTRODUCTION
A person's face changes according to emotions or internal states. The face is a natural and powerful communication tool, and analyzing facial expressions through the movements of facial muscles enables a variety of applications. Facial expression recognition plays a significant role in human-computer interaction systems. Humans understand and interpret each other's facial changes and use this understanding to respond and communicate; a machine capable of such meaningful and responsive communication is one of the main goals of robotics. Many other areas also benefit from advances in facial expression analysis, such as psychiatry, psychology, educational software, video games, animation, lie detection, and other practical real-time applications.
The objective of this study is to survey the current methods and, building on the proposed solutions, to develop a real-time facial expression recognition system that recognizes the neutral expression and the six prototypic expressions: happiness, sadness, anger, surprise, disgust, and fear. The approach of [1] served as the basis for this study.
In this study, a facial emotion recognition system is developed. First, face and eye detection based on the modified census transform (MCT) is performed on the input image. The detected face image is then aligned using the eye coordinates, scaling and translating the face to reduce variance in the feature space. The aligned face image is divided into local blocks, and the discrete cosine transform (DCT) is applied to each block. Concatenating the features of all blocks yields an overall feature vector, which is scaled before classification. For classification, one-versus-all Support Vector Machine (SVM) classifiers are trained with cross validation.
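As a concrete illustration of this pipeline, the sketch below covers the alignment and block-DCT feature extraction steps. The geometry (64x80 face, eye row 35, inter-eye distance 32, 8x8 blocks, 10 zig-zag coefficients) follows the experimental setup reported below; the function names, the use of OpenCV and SciPy, and the exact zig-zag implementation are our own assumptions, not the study's code.

```python
import cv2
import numpy as np
from scipy.fftpack import dct

def align_face(gray, left_eye, right_eye,
               out_size=(64, 80), eye_row=35, eye_dist=32):
    """Rotate, scale, and translate the face so the eyes lie on row
    eye_row, eye_dist pixels apart, centered horizontally."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))   # eye-line angle
    scale = eye_dist / np.hypot(rx - lx, ry - ly)
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, scale)
    M[0, 2] += out_size[0] / 2.0 - center[0]   # move the eye midpoint
    M[1, 2] += eye_row - center[1]             # to (width/2, eye_row)
    return cv2.warpAffine(gray, M, out_size)

def zigzag_indices(n=8):
    """(row, col) pairs of an n x n block in JPEG zig-zag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def block_dct_features(face, block=8, n_coeffs=10):
    """Apply a 2-D DCT to each non-overlapping block and keep the
    first n_coeffs coefficients in zig-zag order, concatenated into
    one vector (8 x 10 blocks x 10 coeffs = 800 for a 64x80 face)."""
    h, w = face.shape
    zz = zigzag_indices(block)[:n_coeffs]
    feats = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            patch = face[r:r + block, c:c + block].astype(np.float64)
            coeffs = dct(dct(patch, axis=0, norm='ortho'),
                         axis=1, norm='ortho')
            feats.extend(coeffs[i, j] for i, j in zz)
    return np.asarray(feats)
```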
Fig. 1. The face and eyes are automatically detected using a modified census transform (MCT) based face and eye detector.
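The detector's internals are not described in this paper, but the transform it builds on is standard: every pixel of a 3x3 neighborhood (center included) is compared against the neighborhood mean, yielding a 9-bit structure index per pixel. A minimal sketch of that transform, following the usual formulation rather than the detector's actual implementation:

```python
import numpy as np

def modified_census_transform(gray):
    """9-bit MCT index per interior pixel: compare each pixel of the
    3x3 neighborhood (center included) against the neighborhood mean."""
    img = gray.astype(np.float64)
    h, w = img.shape
    # Nine shifted views covering the 3x3 neighborhood of each pixel.
    views = [img[dy:h - 2 + dy, dx:w - 2 + dx]
             for dy in range(3) for dx in range(3)]
    mean = sum(views) / 9.0
    mct = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for bit, view in enumerate(views):
        mct |= (view > mean).astype(np.uint16) << bit
    return mct   # values in [0, 511], robust to local gain and offset
```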
A. Dataset
Two recordings per subject for each emotion and the neutral expression are used for training and validation; the remaining sequences constitute the test set.
B. Experimental Setup
For emotion recognition, each detected face is scaled to 64x80 pixels and aligned so that the eye row is 35 and the distance between the eyes is 32 pixels [?], [4]. The DCT is performed on blocks of 8x8 pixels. For each block, the first 10 coefficients in the zig-zag scanning order are kept, leading to an 8 x 10 x 10 = 800-dimensional feature vector (8 x 10 blocks with 10 coefficients each). After scaling the overall feature vector, SVM parameter optimization is performed by grid search using five-fold cross validation. The grid search is performed with C = 2^k and γ = 2^l, with k = -3, -1, 1 and l = -16, -14, ..., 7.
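A minimal sketch of this optimization using scikit-learn (our library choice; the paper does not name its implementation). The exponent grids below are placeholders echoing the ranges quoted above, and the per-emotion one-versus-all setup mirrors the classifier description:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_one_vs_all(X, y, target_emotion):
    """Grid-search C = 2^k and gamma = 2^l with five-fold cross
    validation for one one-versus-all RBF SVM.  X holds the
    800-dimensional DCT feature vectors, y the emotion labels."""
    labels = (y == target_emotion).astype(int)      # target vs. rest
    param_grid = {
        'svc__C':     [2.0 ** k for k in (-3, -1, 1)],       # C = 2^k
        'svc__gamma': [2.0 ** l for l in range(-16, 8, 2)],  # gamma = 2^l
    }
    # Feature scaling is refitted inside every fold to avoid leakage.
    model = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
    search = GridSearchCV(model, param_grid, cv=5)
    search.fit(X, labels)
    return search.best_estimator_, search.best_params_
```

One such classifier is trained per emotion; at test time the standard one-versus-all rule (an assumption here) assigns the class whose classifier yields the largest decision value.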
C. Results
TABLE I
PARAMETERS OF SVM

Emotion      C      γ
Anger        2^10   2^1
Disgust      2^10   2^1
Fear         2^10   2^1
Happiness    2^12   2^1
Sadness      2^10   2^1
Surprise     2^10   2^1
Classification performance is measured by the error rate

    error = (FP + FN) / (TP + TN + FP + FN)    (1)

where TP, TN, FP, and FN denote the true positives, true negatives, false positives, and false negatives of the one-versus-all classifier.
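For example, with hypothetical counts for a single one-versus-all classifier (illustrative numbers only, not from the experiments):

```python
def error_rate(tp, tn, fp, fn):
    """Eq. (1): misclassified samples over all samples."""
    return (fp + fn) / (tp + tn + fp + fn)

# Illustrative counts: 17 TP, 75 TN, 4 FP, 4 FN out of 100 samples.
print(error_rate(17, 75, 4, 4))   # -> 0.08
```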
TABLE II
ERROR RATES OVER THE FIVE CROSS-VALIDATION FOLDS

Emotion     Fold-1  Fold-2  Fold-3  Fold-4  Fold-5  avg
Anger       0.19    0.27    0.12    0.26    0.13    0.19
Disgust     0.22    0.24    0.24    0.21    0.20    0.22
Fear        0.18    0.16    0.10    0.13    0.32    0.17
Happiness   0.17    0.08    0.10    0.06    0.12    0.10
Sadness     0.45    0.35    0.24    0.28    0.25    0.31
Surprise    0.33    0.10    0.23    0.10    0.08    0.16
TABLE III
CONFUSION MATRIX (diagonal entries)

Anger 13, Disgust 18, Fear 12, Happiness 17, Sadness 16, Surprise 15