Proceedings of The Fourth International Conference on Informatics & Applications, Takamatsu, Japan, 2015

Improve the Recognition Rate of Facial Expressions by Normalized Facial Features of Different Personal Face and Photo Sizes

Shih-Lin Wu and Jun-Wei Hsu
Computer Science and Information Engineering, Chang Gung University, Kwei-Shan 333, Taiwan (ROC)
Email: slwu@mail.cgu.edu.tw

Abstract

With the advancement of camera techniques, the recognition of human facial expressions has become an important research topic in recent years. Most face recognition systems try to locate facial features such as the eyes and mouth automatically, but the results are inefficient because the angle and size of the captured face differ from those in the facial database. In this paper, we propose a real-time facial expression recognition system which possesses two functions. First, the system extracts facial features from a human face and rotates the face to the right angle if it is not posed at the right angle. Second, to solve the problem of different face sizes caused by the near-far distance to the camera, the system employs a normalization method based on the fixed length of some features on user faces. Experimental results show that the expression recognition system can identify facial expressions with a high precision rate.

Keywords: facial expression recognition, feature extraction, SVM.

I. Introduction

Human facial expressions carry important information for interpersonal communication. In most cases, facial expressions reveal the emotions of individuals, such as happiness, sadness, surprise, anger, and others. The automatic recognition of facial expressions finds applications in many interesting areas [9-11], such as human-robot interaction, human-computer interaction, the effect analysis of commercial advertisements, the effect analysis of on-line questionnaires, automobile safety, behavioral science analysis, education, animation, psychiatry, and so on.

Most face recognition systems try to locate facial features such as the eyes, eyebrows, nose, and mouth automatically, but the results are inefficient because the angle and size values of a face taken by a camera differ from the values in the pre-constructed database. In this paper, we develop a real-time system to recognize facial expressions which possesses two functions. First, the system extracts facial features from a human face and rotates the face to the right angle if it is not posed at the right angle. Second, to solve the problem of different face sizes caused by the near-far distance to the camera, the system employs a normalization method based on the fixed length of some features on user faces.

The remainder of this paper is organized as follows. Related works are introduced in Section 2. In Section 3, we present the system model and the proposed facial expression recognition system. Several experiments are presented in Section 4. Finally, conclusions are given in Section 5.

ISBN: 978-1-941968-16-1 2015 SDIWC

II. Related work

The research area of facial expression recognition has been developed for many years. To achieve higher recognition rates, many scholars have addressed this problem and published several papers. Viola et al. [1] proposed a method, called AdaBoost, to find the face in a picture quickly. When an image is processed, the integral image allows rectangle features to be computed at any position very quickly. The RGB values of a color image are converted into gray-scale values with the help of the following equation,

Gray = 0.299 R + 0.587 G + 0.114 B,    (1)

where Gray, R, G, and B are the scale values of gray, red, green, and blue, respectively.

The value at any point (x, y) in the summed area table is just the sum of all the pixels above and to the left of (x, y); the equation is as follows:

I(x, y) = Σ_{x'≤x, y'≤y} i(x', y'),    (2)

where i(x', y') is the pixel value at (x', y').

Fig. 1 (a) The value at point (x, y) is the sum of all the pixels within the gray rectangle. (b) The sum of the pixel values within the rectangle D can be calculated by p1+p4-p2-p3.

In reference [2], Haar-like features focus on the difference of the sums of pixels within certain rectangle areas, which can represent any position and scale within the original image. The AdaBoost method is an iterative algorithm, where new weak classifiers are added in each round until a small enough predetermined error rate is reached. Each training sample is assigned a weight, indicating the probability that it is selected into a classifier's training set. Fig. 2 shows Haar-like features selected by the AdaBoost algorithm. A cascade classifier, as shown in Fig. 3, is composed of a combination of many weak classifiers in the AdaBoost algorithm. The SVM (support vector machine) training algorithm [7] builds a model that assigns new examples to one class or the other, making it a non-probabilistic binary linear classifier.

Fig. 2 (a) Example rectangle features shown relative to the enclosing detection window. (b) The first and second features selected by AdaBoost.

Fig. 3 A cascade classifier is a particular case of ensemble learning based on the concatenation of several classifiers.

III. Facial feature extraction

In general, the rotation of coordinates uses the origin as the center of rotation. Assume that a point P1 is rotated counterclockwise by an angle B to the point P2, we want to calculate the coordinates of P2, as shown in Fig. 4, and d is the distance from the origin to P1 (and hence also to P2). The X coordinate of P2 is

x2 = d cos(A + B)
   = d [cos(A) cos(B) - sin(A) sin(B)]
   = x1 cos(B) - y1 sin(B),    (3)

where A is the angle between the x-axis and the segment from the origin to P1, so that x1 = d cos(A) and y1 = d sin(A). The Y coordinate of P2 is

y2 = d sin(A + B)
   = d [sin(A) cos(B) + cos(A) sin(B)]
   = y1 cos(B) + x1 sin(B).    (4)

Fig. 4 Schematic depiction of coordinate rotation.

In this paper, we use an approach implemented by T. F. Cootes et al. [3-4], which defines 68 points on the face that are considered to be facial landmarks, as shown in Fig. 5. According to the facial expressions in the face region, the feature values with larger differences distinguish the accurate features of blocks such as the eyebrows, eyes, nose, and mouth. Our features focus on the values of different facial contours and the distances between them. To solve the problem of different face sizes caused by the near-far distance to the camera, we use a method, called Base Line, to calculate the ratios of facial landmarks, as pictured in Fig. 6. We use the well-known SVM algorithm to construct our system model for recognizing facial expressions. The 18 features listed in Fig. 7 are used as input to train the system model.

Fig. 5 Schematic diagram of the 68 facial landmarks.

Fig. 6 The red line is our Base Line.

IV. Experiment results

Apart from using the CK+ database as our experimental data, we collect our own photos as a database. We use the CK+ database [5-6] and the self-built database to train the SVM model with an SVM classifier. We used the WEKA software [8] for training the SVM model.

After the training phase, the SVM model is validated by k-fold cross-validation. In k-fold cross-validation, the original sample set is randomly partitioned into k subsets with equal numbers of samples; the first k-1 subsets are used as training data to construct an SVM model, and the remaining one is the validation data for testing the model.

The system is then ready to capture facial expressions and classify them accordingly. Our experiment calculates the recognition rate and records the result; the system performs well, classifying the facial expressions with good accuracy.
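The validation procedure described above can be sketched as follows. This is a minimal illustration of standard rotating k-fold partitioning, not the authors' implementation; the sample data (random 18-dimensional feature vectors with one of six expression labels) are hypothetical placeholders.

```python
import random

def k_fold_splits(samples, k, seed=0):
    """Randomly partition `samples` into k equal-sized folds and yield
    (training set, validation set) pairs, one pair per fold."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    fold_size = len(shuffled) // k
    folds = [shuffled[i * fold_size:(i + 1) * fold_size] for i in range(k)]
    for i in range(k):
        validation = folds[i]
        training = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield training, validation

# Hypothetical data: 100 samples, each an (18-feature vector, label) pair.
data = [([random.random() for _ in range(18)], i % 6) for i in range(100)]
for train, val in k_fold_splits(data, k=10):
    # Here one would train the SVM on `train` and evaluate it on `val`.
    assert len(train) == 90 and len(val) == 10
```

In a full pipeline each iteration would train one SVM model on the k-1 training folds and accumulate its accuracy on the held-out fold.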

Table 1 Results of basic expression recognition

              happy    sad      surprise  angry    fear     disgusting
happy         94.5%    0%       0%        0%       17.1%    0%
sad           2.7%     62.1%    0%        18.9%    11.4%    23.5%
surprise      0%       0%       86.4%     0%       11.4%    0%
angry         0%       35.1%    0%        72.9%    5.4%     38.2%
fear          2.7%     2.7%     13.5%     8.1%     42.8%    8.8%
disgusting    0%       0%       0%        0%       11.4%    29.4%
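Each column of Table 1 sums to roughly 100%, so the diagonal entry of a column is the recognition rate for that expression. A small sketch, with the table values transcribed by hand, illustrates how to read the matrix:

```python
# Confusion matrix from Table 1; diagonal entries are per-expression rates.
labels = ["happy", "sad", "surprise", "angry", "fear", "disgusting"]
matrix = [
    [94.5,  0.0,  0.0,  0.0, 17.1,  0.0],
    [ 2.7, 62.1,  0.0, 18.9, 11.4, 23.5],
    [ 0.0,  0.0, 86.4,  0.0, 11.4,  0.0],
    [ 0.0, 35.1,  0.0, 72.9,  5.4, 38.2],
    [ 2.7,  2.7, 13.5,  8.1, 42.8,  8.8],
    [ 0.0,  0.0,  0.0,  0.0, 11.4, 29.4],
]

# Recognition rate per expression = diagonal entry of its column.
rates = {labels[i]: matrix[i][i] for i in range(len(labels))}
print(rates["happy"])  # 94.5

# Each column should account for (almost) all of its test samples.
for col in range(len(labels)):
    total = sum(row[col] for row in matrix)
    assert 99.0 <= total <= 100.5
```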

Tab. 1 shows our basic expression recognition results. In Tab. 2 we divide the results into two parts: the Base Line identification rate and the Non Base Line identification rate. The Base Line method increases the recognition rate and compensates for the different face sizes caused by the near-far distance to the camera. The Improvement Ratio is the percentage by which the Base Line improves accuracy, calculated as in Eq. 5:

Improvement Ratio = (Base Line rate - Non Base Line rate) / Non Base Line rate x 100%.    (5)

Table 2 Base Line facial expression recognition rates

Recognition rate    happy   sad    surprise  angry   Rate
Base Line           97%     67%    97%       81%     85.5%
Non Base Line       97%     54%    89%       75%     78.75%
Improvement ratio   0%      24%    8.9%      8%      8.5%

Fig. 7 The 18 features used by our system:
1. Hleb: height of left eyebrow
2. Wleb: width of left eyebrow
3. Hreb: height of right eyebrow
4. Wreb: width of right eyebrow
5. Hle: height of left eye
6. Wle: width of left eye
7. Hre: height of right eye
8. Wre: width of right eye
9. Hn: height of nose
10. Wn: width of nose
11. Hm: height of mouth
12. Wm: width of mouth
13. Dleb: distance between left eyebrow and left eye
14. Dreb: distance between right eyebrow and right eye
15. Dnm: distance between center of nose and mouth
16. Etoe: distance between eye and eye
17. Ebtoeb: distance between left eyebrow and right eyebrow
18. Hm2: half height of mouth
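The Base Line normalization can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes the Base Line is a fixed reference distance on the face (the red line in Fig. 6; the eye-to-eye distance Etoe is used here as one plausible choice) and that each feature length is divided by that distance, so the resulting ratios are invariant to camera distance. All landmark coordinates below are hypothetical.

```python
import math

def dist(p, q):
    """Euclidean distance between two landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def normalize_features(raw_features, baseline_length):
    """Divide every raw feature length by the Base Line length so that
    faces photographed nearer or farther yield the same ratios."""
    return [f / baseline_length for f in raw_features]

# Hypothetical landmarks: a near shot and the same face at twice the distance.
near_eye_l, near_eye_r = (120.0, 200.0), (220.0, 200.0)   # baseline = 100 px
far_eye_l, far_eye_r = (60.0, 100.0), (110.0, 100.0)      # baseline = 50 px

near_raw = [40.0, 25.0, 30.0]   # e.g. Hle, Wle, Hm in pixels (near shot)
far_raw = [20.0, 12.5, 15.0]    # same features at half the size (far shot)

near_norm = normalize_features(near_raw, dist(near_eye_l, near_eye_r))
far_norm = normalize_features(far_raw, dist(far_eye_l, far_eye_r))
assert near_norm == far_norm  # ratios are independent of camera distance
```

The normalized ratios, rather than raw pixel lengths, would then form the 18-dimensional input vector to the SVM.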

V. Conclusions

In this paper, we propose a facial expression recognition system which can identify, in real time, facial expressions captured by a camera. We extract 18 feature values from the recognized face and use SVM to train a system model that can find the expressions in captured facial photos. The experimental results show that the facial expressions of users can be recognized with a very high precision rate. We will integrate the system with our previously developed vehicle systems [12-13] for automobile safety in the near future.

References

[1] P. Viola and M. J. Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 511-518, December 2001.
[2] R. Lienhart and J. Maydt, "An Extended Set of Haar-like Features for Rapid Object Detection," in Proceedings of the IEEE International Conference on Image Processing, vol. 1, pp. 900-903, September 2002.
[3] G. Edwards, C. J. Taylor, and T. F. Cootes, "Interpreting face images using active appearance models," in Third International Conference on Automatic Face and Gesture Recognition, pp. 300-305, April 1998.
[4] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, "Active shape models - their training and application," Computer Vision and Image Understanding, vol. 61, pp. 38-59, 1995.
[5] T. Kanade, J. Cohn, and Y. Tian, "Comprehensive database for facial expression analysis," in IEEE International Conference on Automatic Face & Gesture Recognition, pp. 46-53, March 2000.
[6] P. Ekman and W. Friesen, Facial Action Coding System: A Technique for the Measurement of Facial Movement, Consulting Psychologists Press, Palo Alto, 1978.
[7] E. Osuna, R. Freund, and F. Girosi, "Training Support Vector Machines: An Application to Face Detection," in Computer Vision and Pattern Recognition, pp. 130-136, June 1997.
[8] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, "The WEKA Data Mining Software: An Update," SIGKDD Explorations, vol. 11, no. 1, 2009.
[9] F. Cid, J. A. Prado, P. Bustos, and P. Núñez, "A real time and robust facial expression recognition and imitation approach for affective human-robot interaction using Gabor filtering," in IEEE/RSJ Intelligent Robots and Systems (IROS), November 2013.
[10] E. B. McClure, K. Pope, A. J. Hoberman, D. S. Pine, and E. Leibenluft, "Facial expression recognition in adolescents with mood and anxiety disorders," American Journal of Psychiatry, vol. 160, pp. 1172-1174, 2003.
[11] K. Bahreini, R. Nadolski, and W. Westera, "Towards multimodal emotion recognition in e-learning environments," Interactive Learning Environments, pp. 1-16, 2014.
[12] C. Liu, J. Yang, and S.-L. Wu, "Intelligent Information Dissemination Scheme for Urban Vehicular Ad Hoc Networks," Mathematical Problems in Engineering, vol. 2015, 2015.
[13] P. K. Sahoo, M.-J. Chiang, and S.-L. Wu, "SVANET: A Smart Vehicular Ad Hoc Network for Efficient Data Transmission with Wireless Sensors," Sensors, vol. 14, no. 12, pp. 22230-22260, 2014.