
World Applied Programming, Vol (2), Issue (1), January 2012, pp. 55-59
Special section for proceedings of the International e-Conference on Computer Engineering (IeCCE) 2012

ISSN: 2222-2510
2011 WAP journal. www.waprogramming.com

Improved PCA Algorithm for Face Recognition


Vinay Rishiwal 1 and Ashutosh Gupta 2

CSIT Department, MJP Rohilkhand University, Bareilly, India
1 vinayrishiwal@gmail.com
2 ashutosh333@rediffmail.com

Abstract: As society becomes more and more electronically connected, the capability to automatically
establish the identity of individuals using the face as a biometric has become important. Many applications,
such as identity verification, criminal face recognition, and surveillance, require robust and accurate face
recognition technology. Face recognition has become a very challenging problem in the presence of clutter and
variability of the background, noise and occlusion, and, finally, speed requirements. This paper focuses on
developing a face recognition system using an extended PCA algorithm. The proposed algorithm uses the
concept of PCA and represents an improved version of PCA to deal with the problems of orientation and
lighting conditions present in the original PCA. The preprocessing phase of the proposed algorithm
emphasizes the efficiency of the algorithm even when the number of images per person is small or the
orientation is very different.

Keywords: Face recognition, Principal Component Analysis
I. INTRODUCTION AND RELATED WORK

Face recognition has been a challenging and quite interesting problem in the field of pattern recognition for a
very long time. Beginning with Bledsoe's [12] and Kanade's [13] early systems, a number of automated or
semi-automated face recognition strategies have modeled and classified faces based on normalized distances and
ratios among feature points. More recently, this general approach has been continued and improved by the work
of Yuille et al. [14].
Face recognition has received significant attention in the past decades due to its potential applications in
biometrics, information security, law enforcement, etc. To date, numerous methods have been proposed to
address this problem [1]. Among them, principal component analysis (PCA) has turned out to be very effective [2].
Recently, a method closely related to PCA, independent component analysis (ICA) [3], has also been applied to
face recognition. ICA can be viewed as a generalization of PCA, since it captures not only second-order
dependencies but also higher-order dependencies between variables. Previous researchers [4, 5], however, all
used the standard PCA as the baseline algorithm for evaluating ICA-based face recognition systems.
The initial success of eigenfaces popularized the idea of matching images in compressed subspaces.
Researchers began to search for other subspaces that might improve performance. One alternative is Fisher's
linear discriminant analysis (LDA, a.k.a. "Fisherfaces") [6]. For any N-class classification problem, the goal of
LDA is to find the N-1 basis vectors that maximize the interclass distances while minimizing the intraclass
distances. At one level, PCA and LDA are very different: LDA is a supervised learning technique that relies on
class labels, whereas PCA is an unsupervised technique. Nonetheless, in circumstances where class labels are
available, either technique can be used, and LDA has been compared to PCA in several studies [7].
Principal Component Analysis is a standard technique used to approximate the original data with
lower-dimensional feature vectors [8]. The basic approach is to compute the eigenvectors of the covariance matrix and
approximate the original data by a linear combination of the leading eigenvectors. The mean square error (MSE)
in reconstruction is equal to the sum of the remaining eigenvalues. The feature vector here is the vector of PCA
projection coefficients. PCA is appropriate when the samples are from one class or group (super class). In practical
implementations, there are two ways to compute the eigenvalues and eigenvectors: SVD decomposition and
regular eigendecomposition. For efficient ways to compute or update the SVD, refer to [9, 10]. In many
cases, even when the matrix is of full rank, a large condition number will create numerical problems.
The distance measure used in the matching can be a simple Euclidean or a weighted Euclidean distance. It has
been suggested that the weighted Euclidean distance gives better classification than the simple Euclidean distance [11].
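The weighting scheme is not spelled out here; a common choice, used below only as an illustrative assumption, is to scale each axis of the projection space by the inverse of its eigenvalue, so that low-variance directions are not drowned out by high-variance ones. A minimal sketch in NumPy (the paper's own experiments used MATLAB):

```python
import numpy as np

def euclidean(a, b):
    # Simple Euclidean distance between two projection-coefficient vectors.
    return np.sqrt(np.sum((a - b) ** 2))

def weighted_euclidean(a, b, eigvals):
    # Weighted Euclidean distance: each squared difference is divided by
    # the corresponding eigenvalue (an assumed, Mahalanobis-like weighting).
    return np.sqrt(np.sum((a - b) ** 2 / eigvals))

# Illustrative projection coefficients and leading eigenvalues.
a = np.array([2.0, 1.0, 0.5])
b = np.array([1.0, 3.0, 0.0])
lam = np.array([4.0, 2.0, 1.0])

print(euclidean(a, b))
print(weighted_euclidean(a, b, lam))
```

With this weighting, a difference along a high-variance axis (large eigenvalue) contributes less to the distance than the same difference along a low-variance axis.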
Moreover this technique can also be applied for the purpose of Facial Expression Analysis. Most approaches
to automatic facial expression analysis attempt to recognize a small set of prototypic emotional facial


expressions, i.e., fear, sadness, disgust, anger, surprise, and happiness (e.g., [15, 16, 19, 20]). This practice
follows from the work of Darwin [17], and more recently Ekman [18], who suggested that basic emotions have
corresponding prototypic expressions.
The paper is organized as follows. Section 2 describes the improved PCA algorithm for face recognition,
followed by results and discussion in Section 3. Finally, conclusions and future work are given in Section 4.
II. IMPROVED PRINCIPAL COMPONENT ANALYSIS (IPCA)

Principal Component Analysis (PCA) is a way of identifying patterns in data, and expressing the data in such
a way as to highlight their similarities and differences. In PCA, the variance of each image from the mean image
is determined. This variance is a measure of variability in the face space. To apply PCA to face recognition,
Eigenfaces are calculated and the weight of these Eigenfaces is used to find the contribution of training images
to the input image. In the proposed work, certain changes to the original PCA algorithm are made. The
preprocessing of the training images has been done to remove the background, lighting conditions and the
orientation factors. Also, some normalization steps have been included to remove calculation-induced errors.
Our proposed algorithm is an improvement of the PCA algorithm. The original PCA algorithm did not work well
when the orientation of the images was very large, i.e. around 90 degrees. But our proposed algorithm worked
quite well even in those cases in which PCA failed. The main steps of Improved PCA algorithm are:
1. The faces in the training set are preprocessed by taking the coordinates of both eyes and the mouth and
then cropping and aligning based on these distances.
2. From the preprocessed training set, compute the Eigenfaces and then retain the best Eigenfaces
corresponding to the highest eigenvalues.
3. Project the face database onto these Eigenfaces to find the contribution of each training image.
4. Take an input image to be identified and apply the same preprocessing steps.
5. Find the weight pattern of the Eigenfaces by projecting the input image onto the Eigenfaces.
6. Reconstruct the input image from the weighted Eigenfaces.
7. Determine whether it is a face at all and, if so, whether it is known or unknown.
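Steps 2 through 7 can be sketched in NumPy (the paper's experiments used MATLAB). The data below is synthetic, and the choices of image size, k = 5 retained Eigenfaces, and the "snapshot" eigendecomposition trick are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

# Synthetic stand-in for the preprocessed, cropped training images.
rng = np.random.default_rng(0)
n_images, n_pixels = 10, 64           # 10 training faces, 8x8 pixels each
X = rng.random((n_images, n_pixels))  # one flattened image per row

# Step 2: Eigenfaces = leading eigenvectors of the covariance matrix.
mean_face = X.mean(axis=0)
A = X - mean_face                     # mean-centred images
# "Snapshot" trick: eigenvectors of the small n x n matrix A A^T give
# the Eigenfaces after multiplication by A^T, avoiding the huge
# pixel-by-pixel covariance matrix.
eigvals, V = np.linalg.eigh(A @ A.T)
order = np.argsort(eigvals)[::-1]     # sort by decreasing eigenvalue
k = 5                                 # keep the k best Eigenfaces
U = A.T @ V[:, order[:k]]
U /= np.linalg.norm(U, axis=0)        # normalise each Eigenface column

# Step 3: contribution (weight pattern) of each training image.
train_weights = A @ U                 # shape (n_images, k)

# Steps 4-6: project a (preprocessed) input image and reconstruct it.
input_image = X[3]
w = (input_image - mean_face) @ U     # step 5: weight pattern
reconstruction = mean_face + w @ U.T  # step 6: weighted Eigenfaces

# Step 7 (sketch): a small distance to face space suggests the input
# is a face; comparing w to train_weights then decides known/unknown.
distance_to_face_space = np.linalg.norm(input_image - reconstruction)
print(distance_to_face_space)
```

The reconstruction error here is exactly the part of the input not captured by the k retained Eigenfaces, which is why the MSE equals the sum of the discarded eigenvalues on average over the training set.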
III. RESULTS AND DISCUSSION

The experiments were conducted in MATLAB 7.0 with the following parameters. A total of 110 images were
taken in the training set: 11 images per person and 10 persons in total. These 11 images covered the
basic 11 directions, including the 90-degree orientation. The images were taken from the FERET database,
a standard database for testing face recognition algorithms. These images were then preprocessed
and stored separately. The input image was also preprocessed to remove noise. The resolution
of the original images was 256×384. After preprocessing, the resolution became 241×291, since only the face part
of the image was retained. The two test cases were classified as follows:
1. As the first test case, we took the image of a person who was in the database, but the input image itself was
not included in the database. It had a different orientation, which we took as 90 degrees, to test the
robustness and efficiency of the algorithm.
2. As the second test case, we took an image of a person that was exactly present in the database.

Figure 1(b) shows the preprocessed version of the input image shown in Figure 1(a).

Fig. 1. (a) Input image; (b) preprocessed image.

Experiment 1: A different image taken as input


As explained above, we took the image of a person who was in the database, but the input image itself was not
included in the database. It had a different orientation, which we took as 90 degrees, to test the robustness
and efficiency of the algorithm.
Our algorithm then reconstructs the input image by considering the weights of the Eigenfaces and the
contribution of each face. The reconstructed image is shown in the right part of Figure 2.

Fig. 2. (a) Input image (left); (b) reconstructed image (right).

It is evident from Figure 2 that the reconstructed image resembles the preprocessed input image, which improves
the recognition efficiency.
The plot of the weight of the input image against the face space, and the Euclidean distance of the input image
from all of the face space images, is shown in Figure 3. It is clear that the Euclidean distance of image "103.jpg"
is the lowest and below the threshold value. Also, the Euclidean distances of the input image from the face class
(all the images of a single person) are comparable.
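The matching rule described here, choosing the face-space image with the lowest Euclidean distance and accepting it only when that distance falls below a threshold, can be sketched as follows. The weight vectors, file names, and threshold value are illustrative, not the paper's actual data:

```python
import numpy as np

def match_face(input_weights, db_weights, db_names, threshold):
    # Euclidean distance from the input weight vector to every
    # face-space image; the minimum decides the candidate match.
    dists = np.linalg.norm(db_weights - input_weights, axis=1)
    best = int(np.argmin(dists))
    if dists[best] < threshold:
        return db_names[best], dists[best]  # known face
    return None, dists[best]                # unknown face

# Illustrative 2-D weight vectors for three database images.
db_weights = np.array([[0.9, 0.2], [0.1, 0.8], [0.5, 0.5]])
db_names = ["101.jpg", "102.jpg", "103.jpg"]
input_weights = np.array([0.45, 0.55])

name, dist = match_face(input_weights, db_weights, db_names, threshold=0.3)
print(name, dist)
```

Here the input's weights lie closest to those of "103.jpg", and since that distance is below the threshold, the face is reported as known.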

Fig. 3. Plot of the weights of the input image against the face space and the Euclidean distances of the input image.

The face identified by the proposed algorithm is shown in Figure 4.

Fig. 4. Face identified from database.

Experiment 2: Input image present exactly in the database


As the second case, we took an image of a person that was exactly present in the database, as shown in
Figure 5(a). The image reconstructed by the algorithm is shown in Figure 5(b).


Fig. 5. (a) Input image (left); (b) reconstructed image (right).

Fig. 6. Plot of the weights of the input image against the face space and the Euclidean distances of the input image.

As the input image is exactly the same as one in the database, the reconstructed image is very accurate.
In Figure 6, we have plotted the Euclidean distance of the input image from all of the images
in the face space. As the image was present exactly, the Euclidean distance for the same image, i.e. "12.jpg", is
zero. This is also true theoretically.
The face identified by the proposed algorithm is shown in Figure 7.

Fig. 7. Face identified from database.

IV. CONCLUSIONS AND FUTURE WORK

This paper presents an algorithm to recognize faces present in the face database. The proposed algorithm uses
the concept of PCA and represents an improved version of PCA to deal with the problems of orientation and
lighting conditions present in the original PCA. In the case when the test image was in the database, the person
was identified correctly. Even when we took as input an image of a person present in the database with a
different orientation, the algorithm successfully identified the person. This shows that preprocessing greatly
enhances the efficiency of the algorithm even when we have a small number of images per person or the orientation
is greatly different.
This work is being extended to deal with a range of aspects (other than full frontal views) by defining a small
number of classes for each known person corresponding to characteristic views. Because of the speed of the
recognition, the system has many chances within a few seconds to attempt to recognize many slightly different
views, at least one of which is likely to fall close to one of the characteristic views. An intelligent system should
also have an ability to adapt over time. Reasoning about images in face space provides a means to learn and
subsequently recognize new faces in an unsupervised manner. When an image is sufficiently close to face space
(i.e., it is face-like) but is not classified as one of the familiar faces, it is initially labeled as "unknown". The
computer stores the pattern vector and the corresponding unknown image. If a collection of "unknown" pattern


vectors cluster in the pattern space, the presence of a new but unidentified face is postulated. The system can also
be developed for real-time recognition, including moving objects and changing illumination conditions.
REFERENCES
[1] W. Zhao, R. Chellappa, A. Rosenfeld, and P. Phillips, "Face recognition: A literature survey," Technical Report CAR-TR-948, UMD CS-TR-4167R, August 2002.
[2] M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[3] A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis. New York: Wiley, 2001.
[4] M. S. Bartlett and T. J. Sejnowski, "Independent components of face images: A representation for face recognition," Proc. 4th Annual Joint Symposium on Neural Computation, Pasadena, CA, May 17, 1997.
[5] J. Yang, D. Zhang, and J.-y. Yang, "Is ICA significantly better than PCA for face recognition?" Proc. Tenth IEEE International Conference on Computer Vision (ICCV'05), 2005.
[6] D. Swets and J. Weng, "Using discriminant eigenfeatures for image retrieval," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, pp. 831-836, 1996.
[7] P. Belhumeur, J. Hespanha, and D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 711-720, 1997.
[8] K. Fukunaga, Statistical Pattern Recognition. New York: Academic Press, 1989.
[9] M. Gu and S. C. Eisenstat, "A stable and fast algorithm for updating the singular value decomposition," Research Report YALE DCR/RR-996, Yale University, New Haven, CT, 1994.
[10] S. Chandrasekaran, B. S. Manjunath, Y. F. Wang, J. Winkeler, and H. Zhang, "An eigenspace update algorithm for image analysis," Journal of Graphical Models and Image Processing, 1997.
[11] K. Etemad and R. Chellappa, "Discriminant analysis for recognition of human face images," Journal of the Optical Society of America A, pp. 1724-1733, Aug. 1997.
[12] W. W. Bledsoe, "The model method in facial recognition," Panoramic Research Inc., Palo Alto, CA, Rep. PRI:15, Aug. 1966.
[13] T. Kanade, "Picture processing system by computer complex and recognition of human faces," Dept. of Information Science, Kyoto University, Nov. 1973.
[14] A. L. Yuille, D. S. Cohen, and P. W. Hallinan, "Feature extraction from faces using deformable templates," Proc. CVPR, San Diego, CA, June 1989.
[15] M. Black and Y. Yacoob, "Recognizing facial expressions in image sequences using local parameterized models of image motion," Int. J. Comput. Vis., vol. 25, no. 1, pp. 23-48, 1997.
[16] M. Pantic and L. J. M. Rothkrantz, "Expert system for automatic analysis of facial expression," Image Vis. Comput. J., vol. 18, no. 11, pp. 881-905, 2000.
[17] C. Darwin, The Expression of the Emotions in Man and Animals. Chicago, IL: Univ. of Chicago Press, 1872; reprinted 1965.
[18] M. Lewis and J. M. Haviland-Jones, Eds., Handbook of Emotions. New York: Guilford Press, 2000, pp. 236-249.
[19] P. Ekman and W. Friesen, Facial Action Coding System. Palo Alto, CA: Consulting Psychologists Press, 1978.
[20] K. R. Scherer and P. Ekman, Eds., Handbook of Methods in Nonverbal Behavior Research. Cambridge: Cambridge Univ. Press, 1982.
