
International Journal Of Image Processing and Visual Communication

Volume 1 , Issue 1 , August 2012




An Automatic Face Recognition Technique Based on Face Contours
and Face Profile Characterization
Gayathri Devi¹, Avinash Jha¹

¹ Fourth Dimension, Trivandrum


Abstract:
Face recognition is such a challenging yet
interesting problem that it has attracted
researchers with different backgrounds:
psychology, pattern recognition, neural
networks, computer vision, and computer
graphics.
In this paper, an improved algorithm for
automatic face recognition, based on the
characterization of faces by their contours
and profiles, is proposed. The main aim of
this paper is to offer some insights into the
studies of machine recognition of faces.
Even though current machine recognition
systems have reached a certain level of
maturity, their success is limited by the
conditions imposed by many real
applications. Experiments show that the
central vertical profile and the contour are
both very useful features for face
recognition. Illumination and pose remain
largely unsolved problems; hence, pose
compensation and illumination
compensation are discussed in order to
mitigate their effects. Current systems are
still far from the capability of the human
perception system, so solutions to the open
problems of face recognition are also
considered. An image morphing technique
was introduced to improve the efficiency
of face recognition. A comparison with the
principal component analysis method was
made to show that our algorithm is more
robust, simpler, and less computationally
intensive.




1. INTRODUCTION:
1.1 Face Recognition from Still Images
Automatic face recognition involves
three key steps/subtasks:
(1) detection and rough normalization of
faces,
(2) feature extraction and accurate
normalization of faces,
(3) identification and/or verification.




Fig. 1. Key steps for identification: face
detection, face/feature extraction, and face
recognition (identification/verification).
Other applications can run simultaneously:
face tracking, pose estimation, compression,
facial feature tracking, emotion recognition,
gaze estimation, and HCI systems;
recognition approaches include holistic
template, feature geometry, and hybrid
methods.

Sometimes, the different subtasks are not
totally separate. Face detection and
feature extraction can be achieved
simultaneously, as indicated in Figure 1.

2. Eigenfaces Method:
The first really
successful demonstration of machine
recognition of faces was made using
eigenpictures (also known as eigenfaces)
for face detection and identification. Given
the eigenfaces, every face in the database
can be represented as a vector of weights;
the weights are obtained by projecting the
image into eigenface components by a
simple inner product operation. When a
new test image whose identification is
required is given, the new image is also
represented by its vector of weights. The
identification of the test image is done by
locating the image in the database whose
weights are the closest to the weights of
the test image. By using the observation

that the projection of a face image and a
nonface image are usually different, a
method of detecting the presence of a face
in a given image is obtained. Using a
probabilistic measure of similarity, instead
of the simple Euclidean distance used with
eigenfaces, the standard eigenface
approach was extended.
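The projection and nearest-neighbor identification described above can be sketched as follows. This is a minimal NumPy illustration, not the original implementation; the function names and the use of SVD to obtain the eigenvectors from the centered training set are our own assumptions.

```python
import numpy as np

def train_eigenfaces(images, M):
    """Compute the top-M eigenfaces from a stack of flattened face images.

    images: (n_faces, n_pixels) array; M: number of eigenfaces to keep.
    Returns the mean face, the eigenfaces, and each training face's weights.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data yields the eigenvectors of the covariance
    # matrix (the eigenfaces) as rows of vt, ordered by decreasing eigenvalue.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:M]                      # (M, n_pixels)
    weights = centered @ eigenfaces.T        # simple inner-product projection
    return mean, eigenfaces, weights

def identify(test_image, mean, eigenfaces, gallery_weights):
    """Project a test face into face space and return the index of the
    gallery face whose weight vector is closest (Euclidean distance)."""
    w = (test_image - mean) @ eigenfaces.T
    dists = np.linalg.norm(gallery_weights - w, axis=1)
    return int(np.argmin(dists))
```

The distance of a test image to its own reconstruction in face space can additionally be used for the face/nonface decision mentioned in the text.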

2.1 Approaches for Eigenfaces:
The approach to face
recognition involves the following
initialization operations:
1) Acquire an initial set of face images.
2) Calculate the eigenfaces from the initial
set of faces, keeping only the M images
that correspond to the highest eigenvalues.
These M images define the face space; as
new faces are experienced, the eigenfaces
can be updated or recalculated.
3) Calculate the corresponding distribution
in M dimensional weight space for each
known individual, by projecting their face
images onto the face space.
Having initialized the
system, the following steps are then used
to recognize new face images:
1) Calculate a set of weights based on the
input image and the M eigenfaces by
projecting the input image onto each of the
eigenfaces.
2) Determine whether the image is a face
at all by checking whether it is sufficiently
close to face space.
3) If it is a face, classify the weight pattern
as that of a known person or as unknown.
4) Update the eigenfaces and/or weight
patterns.
5) If the same unknown face is seen
several times, calculate its characteristic
weight pattern and incorporate it into the
set of known faces.
The concept of eigenfaces can be
extended to eigenfeatures, such as
eigeneyes, eigenmouth, etc. Using a
limited set of images, recognition
performance as a function of the number of
eigenvectors was measured for eigenfaces
only and for the combined representation.
For lower-order spaces, the eigenfeatures
performed better than the eigenfaces;
when the combined set was used, only
marginal improvement was obtained.

Fig. 2. Comparison of matching:
(a) test views, (b) eigenface matches,
(c) eigenfeature matches.
These experiments support the claim that
feature-based mechanisms may be useful
when gross variations are present in the
input images (Figure 2). It seems better to
estimate the eigenmodes/eigenfaces that
have large eigenvalues (and so are more
robust against noise), while for estimating
higher-order eigenmodes it is better to use
local feature analysis (LFA).


Fig. 3. Comparison of dual eigenfaces
and standard eigenfaces:
(a) intrapersonal, (b) extrapersonal,
(c) standard.

The extrapersonal eigenfaces appear more
similar to the standard eigenfaces than the
intrapersonal ones do; the intrapersonal
eigenfaces represent subtle variations due
mostly to expression and lighting,
suggesting that they are more critical for
identification.


3. Enhanced Approach For Face
Recognition:

In 3D face recognition,
registration is a key preprocessing step.
The disadvantage of using curvatures to
register faces is that this process is very
computationally intensive and requires
very accurate range data to be successful.
On the other hand, the range images in the
database used for the research described in
this paper contain noise, such as holes and
spikes, which are difficult to eliminate
fully by automatic preprocessing
algorithms. Another registration method
often used involves choosing several user-
selected landmark locations on the face,
such as the tip of the nose, the inner and
outer corners of the eyes, and then using
the affine transformation to register the
face to a standard position. However, this
approach clearly relies on human
intervention, i.e., is not automatic.
A third method performs
registration by using moments. The matrix
constituted by the six second moments of
the face surface (eq. 1) contains the
rotational information of the face. By
applying the Singular Value
Decomposition (eq. 2), the resulting
unitary matrix U can be used as an affine
transformation matrix on the original face
surface. The problem with this method is
that during repeated scans, besides the
changes in the face area for the same
subject, there are also some changes
outside the face area. These additional
changes will also impact the registration of
the face surface, causing the registration
for different instances of the same subject
to be different.

        | m200  m110  m101 |
M =     | m110  m020  m011 |        (1)
        | m101  m011  m002 |

M = U Σ U^T  (singular value decomposition)        (2)
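The moment-based registration of eqs. (1) and (2) can be sketched with NumPy as below. This is our own illustration of the idea in the text: the second-moment matrix of the (centered) face surface points is decomposed, and the resulting unitary matrix U is used as the transformation matrix.

```python
import numpy as np

def second_moment_matrix(points):
    """Second-moment matrix M of a 3D face surface, as in eq. (1).
    points: (n, 3) array of surface samples; moments are taken about
    the centroid."""
    p = points - points.mean(axis=0)
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    m = lambda a, b: float(np.sum(a * b))
    return np.array([[m(x, x), m(x, y), m(x, z)],
                     [m(x, y), m(y, y), m(y, z)],
                     [m(x, z), m(y, z), m(z, z)]])

def registration_rotation(points):
    """Unitary matrix U from the SVD of M (eq. 2), usable as an affine
    transformation to bring the face surface to a canonical orientation."""
    M = second_moment_matrix(points)
    u, _, _ = np.linalg.svd(M)
    return u
```

As the text notes, changes outside the face area alter M and hence U, which is why this method can register different scans of the same subject inconsistently.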
In our registration process, we assume that
each subject kept his head upright during
scanning, so that the face orientation
around the Z axis (yaw) does not need to
be corrected, but the orientation changes in
the X (roll) and Y (pitch) axes need to be
compensated for. Before registration,

original range images are filtered to get rid
of most of the holes and the spikes. The
registration process starts by locating the
tip of the nose. Then a unitary
transformation matrix is used to rotate the
range image a certain angle around the Y
(pitch) axis, and a score which measures
the difference between the left and right
halves of the face is computed. The angle
which minimizes the score is chosen to
register the face around the Y axis (pitch).
This criterion is based on the symmetric
property of the face. The registration
process around the X axis (roll) is similar,
the transformation matrix is used to rotate
the range image a certain angle around the
X (roll) axis and a score is computed for
each angle. But in this case the score
measures the difference between two
feature points in the central vertical profile.
The aim is to make these two feature
points, one in the forehead, and the other at
the end of the chin in the central vertical
profile, be at the same height. After
registration, because of the potential for
different scales in the range images for the
same subject, the range images are
normalized, resulting in the same length
for the nose bases for all the subjects.
During the registration and normalization
process, the bilinear interpolation method
is used to find the range of a point located
between two sampling points. The result of
the preprocessing is a range image which
has a grid of 201 by 201 points. The point
(101, 101) of the grid is made to coincide
with the tip of the nose in the registered
face surface. The value associated with
each point in the grid is the distance
between the point in the face surface and
the corresponding location on the XY
plane. The values are offset so that the
value corresponding to the tip of the nose
is normalized to 100 mm. Values below 20
mm in the grid area are thresholded to 20
mm. Figures 4 and 5 show a mesh plot of
the range image and the gray-level image of

the subject.

Fig. 4. Mesh plot of the range image.

Fig. 5. Gray-level image plot of the range data.
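The symmetry-based registration step described above (rotating about the Y axis and choosing the angle that minimizes the left-right difference) might be sketched as follows. This is a simplified illustration under our own assumptions; in particular, `render` is a hypothetical helper that rotates the face surface by a given angle and resamples it into a range image.

```python
import numpy as np

def symmetry_score(range_img):
    """Mean absolute difference between the left half of a range image
    and the mirrored right half (nose tip assumed on the central column)."""
    _, w = range_img.shape
    half = w // 2
    left = range_img[:, :half]
    right = np.fliplr(range_img[:, w - half:])
    return float(np.abs(left - right).mean())

def register_pitch(render, angles):
    """Pick the rotation angle about the Y axis whose rendered range image
    is most left-right symmetric; `render(angle)` is a caller-supplied
    (hypothetical) rotate-and-resample function."""
    scores = [symmetry_score(render(a)) for a in angles]
    return angles[int(np.argmin(scores))]
```

The X-axis registration step is analogous, except that the score compares the heights of the two feature points (forehead and chin) on the central vertical profile instead of the two face halves.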

4. Recognition Experiments and Results:
The use of profile matching
as a means for face recognition is a very
intuitive idea that has been proposed in the
past. In our previous research, we have, in
fact, tested the efficiency of several
potential profile combinations used for
identification. The central vertical profile
was found to be the most powerful one to
characterize a face. Besides profiles, the
contour was also tested for its potential
applicability for face recognition. In our
experiment, a contour, which is defined
30mm below the tip of the nose, was
extracted for each scan. Although some
researchers have used the Hausdorff
distance to compute the distance or
dissimilarity between profiles and contours,
we found that the Euclidean distance is
suitable in the context of our
experiment.
approaches were tried to compare faces in
the experimental data described above:


a) Central vertical profile.
b) Contour, defined 30mm below the tip of
the nose.
c) Central vertical profile and contour
combined.

d) Principal component analysis method.
In (a) and (b), the central vertical profile
alone and the contour alone were used as
the feature to compare for face
recognition. In (c), the central vertical
profile and the contour were combined to
recognize faces. A sum rule and a
multiplication rule were used in the
combination of the two features. The final
score for a gallery image was the sum or
product of the (a) and (b) ranks, and the
gallery image which had the smallest score
was picked as the target subject for the
probe image. The principal component
analysis method was used in (d) for
comparison purposes. 80 gallery range
images were used for the training set. The
first fifteen eigenvectors were kept to
reduce the dimension of the original range
images to 15. The Mahalanobis distance
metric was used to find the face in the
gallery. The first approach, (a), gave a
rank-one recognition rate of 70%, and
(b) gave a rank-one recognition rate of
76.25%. The rank-one recognition
rates of (c) were 80% (sum rule) and
81.25% (multiplication rule), which are
significantly higher than (a) and (b). The
principal component analysis method in
(d) gave the lowest rank-one recognition
rate of only 50%.
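The sum-rule and multiplication-rule combination used in approach (c) can be sketched as below. This is a minimal illustration under our own naming; each gallery face is ranked separately by the profile distance and the contour distance, and the combined score is the sum or product of the two ranks.

```python
import numpy as np

def combine_ranks(profile_dists, contour_dists, rule="sum"):
    """Combine two per-gallery distance lists by ranking each feature
    separately, then taking the sum or product of the ranks; the gallery
    image with the smallest combined score is chosen as the match."""
    # Double argsort converts distances into 1-based ranks.
    r1 = np.argsort(np.argsort(profile_dists)) + 1
    r2 = np.argsort(np.argsort(contour_dists)) + 1
    score = r1 + r2 if rule == "sum" else r1 * r2
    return int(np.argmin(score))
```

Because the two features are only weakly correlated, a gallery face that ranks well under both features tends to dominate the combined score, which is consistent with the higher rank-one rates reported for (c).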
The 3D images used for these experiments
contained imperfections (e.g., holes and
spikes). In spite of the application of some
automatic pre-processing to eliminate
some of these artifacts, a number of these
imperfections were still present in the files
used for the tests. For this reason, the
principal component analysis method,
which is very sensitive to noise, only
attains a recognition rate of 50%. On the
other hand, the central vertical profile
method and the contour method get high
recognition rates using the same data.
Since these two features are extracted from
part of the entire range image, they are
more robust to the influence of noise. In
addition, these two features are still
powerful distinctive features for a face.
When they are combined, the recognition
rate is significantly higher than the

recognition rate for any of them alone.
Because these two features are only
weakly correlated with each other,
combining them provides complementary
information that benefits the
recognition rate. Unlike the
principal component analysis method, this
method does not need training and is very
simple to implement and computationally
efficient.
Given a probe image and a
claimed identity, the output of the classifier
is a similarity score between the probe
image and images of the person with the
given identity. In the case of face
recognition, this test is performed for all
subjects in the database and the one with
the greatest similarity is selected. The
similarity scores S_C and S_D obtained by
the color and depth classifiers,
respectively, are subsequently normalized
to the range [0, 1]. Fusion of color and
depth cues is
performed by computing a combined
similarity score given by
log S_{C+D} = w_c · log S_C + w_d · log S_D

where w_c and w_d are relative weights
corresponding to the color and depth
modalities, respectively, chosen
experimentally (w_c = w_d = 0.5 was used in
our experiments). We used this
experimental methodology to evaluate the
effectiveness of the proposed algorithms
and perform comparison with other
techniques as detailed in the sequel.

5. Pose Compensation:
A face normalization procedure is
described based on the detection of facial
features such as the eyes and mouth and
the adaptation of a generic 3D model.
Another approach is based on building a
pose-varying eigenspace using several
images of each person; representative
techniques are the view-based subspace
and the predictive characterized subspace.
The main advantage of our technique is
that pose estimation and compensation are
based solely on 3D images and thus
exhibit robustness under varying
illumination and occlusion of facial
features (beard, hair, glasses, etc.).
Another advantage stemming
from the availability of 3D data is that

geometric artifacts which may be present
after image warping are avoided.

6. Illumination Compensation:
Our algorithm
utilizes a 3D range image and an albedo
map of the person's face to render novel
images under arbitrary illumination. The
shortcomings of this approach are:
a) the requirement, in practice, of large
example sets to achieve good
reconstructions,
b) the increase in the dimensionality of
the classification problem caused by the
introduction of illumination variability,
c) the requirement of pixel-wise alignment
between probe and gallery images, which
necessitates pose compensation, and
d) the reliance on simplified reflectance
models.

7. Contribution of Image Morphing:

We have used image morphing to
achieve smarter image recognition and to
increase the efficiency, accuracy, and
reliability of the system. Effective face
recognition can be done with a morphed
image. When the system has been trained
with some images, it stores the key
information of those images, along with
the respective error and weight values, in
its knowledge base. When a sample image
is presented for recognition, the system
first morphs the image in several steps
from -10 to +10 degrees. Fig. 6 shows how
the image is morphed.


Fig. 6. Intermediate stages of morphing the
flag of Sweden.

From the intermediate stages of morphing,
we choose some reasonable stages near
-10 degrees and +10 degrees for the
recognition process. We do so because the
sample input image may differ slightly
from the image that the system learned,
and that difference may exceed the
tolerance of our system.
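The morph-sweep recognition described above can be sketched as follows. This is our own simplified rendering of the idea: `transform` stands in for the (unspecified) morphing operation applied to the sample at each stage from -10 to +10 degrees, and the best match over all intermediate stages wins.

```python
import numpy as np

def recognize_with_sweep(sample, gallery, transform, angles=range(-10, 11)):
    """Match a sample (flattened image vector) against a gallery after
    sweeping it through small morphs; `transform(sample, angle)` is a
    caller-supplied, hypothetical morphing function."""
    best_dist, best_idx = float("inf"), None
    for angle in angles:
        warped = transform(sample, angle)
        for idx, g in enumerate(gallery):
            d = float(np.linalg.norm(warped - g))
            if d < best_dist:
                best_dist, best_idx = d, idx
    return best_idx
```

Sweeping over intermediate stages widens the effective tolerance of the matcher: a sample that differs slightly from the learned image can still fall within tolerance at one of the morphed stages.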

8. Conclusion:

In this paper, a new algorithm using
profiles and contours to recognize faces is
proposed and compared with the principal
component analysis method. The following
conclusions can be reached:
1) Both the central vertical profile and the
contour of a face have the potential to be
used as features in face recognition
algorithms.
2) Combining the central vertical profile
and the contour yields a better recognition
rate than using the central vertical profile
or the contour alone.
3) As demonstrated by extensive
experimental evaluation, the proposed
algorithms also lead to superior face
recognition rates of about 85.25%.
4) In comparison to principal component
analysis, which is sensitive to noise in the
range images, methods based on the
central vertical profile and the contour are
more robust to range image quality, and
the recognition method based on these
features is simpler and less
computationally intensive.






