
Face Recognition and Its Applications
Part 1

Based on works of:
Jinshan Tang; Ariel P from Hebrew University; Mircea Focşa, UMFT; Xiaozhen Niu, Department of Computing Science, University of Alberta; Christine Podilchuk, chrisp1@caip.rutgers.edu, http://www.caip.rutgers.edu/wiselab
Contents
• Introduction
• Face detection using color information
• Face matching
• Face segmentation/detection
• Facial feature extraction
• Face recognition
• Video-based face recognition
• Comparison
• Conclusion
• References
Face Segmentation/Detection
During the past ten years, considerable progress has been made in the multi-face recognition area. This includes:
 Example-based learning approach by Sung and
Poggio (1994).
 The neural network approach by Rowley et al.
(1998).
 Support vector machine (SVM) by Osuna et al.
(1997).
Introduction
Basic steps for face recognition
Input face image → Face detection → Face feature extraction → Face recognition → Output result
The face recognition step matches the extracted features against a face database; a decision maker produces the final result.
Face detection
• Geometric information based face detection
• Color information based face detection
• Combining them together

[Figure: (a) geometric information based face detection; (b) color information based face detection]
Color information based face detection
Face color is different from the background, so the choice of color space is very important.
Color spaces:
• R,G,B
• YCbCr
• YUV
• r,g
• …

[Figure 4. Skin color vs. background color distribution in a complex background]
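The idea above can be sketched as a simple skin-color classifier: convert RGB to YCbCr and threshold the chrominance channels. This is a minimal sketch; the Cb/Cr ranges below are commonly cited heuristics used here as assumptions, not values from these slides.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an RGB image (float, 0..255) to YCbCr (ITU-R BT.601 weights)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of pixels whose chrominance falls in a skin-like box.
    The default Cb/Cr ranges are illustrative assumptions."""
    ycbcr = rgb_to_ycbcr(np.asarray(rgb, dtype=np.float64))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Because only the chrominance channels are thresholded, the classifier is largely insensitive to brightness, which is why YCbCr-style spaces are preferred over raw RGB here.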
A face detection algorithm using color and geometric information
Ideas: (1) compensate for lighting, (2) separate skin from background by transforming to a new (sub)space.
Feature-based face detection
Color can be used in segmentation and grouping of image subareas.
The location and shape parameters of the eyes are the most important features; they are detected through segmentation and morphological operations (dilation and erosion).
Ideas:
• Eyes
• Mouth
• Boundary (edge detection)
• Boundary approximated by an ellipse or a similar shape (Hough transform)
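The morphological operations mentioned above can be sketched in pure NumPy on a binary mask; a 3×3 square structuring element is assumed here for illustration.

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element:
    a pixel is set if any pixel in its 3x3 neighborhood is set."""
    p = np.pad(mask, 1, mode="constant")
    out = np.zeros_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy : p.shape[0] - 1 + dy,
                     1 + dx : p.shape[1] - 1 + dx]
    return out

def erode(mask):
    """Binary erosion: a pixel survives only if its whole 3x3
    neighborhood is set."""
    p = np.pad(mask, 1, mode="constant")
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : p.shape[0] - 1 + dy,
                     1 + dx : p.shape[1] - 1 + dx]
    return out
```

Composing the two gives the usual cleanup operators: `dilate(erode(mask))` (opening) removes small speckles, while `erode(dilate(mask))` (closing) fills small holes in candidate eye regions.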
The concept of eyeglasses

The concept of half-profiles

Face Matching
• Feature based face matching
• Template matching

Features versus templates

Feature based face matching:
Face image (from face detection) → Normalization → Feature extraction → Feature vector → Classifier → Decision maker → Output results
You can extract various features, use various classifiers, and use various decision makers.
Normalization
Eye locations drive normalization: rotation normalization and scale normalization.

Normalized cross-correlation between the object patch I_T (the image region under the template) and the template T, averaged for objects:

C_N(y) = ( mean(I_T · T) − mean(I_T) · mean(T) ) / ( σ(I_T) · σ(T) )
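The normalized cross-correlation above can be written directly in NumPy; `patch` plays the role of I_T and `template` the role of T.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation:
    C_N = (mean(I_T * T) - mean(I_T) * mean(T)) / (sigma(I_T) * sigma(T))."""
    p = np.asarray(patch, dtype=np.float64).ravel()
    t = np.asarray(template, dtype=np.float64).ravel()
    num = (p * t).mean() - p.mean() * t.mean()
    den = p.std() * t.std()
    return num / den if den > 0 else 0.0
```

The score is invariant to affine intensity changes of either input (brightness offset and contrast scaling), which is exactly why this measure is used after normalization: `ncc(a, 2*a + 5)` is still 1.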
Feature extraction
• Eyebrow thickness and vertical position at the eye center position
• A coarse description of the left eyebrow's arches
• Nose vertical position and width
• Mouth vertical position, width, and height of the upper and lower lips
• Eleven radii describing the chin shape
• Bigonial breadth (face width at nose position)
• Zygomatic breadth (face width halfway between nose tip and eyes)

35-D feature vector

[Figure: example of some geometrical features]
Classifier
The Bayes classifier below is just one example; others include decision trees, expressions, decomposed structures, and neural networks (NNs).

Δ_j(x) = (x − m_j)^T Σ_j^{-1} (x − m_j)

where m_j is the mean of class j and Σ_j its covariance matrix (the Mahalanobis distance).

Feature vector x → Compute the distance values Δ_j(x) for j = 1, 2, …, N → Rank the distance values → Output results
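A minimal sketch of the distance classifier above: compute Δ_j(x) for every class and pick the smallest. The class means and covariances in the usage below are toy values, and Σ_j is assumed to be the per-class covariance matrix.

```python
import numpy as np

def mahalanobis_sq(x, mean, cov):
    """Delta_j(x) = (x - m_j)^T Sigma_j^{-1} (x - m_j)."""
    d = x - mean
    return float(d @ np.linalg.inv(cov) @ d)

def classify(x, means, covs):
    """Return the index j of the class with the smallest Delta_j(x)."""
    dists = [mahalanobis_sq(x, m, c) for m, c in zip(means, covs)]
    return int(np.argmin(dists))
```

With identity covariances this reduces to nearest-mean classification in Euclidean distance; non-identity Σ_j lets the classifier account for how each class's features spread.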
ANN Classifier
Feature vector → one ANN per class (Class 1, Class 2, …) → MAXNET → Classification results
• one-class-in-one network
• multi-class-in-one network

[Fig. 2. one-class-in-one network]
Template matching
Produce a template. You have to create a database of templates for all the people you want to recognize.
Face image (from face detection) → Normalization → Matching against the templates database → Decision maker → Output results
Different templates are used for various regions of the normalized face.
Various methods can be used to compress the information for each template.
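Template matching can be sketched as a sliding-window search that scores every window of the image against a stored template. The score here is a plain normalized cross-correlation, one of several possible choices, not the slides' exact matcher.

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image` and return the (row, col) position
    and score of the best normalized cross-correlation match."""
    H, W = image.shape
    h, w = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            win = image[i:i + h, j:j + w]
            wz = win - win.mean()
            wn = np.linalg.norm(wz)
            if wn == 0 or tn == 0:
                continue  # flat window: correlation undefined, skip it
            score = float((wz * t).sum() / (wn * tn))
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```

A production system would run this per face region (eyes, nose, mouth) against each person's template set and feed the scores to the decision maker.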
Example-based learning approach (EBL)

Three parts:
• The image is divided into many possibly overlapping windows, and each window pattern is classified as either "a face" or "not a face" based on a set of local image measurements.
• For each new pattern to be classified, the system computes a set of distance measurements between the new pattern and the canonical face model.
• A trained classifier identifies the new pattern as "a face" or "not a face".
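The EBL steps can be sketched as a window generator plus a pluggable classifier; `looks_like_face` below is a stand-in for the trained classifier and distance measurements, not Sung and Poggio's actual model.

```python
import numpy as np

def windows(image, size, stride):
    """Yield (row, col, window) for possibly overlapping square windows."""
    H, W = image.shape
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            yield i, j, image[i:i + size, j:j + size]

def detect_faces(image, size, stride, looks_like_face):
    """Classify every window as 'a face' or 'not a face' and
    return the positions of the face hits."""
    return [(i, j) for i, j, win in windows(image, size, stride)
            if looks_like_face(win)]
```

Scanning at several window sizes (an image pyramid) extends the same loop to faces of different scales.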
Example of a system using
EBL
Neural network (NN)
Kanade et al. first proposed an NN-based approach in 1996.
Although NNs have received significant attention in many research areas, few applications have been successful in face recognition.

Why?
Neural network (NN)
It is easy to train a neural network with samples that contain faces, but it is much harder to train one with samples that do not.
The number of "non-face" samples is just too large.
Neural network (NN)
Neural network-based filter:
• A small filter window is used to scan through all portions of the image and to detect whether a face exists in each window.
• Merging overlapping detections and arbitration: by setting a small threshold, many false detections can be eliminated.
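The merging-and-arbitration step can be sketched as grouping overlapping raw detections and keeping only groups with enough votes; the overlap rule and threshold below are illustrative assumptions.

```python
def merge_detections(boxes, min_votes=2):
    """Group overlapping (x, y, w, h) boxes; keep only groups that gathered
    at least `min_votes` raw detections, eliminating isolated false alarms."""
    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    groups = []
    for box in boxes:
        for g in groups:
            if any(overlaps(box, member) for member in g):
                g.append(box)
                break
        else:
            groups.append([box])
    # Report the average box of each surviving group.
    return [tuple(sum(v) / len(g) for v in zip(*g))
            for g in groups if len(g) >= min_votes]
```

A true face tends to fire the filter at several nearby positions and scales, while false detections are usually isolated, which is what this vote threshold exploits.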
An example of using NN
Test results of using NN
SVM (Support Vector Machine)
SVM was first applied to face detection in 1997. It can be viewed as a way to train polynomial neural networks or radial basis function classifiers.
It can improve accuracy and reduce computation.
Comparison with Example Based Learning (EBL)
Test results reported in 1997, using two test sets (155 faces):
• SVM achieved a better detection rate and fewer false alarms.
Recent approaches
The face segmentation/detection research area remains active, for example:
• An integrated SVM approach to multi-face detection and recognition was proposed in 2000.
• A technique of background learning was proposed in August 2002.
Still lots of potential!
Static face recognition
Numerous face recognition methods/algorithms have been proposed in the last 20 years; several representative approaches are:
• Eigenface
• LDA/FDA (linear discriminant analysis / Fisher discriminant analysis)
• Neural network (NN)
• PCA (principal component analysis)
• Discrete Hidden Markov Models (DHMM)
• Continuous Density HMM (CDHMM)
Eigenface
The basic steps are:
• Registration. A face in an input image first must be located and registered in a standard-size frame.
• Eigenrepresentation. Every face in the database can be represented as a vector of weights; principal component analysis (PCA) is used to encode face images and capture face features.
• Identification. This is done by locating the images in the database whose weights are the closest (in Euclidean distance) to the weights of the input image.
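The three eigenface steps can be sketched with PCA computed via SVD; the flattened face matrix and query below stand in for registered face images.

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_faces, n_pixels) matrix of registered face images.
    Returns the mean face, the top-k eigenfaces, and each database
    face encoded as a k-dimensional weight vector."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are the principal axes ("eigenfaces").
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]
    weights = centered @ eigenfaces.T  # PCA encoding of each face
    return mean, eigenfaces, weights

def identify(query, mean, eigenfaces, weights):
    """Project the query into weight space and return the index of the
    database face with the closest weights (Euclidean distance)."""
    w = (query - mean) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))
```

Identification thus never compares pixels directly: both the database and the query live in the low-dimensional weight space that PCA produces.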
LDA/FDA
The face recognition method using LDA/FDA is called the Fisherface method.
Eigenface uses linear PCA, which is not optimal for discriminating one face class from the others.
The Fisherface method seeks a linear transformation that maximizes the between-class scatter and minimizes the within-class scatter.
Test results demonstrated that LDA/FDA is better than eigenface using linear PCA (1997).
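For two classes, the Fisher criterion above reduces to a single projection direction w = S_W^{-1}(m_1 − m_2); the sketch below computes the within-class scatter on toy data rather than real face features.

```python
import numpy as np

def fisher_direction(X1, X2):
    """Two-class Fisher discriminant: the direction that maximizes
    between-class scatter relative to within-class scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter S_W: summed outer products of centered samples.
    sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    w = np.linalg.solve(sw, m1 - m2)
    return w / np.linalg.norm(w)
```

Note how dividing by S_W downweights directions where each class already spreads widely: the discriminant prefers axes along which the class means are separated but the classes themselves are tight.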
Test results of LDA
Test results of a subspace LDA-
based face recognition method in
1999.
Video-based Face Recognition
Three challenges:
• Low quality
• Small images
• Characteristics of face/human objects
Three advantages:
• Provides much more information
• Allows tracking of the face image
• Provides continuity, which allows reuse of classification information from high-quality images when processing low-quality images in a video sequence
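The continuity advantage can be sketched as quality-weighted voting across frames: each frame votes for an identity with a confidence, and high-quality frames dominate the final decision. The frame tuple structure and weighting are illustrative assumptions, not a method from the slides.

```python
from collections import defaultdict

def fuse_frames(frame_results):
    """frame_results: list of (identity, confidence, quality) tuples,
    one per video frame. Accumulate quality-weighted confidence per
    identity and return the winner, so decisions from high-quality
    frames carry over to low-quality ones."""
    scores = defaultdict(float)
    for identity, confidence, quality in frame_results:
        scores[identity] += confidence * quality
    return max(scores, key=scores.get)
```

A single blurry frame that misidentifies the subject is outvoted by sharper frames earlier or later in the sequence.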
Basic steps for video-based face recognition
• Object segmentation/detection.
• Structure from motion. The goal of this step is to estimate the 3D depths of points from the image sequence.
• 3D models for faces. Using a 3D model to match frontal views of the face.
• Non-rigid motion analysis.
Recent approaches
Most video-based face recognition systems have three modules: detection, tracking, and recognition.
• An access control system using a Radial Basis Function (RBF) network was proposed in 1997.
• A generic approach based on posterior estimation using sequential Monte Carlo methods was proposed in 2000.
• A scheme based on streaming face recognition was proposed in 2002.
The Streaming Face Recognition (SFR) scheme
Combines several decision rules, such as:
• Discrete Hidden Markov Models (DHMM) and
• Continuous Density HMM (CDHMM).
The test achieved a 99% correct recognition rate in the intelligent room.
Comparison
The two most representative and important protocols for face recognition evaluation:
• The FERET protocol (1994).
  • Consists of 14,126 images of 1,199 individuals.
  • Three evaluation tests were administered in 1994, 1996, and 1997.
• The XM2VTS protocol (1999).
  • Expansion of the previous M2VTS program (5 shots of each of 37 subjects).
  • Now consists of 295 subjects.
  • The results of M2VTS/XM2VTS can be used in a wide range of applications.
1996/1997 FERET Evaluations
Compared ten algorithms.
Conclusion
• Face recognition has many potential applications.
• For many years it was not very successful; we need to improve the accuracy of face recognition.
• Combining face recognition with other biometric recognition technologies, such as:
  • fingerprint recognition,
  • voice recognition,
  • and so on.
• For our applications, accuracy is much more important than
Conclusion
Significant achievements have been made recently.
• LDA-based methods and NN-based methods are very successful.
FERET and XM2VTS have had a significant impact on the development of face recognition algorithms.
Challenges still exist, such as pose and illumination changes.
The face recognition area will remain active for a long time.
References
[1] W. Zhao, R. Chellappa, A. Rosenfeld, and P.J. Phillips, Face Recognition: A
Literature Survey, UMD CFAR Technical Report CAR-TR-948, 2000.
[2] K. Sung and T. Poggio, Example-based Learning for View-based Human Face
Detection, A.I. Memo 1521, MIT A.I. Laboratory, 1994.
[3] H.A. Rowley, S. Baluja, and T. Kanade, Neural Network Based Face
Detection, IEEE Trans. On Pattern Analysis and Machine Intelligence, Vol. 20,
1998.
[4] E. Osuna, R. Freund, and F. Girosi, Training Support Vector Machines: An
Application to Face Recognition, in IEEE Conference on Computer Vision and
Pattern Recognition, pp. 130-136, 1997.
[5] M. Turk and A. Pentland, Eigenfaces for Recognition, Journal of Cognitive
Neuroscience, Vol.3, pp. 72-86, 1991.
[6] W. Zhao, Robust Image Based 3D Face Recognition, PhD thesis, University
of Maryland, 1999.
[7] K.S. Huang and M.M. Trivedi, Streaming Face Recognition using Multicamera
Video Arrays, 16th International Conference on Pattern Recognition (ICPR).
August 11-15, 2002.
[8] P.J. Phillips, P. Rauss, and S. Der, FERET (Face Recognition Technology)
Recognition Algorithm Development and Test Report, Technical Report ARL-TR
995, U.S. Army Research Laboratory.
[9] K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre, XM2VTSDB: The
Extended M2VTS Database, in Proceedings, International Conference on Audio
