Based on the works of:
Jinshan Tang; Ariel P from Hebrew University; Mircea Focşa, UMFT; Xiaozhen Niu, Department of Computing Science, University of Alberta; Christine Podilchuk, chrisp1@caip.rutgers.edu, http://www.caip.rutgers.edu/wiselab
Contents
Introduction
Face detection using color information
Face matching
Face Segmentation/Detection
Facial Feature extraction
Face Recognition
Video-based Face Recognition
Comparison
Conclusion
References
Face Segmentation/Detection
During the past ten years, considerable progress has been made in the area of face segmentation and detection. This includes:
• The example-based learning approach by Sung and Poggio (1994).
• The neural network approach by Rowley et al. (1998).
• The support vector machine (SVM) approach by Osuna et al. (1997).
Introduction
Basic steps for face recognition:
input face image → face detection → output result
Face detection
• Geometric-information-based face detection
• Color-information-based face detection
• Combining the two
[Figure: (a) geometric-information-based face detection; (b) color-information-based face detection]
Color-information-based face detection
Face color differs from the background, so the choice of color space is very important.
[Figure: skin color vs. background color distributions]
Color spaces:
• R, G, B
• YCbCr
• YUV
• r, g
• …
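Such a color-based face detector can be sketched as a simple threshold rule in the YCbCr space. The conversion follows ITU-R BT.601; the Cb/Cr ranges below are commonly cited illustrative values, not thresholds taken from this deck:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an H x W x 3 uint8 RGB image to YCbCr (ITU-R BT.601)."""
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of likely skin pixels.

    The Cb/Cr ranges are illustrative defaults; in practice they are
    tuned (or learned) for the camera and lighting conditions.
    """
    ycbcr = rgb_to_ycbcr(img)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Connected regions of the resulting mask become face candidates, which a later stage verifies.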
Features versus templates
• Feature-based face matching: various features can be extracted.
Pipeline: face image from face detection → normalization → feature extraction → feature vector → classifier → decision maker → output results
Normalization: eye location, rotation normalization, scale normalization.
Templates are averaged over example objects.
Cross correlation between the object patch I_T and the template T:

C_N(y) = [ mean(I_T · T) − mean(I_T) · mean(T) ] / [ σ(I_T) · σ(T) ]
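The normalized cross-correlation above translates directly into code (a minimal sketch; the names `patch` for I_T and `template` for T are mine):

```python
import numpy as np

def normalized_cross_correlation(patch, template):
    """C_N = (mean(I_T * T) - mean(I_T) * mean(T)) / (sigma(I_T) * sigma(T)).

    `patch` (I_T) is the image region under the template and must have
    the same shape as `template` (T). Returns a value in [-1, 1].
    """
    patch = patch.astype(np.float64)
    template = template.astype(np.float64)
    num = (patch * template).mean() - patch.mean() * template.mean()
    den = patch.std() * template.std()
    return num / den if den != 0 else 0.0
```

Sliding the template over the image and evaluating C_N at each offset y gives the correlation surface whose peaks mark candidate matches.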
Feature extraction
• Eyebrow thickness and vertical position at the eye center position
• A coarse description of the left eyebrow’s arches
• Nose vertical position and width
• Mouth vertical position, width, and height of the upper and lower lips
• Eleven radii describing the chin shape
• Bigonial breadth (face width at nose position)
• Zygomatic breadth (face width halfway between nose tip and eyes)
The distance of a feature vector x to the mean m_j of class j (with covariance Σ) is

Δ_j(x) = (x − m_j)^T Σ^(−1) (x − m_j)

Pipeline: feature vector x → compute the distance values Δ_j(x) (j = 1, 2, …, N) → rank the Δ_j(x) values → output the results
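A minimal sketch of this distance classifier, assuming a covariance matrix Σ shared across classes and given class means m_j (all function and variable names are mine):

```python
import numpy as np

def mahalanobis_classify(x, class_means, cov):
    """Rank classes by Delta_j(x) = (x - m_j)^T Sigma^{-1} (x - m_j).

    Returns (index of the nearest class, list of (distance, j) pairs
    sorted from smallest to largest distance).
    """
    cov_inv = np.linalg.inv(cov)
    dists = []
    for j, m in enumerate(class_means):
        d = np.asarray(x, dtype=float) - np.asarray(m, dtype=float)
        dists.append((float(d @ cov_inv @ d), j))
    dists.sort()  # rank the Delta_j(x) values
    return dists[0][1], dists
```

With Σ = I this reduces to ranking classes by squared Euclidean distance to each mean.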
ANN classifier
Pipeline: feature vector → one ANN per class (one-class-in-one network: Class 1, Class 2, …) → MAXNET (multi-class-in-one network) → classification results → decision maker → output results
Different templates are used in various regions of the normalized face. Various methods can be used to compress the information for each template.
Example-based learning approach (EBL)
Three parts:
1. The image is divided into many possibly overlapping windows, and each window pattern is classified as either “a face” or “not a face” based on a set of local image measurements.
2. For each new pattern to be classified, the system computes a set of difference measurements between the new pattern and the canonical face model.
3. A trained classifier identifies the new pattern as “a face” or “not a face”.
Example of a system using EBL
Neural network (NN)
Kanade et al. first proposed an NN-based approach in 1996.
Although NNs have received significant attention in many research areas, few applications have been successful in face recognition.
Why?
Neural network (NN)
It is easy to train a neural network with samples that contain faces, but it is much harder to train it with samples that do not.
The number of “non-face” samples is just too large.
Neural network (NN)
Neural-network-based filter: a small filter window is used to scan through all portions of the image and to detect whether a face exists in each window.
Merging overlapping detections and arbitration: by setting a small threshold, many false detections can be eliminated.
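The filter-and-arbitrate idea can be sketched as follows. Here `classify` is a stand-in for the trained neural-network filter, and the overlap count is a simplified form of the arbitration step; window size, stride, and the overlap threshold are illustrative choices of mine:

```python
import numpy as np

def scan_for_faces(image, classify, win=20, step=4, min_overlaps=2):
    """Slide a win x win window over a 2-D grayscale image.

    `classify(window)` returns True when the window looks like a face.
    A detection is kept only if at least `min_overlaps` windows fired
    in its neighborhood (the hit itself counts once), which eliminates
    many isolated false detections.
    """
    h, w = image.shape
    hits = []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            if classify(image[y:y + win, x:x + win]):
                hits.append((y, x))
    # arbitration: require supporting detections nearby
    kept = []
    for (y, x) in hits:
        near = sum(1 for (y2, x2) in hits
                   if abs(y - y2) <= win // 2 and abs(x - x2) <= win // 2)
        if near >= min_overlaps:
            kept.append((y, x))
    return kept
```

In the full system the image is also rescanned at several scales so that larger faces fall within the fixed window size.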
An example of using NN
Test results of using NN
SVM (Support Vector Machine)
SVM-based face detection was first proposed in 1997; it can be viewed as a way to train polynomial neural networks or radial basis function classifiers.
It can improve accuracy and reduce computation.
Comparison with example-based learning (EBL)
Face Recognition
• Eigenface
• LDA/FDA (Linear Discriminant Analysis / Fisher Discriminant Analysis)
• Neural network (NN)
• PCA (Principal Component Analysis)
• Discrete Hidden Markov Models (DHMM)
• Continuous-Density HMM (CDHMM)
Eigenface
The basic steps are:
Registration: a face in an input image must first be located and registered in a standard-size frame.
Eigenrepresentation: every face in the database can be represented as a vector of weights; principal component analysis (PCA) is used to encode face images and capture face features.
Identification: done by locating the images in the database whose weights are the closest (in Euclidean distance) to the weights of the input face.
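The three eigenface steps can be sketched with plain PCA on flattened, registered face images. This assumes the standard small-matrix trick (eigenvectors of A·Aᵀ instead of the huge Aᵀ·A); all function names are mine:

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: N x D matrix, one flattened registered face per row.
    Returns (mean_face, eigenfaces) with eigenfaces of shape k x D."""
    mean = faces.mean(axis=0)
    A = faces - mean
    # eigenvectors of the small N x N matrix A A^T, then back-project
    vals, vecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(vals)[::-1][:k]          # top-k eigenvalues
    eig = (A.T @ vecs[:, order]).T              # k x D eigenfaces
    eig /= np.linalg.norm(eig, axis=1, keepdims=True)
    return mean, eig

def project(face, mean, eig):
    """Weight vector of a face in the eigenface subspace."""
    return eig @ (face - mean)

def identify(probe, gallery_weights, mean, eig):
    """Nearest gallery face by Euclidean distance between weight vectors."""
    w = project(probe, mean, eig)
    d = np.linalg.norm(gallery_weights - w, axis=1)
    return int(np.argmin(d))
```

Gallery faces are projected once at enrollment; at query time only one projection and N distance comparisons are needed.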
LDA/FDA
The face recognition method using LDA/FDA is called the Fisherface method.
Eigenface uses linear PCA, which is not optimal for discriminating one face class from the others.
The Fisherface method seeks a linear transformation that maximizes the between-class scatter while minimizing the within-class scatter.
Test results demonstrated that LDA/FDA performs better than eigenface using linear PCA (1997).
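The scatter-maximizing transformation can be sketched as generic Fisher LDA (the full Fisherface pipeline additionally reduces dimensionality with PCA first so that the within-class scatter stays invertible; the small regularization term below is my substitute for that step):

```python
import numpy as np

def fisher_directions(X, y, k):
    """X: N x D feature matrix, y: class labels. Returns a k x D
    projection maximizing between-class over within-class scatter."""
    mean_all = X.mean(axis=0)
    D = X.shape[1]
    Sw = np.zeros((D, D))   # within-class scatter
    Sb = np.zeros((D, D))   # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (d @ d.T)
    # top eigenvectors of Sw^{-1} Sb (tiny ridge keeps Sw invertible)
    vals, vecs = np.linalg.eig(np.linalg.inv(Sw + 1e-6 * np.eye(D)) @ Sb)
    order = np.argsort(vals.real)[::-1][:k]
    return vecs[:, order].real.T
```

For C classes, Sb has rank at most C − 1, so at most C − 1 useful directions exist.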
Test results of LDA
Test results of a subspace-LDA-based face recognition method in 1999.
Video-based Face Recognition
Three challenges:
• Low quality.
• Small images.
• Characteristics of face/human objects.
Three advantages:
• Allows much more information.
• Tracking of the face image.
• Provides continuity, which allows classification information from high-quality images to be reused when processing low-quality images in a video sequence.
Basic steps for video-based face recognition
• Object segmentation/detection.
• Structure from motion: the goal of this step is to estimate the 3D depths of points from the image sequence.
• 3D models for faces: a 3D model is used to match frontal views of the face.
• Non-rigid motion analysis.
Recent approaches
Most video-based face recognition systems have three modules: detection, tracking, and recognition.
• An access control system using a Radial Basis Function (RBF) network was proposed in 1997.
• A generic approach based on posterior estimation using sequential Monte Carlo methods was proposed in 2000.
• A scheme based on streaming face recognition (SFR).
The Streaming Face Recognition (SFR) scheme
Combines several decision rules, such as Discrete Hidden Markov Models (DHMM) and Continuous-Density HMM (CDHMM).
The test achieved a 99% correct recognition rate in the intelligent room.
Comparison
The two most representative and important protocols for face recognition evaluation:
The FERET protocol (1994).
• Consists of 14,126 images of 1,199 individuals.
• Three evaluation tests were administered, in 1994, 1996, and 1997.
The XM2VTS protocol (1999).
• An expansion of the previous M2VTS program (5 shots of each of 37 subjects).
• Now consists of 295 subjects.
• The results of M2VTS/XM2VTS can be used in a wide range of applications.
The 1996/1997 FERET evaluations compared ten algorithms.
Conclusion
• Face recognition has many potential applications.
• For many years it was not very successful; we need to improve its accuracy so that it can match mature technologies such as fingerprint recognition, voice recognition, and so on.