IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 15, NO. 9, SEPTEMBER 2006
I. INTRODUCTION
Over the last decade, human face recognition has attracted significant attention because of its wide range of applications [1]–[3], such as criminal identification, credit card verification, security systems, scene surveillance, and entertainment. In these applications, face recognition techniques are applied to source formats ranging from static, controlled photographs to uncontrolled video sequences produced under different conditions. Therefore, a practical face recognition technique needs to be robust to the image variations caused by the illumination conditions [4]–[8],
Manuscript received June 22, 2005; revised December 8, 2005. This work was
supported by The Hong Kong Polytechnic University under a research grant.
The associate editor coordinating the review of this manuscript and approving
it for publication was Dr. Ron Kimmel.
The authors are with the Centre for Multimedia Signal Processing, Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hong Kong (e-mail: enxdxie@eie.polyu.edu.hk; enkmlam@polyu.edu.hk).
Digital Object Identifier 10.1109/TIP.2006.877435
$$\psi(x, y;\, \omega, \theta) = \frac{1}{2\pi\sigma^{2}}\, \exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)\left[\exp\!\big(i\omega(x\cos\theta + y\sin\theta)\big) - \exp\!\left(-\frac{\omega^{2}\sigma^{2}}{2}\right)\right] \qquad (1)$$

where $(x, y)$ denotes the pixel position in the spatial domain, $\omega$ is the radial center frequency of the complex exponential, $\theta$ is the orientation of the Gabor wavelet, and $\sigma$ is the standard deviation of the Gaussian function. The value of $\sigma$ can be derived as follows [49]:

$$\sigma = \frac{\kappa}{\omega} \qquad (2)$$

where $\kappa = \sqrt{2\ln 2}\cdot\frac{2^{\phi}+1}{2^{\phi}-1}$, and $\phi$ is the bandwidth in octaves. By selecting different center frequencies and orientations, we can obtain a family of Gabor kernels from (1), which can be used to extract features from an image. Given a gray-level image $I(x, y)$, the convolution of $I(x, y)$ and $\psi(x, y;\, \omega, \theta)$ is given as follows:

$$O_{\omega,\theta}(x, y) = I(x, y) * \psi(x, y;\, \omega, \theta) \qquad (3)$$

where $*$ denotes the convolution operator. The convolution can be computed efficiently by performing the fast Fourier transform (FFT), then point-by-point multiplication, and, finally, the inverse fast Fourier transform (IFFT). Concatenating the convolution outputs, we can produce a one-dimensional Gabor representation of the input image, denoted as follows:

$$O_{\omega,\theta} = \big[O_{\omega,\theta}(1,1),\; O_{\omega,\theta}(1,2),\; \ldots,\; O_{\omega,\theta}(M, N)\big]^{T} \qquad (4)$$

where $T$ represents the transpose operation, and $M$ and $N$ are the numbers of columns and rows in an image.

XIE AND LAM: GABOR-BASED KERNEL PCA WITH DOUBLY NONLINEAR MAPPING FOR FACE RECOGNITION WITH A SINGLE FACE IMAGE

In this paper, we consider only the magnitude of the output of the Gabor representations, which can provide a measure of the local properties of an image [41] and is less sensitive to the lighting conditions (for convenience, we also denote this magnitude as $O_{\omega,\theta}$). $O_{\omega,\theta}$ is normalized to have zero mean and unit variance, and the Gabor representations with different $\omega$ and $\theta$ are then concatenated to form a high-dimensional vector for face recognition as follows:

$$\mathbf{X} = \big[O_{\omega_{1},\theta_{1}}^{T},\; O_{\omega_{1},\theta_{2}}^{T},\; \ldots,\; O_{\omega_{U},\theta_{V}}^{T}\big]^{T} \qquad (5)$$

where $U$ and $V$ are the numbers of center frequencies and orientations used for the Gabor wavelets.
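The filtering pipeline of (1)–(5) can be sketched in code as follows. NumPy is assumed, and the kernel size, bandwidth, test image, and the single center frequency are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def gabor_kernel(omega, theta, phi=1.0, size=31):
    """Gabor wavelet of (1), with sigma derived from the bandwidth phi as in (2)."""
    kappa = np.sqrt(2 * np.log(2)) * (2**phi + 1) / (2**phi - 1)
    sigma = kappa / omega
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    gauss = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    carrier = np.exp(1j * omega * (x * np.cos(theta) + y * np.sin(theta)))
    dc = np.exp(-(omega**2) * (sigma**2) / 2)  # subtracted term removes the DC response
    return gauss * (carrier - dc)

def gabor_features(image, omega=np.pi / 2, n_orient=8):
    """Magnitude responses for one frequency and n_orient orientations, each
    normalized to zero mean / unit variance, then concatenated as in (5)."""
    F = np.fft.fft2(image)
    feats = []
    for k in range(n_orient):
        psi = gabor_kernel(omega, k * np.pi / n_orient)
        psi_pad = np.zeros_like(image, dtype=complex)
        s = psi.shape[0]
        psi_pad[:s, :s] = psi
        # FFT-based convolution of (3): FFT, point-by-point product, IFFT
        out = np.abs(np.fft.ifft2(F * np.fft.fft2(psi_pad)))
        out = (out - out.mean()) / (out.std() + 1e-12)
        feats.append(out.ravel())
    return np.concatenate(feats)

rng = np.random.default_rng(0)
img = rng.random((64, 64))      # stand-in for a cropped face image
X = gabor_features(img)         # concatenated Gabor feature vector
```

The FFT route costs O(MN log MN) per filter instead of the O(MN s^2) of direct convolution, which is why the text recommends it.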
B. Principal Component Analysis
PCA is a classical method that has been widely used for human face representation and recognition. The major idea of PCA is to decompose a data space into a linear combination of a small collection of bases, which are pairwise orthogonal and which capture the directions of maximum variance in the training set. Suppose there is a set of centered $n$-dimensional training samples $\mathbf{x}_i$, $i = 1, 2, \ldots, L$, such that $\mathbf{x}_i \in \mathbb{R}^{n}$ and $\sum_{i=1}^{L}\mathbf{x}_i = \mathbf{0}$. The covariance matrix of the input can be estimated as follows:

$$\mathbf{C} = \frac{1}{L}\sum_{i=1}^{L} \mathbf{x}_i \mathbf{x}_i^{T} \qquad (6)$$
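A minimal numerical sketch of (6) and the resulting PCA basis (the sample sizes and data here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(size=(50, 10))     # L = 50 samples, n = 10 dimensions
samples -= samples.mean(axis=0)         # center so that sum_i x_i = 0

# Covariance matrix of (6): C = (1/L) * sum_i x_i x_i^T
C = samples.T @ samples / samples.shape[0]

# Orthonormal eigenvectors of C, sorted by decreasing eigenvalue, form the
# PCA basis capturing the directions of maximum variance in the training set.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

projection = samples @ eigvecs[:, :3]   # project onto the top-3 basis vectors
```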
C. Kernel PCA

By Cover's theorem, nonlinearly separable patterns in an input space will become linearly separable with high probability if the input space is transformed nonlinearly into a high-dimensional feature space. We can, therefore, map an input variable into a high-dimensional feature space, and then perform PCA. For a given nonlinear mapping $\Phi$, the input data space $\mathbb{R}^{n}$ is mapped into the feature space $F$, and the inner products in $F$ can be computed by a kernel function in the input space:

$$K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j) = \Phi(\mathbf{x}_i)^{T}\Phi(\mathbf{x}_j) \qquad (10)$$

The orthonormal eigenvectors $\boldsymbol{\alpha}_1, \boldsymbol{\alpha}_2, \ldots, \boldsymbol{\alpha}_m$ of $\mathbf{K}$ corresponding to the $m$ largest positive eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m$ are computed. Then, the corresponding eigenvectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_m$ for the KPCA can be derived as follows [33], [39]:

$$\mathbf{v}_j = \frac{1}{\sqrt{\lambda_j}}\,\mathbf{Q}\boldsymbol{\alpha}_j \qquad (11)$$

where $\mathbf{Q} = [\Phi(\mathbf{x}_1), \Phi(\mathbf{x}_2), \ldots, \Phi(\mathbf{x}_L)]$ is the mapped data matrix in the high-dimensional feature space. For a mapped test sample $\Phi(\mathbf{x})$, it should be projected onto the eigenvector system $\mathbf{V} = [\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_m]$, and the projection vector $\mathbf{y}$ of $\Phi(\mathbf{x})$ in the transformed subspace is computed by

$$\mathbf{y} = \mathbf{V}^{T}\Phi(\mathbf{x}) \qquad (12)$$

Specifically, the $j$th component $y_j$ is given as follows:

$$y_j = \mathbf{v}_j^{T}\Phi(\mathbf{x}) = \frac{1}{\sqrt{\lambda_j}}\sum_{i=1}^{L}\alpha_{ji}\, k(\mathbf{x}_i, \mathbf{x}) \qquad (13)$$
2484
Therefore, from (10) and (13), we can see that the explicit mapping process is not required, and that all the procedures are
performed in the low-dimensional input space instead of the
high-dimensional feature space.
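The procedure of (10)–(13) can be sketched as follows; the Gaussian kernel, the data sizes, and the omission of the Gram-matrix centering step are simplifying assumptions made for brevity:

```python
import numpy as np

def kpca_fit(X, kernel, m):
    """Eigen-decompose the Gram matrix K of (10); keep the m largest components."""
    K = kernel(X, X)
    lam, alpha = np.linalg.eigh(K)
    order = np.argsort(lam)[::-1][:m]            # m largest positive eigenvalues
    return lam[order], alpha[:, order]

def kpca_project(X_train, x, kernel, lam, alpha):
    """Projection of a test sample via (13):
    y_j = (1 / sqrt(lam_j)) * sum_i alpha_ji * k(x_i, x)."""
    k_vec = kernel(X_train, x[None, :]).ravel()  # k(x_i, x) for all training samples
    return (alpha.T @ k_vec) / np.sqrt(lam)

# Gaussian (RBF) kernel evaluated pairwise between two sample sets
rbf = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / 2.0)

rng = np.random.default_rng(2)
X_train = rng.normal(size=(20, 5))
lam, alpha = kpca_fit(X_train, rbf, m=4)
y = kpca_project(X_train, X_train[0], rbf, lam, alpha)
```

As the text notes, only kernel evaluations in the input space are needed; the mapping into the feature space never has to be carried out explicitly.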
In practical face recognition applications, three classes of kernel functions have been widely used: the polynomial kernels, the Gaussian kernels, and the sigmoid kernels [32], respectively

$$k(\mathbf{x}, \mathbf{y}) = (\mathbf{x}^{T}\mathbf{y})^{d} \qquad (14)$$

$$k(\mathbf{x}, \mathbf{y}) = \exp\!\left(-\frac{\|\mathbf{x}-\mathbf{y}\|^{2}}{2\sigma^{2}}\right) \qquad (15)$$

and

$$k(\mathbf{x}, \mathbf{y}) = \tanh\!\big(\kappa(\mathbf{x}^{T}\mathbf{y}) + \vartheta\big) \qquad (16)$$

where $d$ is a positive integer, $\sigma > 0$, $\kappa > 0$, and $\vartheta < 0$. In [35], the polynomial kernels are extended to include fractional power polynomial (FPP) models, i.e., $k(\mathbf{x}, \mathbf{y}) = (\mathbf{x}^{T}\mathbf{y})^{d}$ with $0 < d < 1$, where a more reliable performance can be achieved.
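The kernels of (14)–(16) and an FPP-style kernel can be written directly; the sign handling for negative inner products in the FPP sketch is an illustrative choice, not taken from [35]:

```python
import numpy as np

def poly_kernel(x, y, d=2):            # (14): d a positive integer
    return (x @ y) ** d

def gaussian_kernel(x, y, sigma=1.0):  # (15): sigma > 0
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def sigmoid_kernel(x, y, kappa=0.1, theta=-1.0):  # (16): kappa > 0, theta < 0
    return np.tanh(kappa * (x @ y) + theta)

def fpp_kernel(x, y, d=0.8):           # FPP-style model: fractional power, 0 < d < 1
    s = x @ y
    # keep the sign so the fractional power stays defined for negative products
    return np.sign(s) * np.abs(s) ** d

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, 1.0, -1.0])
```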
III. DOUBLY NONLINEAR MAPPING KERNEL PCA
Although we do not need to perform the mapping explicitly in KPCA, and all the computations are implemented in the
input space instead of the high-dimensional feature space, it is
still meaningful to investigate how to design a good mapping
that has an explicit physical meaning and is suitable for pattern recognition applications, such as face recognition. In this
section, we will propose a novel KPCA with doubly nonlinear
mapping, which considers not only the statistical property of
the input Gabor features, but also the spatial information about
human faces.
As discussed in Section II-A, the Gabor feature vector of an input image can be represented by a high-dimensional vector $\mathbf{X}$. $\mathbf{X}$ is a concatenation of the Gabor representations $O_{\omega,\theta}$, i.e., $\mathbf{X} = [O_{\omega_1,\theta_1}^{T}, \ldots, O_{\omega_U,\theta_V}^{T}]^{T}$, which is normalized to have zero mean and unit variance. On the one hand, for each $O_{\omega,\theta}$, we can group the output values $O_{\omega,\theta}(x, y)$ together to form a histogram, which represents a probability distribution of $O_{\omega,\theta}$, i.e., different values of $O_{\omega,\theta}(x, y)$ have different statistical probabilities. We should note that this distribution does not consider the spatial information, i.e., the pixel position $(x, y)$, of the input, but only the magnitude of $O_{\omega,\theta}(x, y)$. A normal distribution is adopted to estimate the probability density function (pdf) of $O_{\omega,\theta}$ in our algorithm (this estimation will be verified according to the experiments in Section IV-F). On the other hand, the output $O_{\omega,\theta}(x, y)$ is related to the pixel position $(x, y)$, which has different spatial importance in a face image; hence, the elements from different pixels have varying spatial importance. Because we do not consider the effect of the center frequencies and orientations of the Gabor filters for feature transformation, $O_{\omega,\theta}$ is simply denoted as $O$ in the following sections, which does not affect the analysis.
From (14) to (16), we can see that whichever kernel function is used, the input feature vector $\mathbf{X}$ is considered holistically. In other words, each element of $\mathbf{X}$ is treated equally and plays the same role. However, as discussed above, due to the uneven statistical probability of $O$ and the different spatial importance of each pixel position in a face image, the elements with different values and spatial locations should be assigned different weights for discrimination. An output feature with a higher probability should provide more discriminant information for recognition, and the elements derived from the important facial features, such as the eyes, mouth, and nose, should also be emphasized. Therefore, a nonlinear mapping is devised to emphasize those features that have both higher statistical probabilities and spatial importance:

$$\tilde{\mathbf{X}} = f(\mathbf{X}, \mathbf{m}) \qquad (17)$$
where $\mathbf{m}$ is the eigenmask [51], [52], which is used to represent the importance of different facial positions. The eigenmask is a modification of the first eigenface $\mathbf{e}_1$ derived from a set of training images, which is shown as follows:

$$m(x, y) = \frac{|e_1(x, y)| - e_{\min}}{e_{\max} - e_{\min}} \qquad (18)$$

where $e_{\max}$ and $e_{\min}$ denote the maximum and minimum magnitudes of the eigenface image, respectively, so that the eigenmask is normalized to a range between 0 and 1. The one used in our method is shown in Fig. 1.
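Building an eigenmask-like weighting map from the first eigenface can be sketched as below; the random stand-in training images and the rescaling of (18) as reconstructed here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
faces = rng.random((6, 16 * 16))             # 6 tiny stand-in training images
faces_centered = faces - faces.mean(axis=0)

# First eigenface: leading eigenvector of the sample covariance matrix.
C = faces_centered.T @ faces_centered / faces.shape[0]
eigvals, eigvecs = np.linalg.eigh(C)
e1 = eigvecs[:, -1].reshape(16, 16)          # eigenvector of the largest eigenvalue

# Eigenmask: magnitude of the first eigenface rescaled into [0, 1], so pixels
# with large eigenface magnitude receive weights near 1.
mag = np.abs(e1)
mask = (mag - mag.min()) / (mag.max() - mag.min())
```

With real face images, the large-magnitude pixels of the first eigenface cluster around the eyes, nose, and mouth, which is what makes the mask a spatial-importance weighting.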
This nonlinear mapping $f$ operates in the original input space, and $f(\mathbf{X}, \mathbf{m})$ has the same dimension as $\mathbf{X}$. For each element $O(x, y)$, there is a corresponding element $\tilde{O}(x, y)$ in the transformed feature space. The statistical probability of $O(x, y)$ is determined by the value of $O(x, y)$, and the spatial importance of $O(x, y)$ is determined by the value of the eigenmask at the same pixel position, i.e., $m(x, y)$. In other words, the mapped value $\tilde{O}(x, y)$ is determined by its Gabor representation value and the corresponding eigenmask value. Therefore, we have

$$\tilde{O}(x, y) = f\big(O(x, y),\, m(x, y)\big) \qquad (19)$$
Suppose that the statistical property of the Gabor representation and the spatial information about faces are complementary to each other; the mapping can then be written in the separable form

$$f\big(O(x, y),\, m(x, y)\big) = f_1\big(O(x, y)\big)\, f_2\big(m(x, y)\big) \qquad (20)$$

where $f_1$ depends only on the Gabor representation value and $f_2$ depends only on the eigenmask value, because the eigenmask is supposed to be a fixed structure.

First, we consider the characteristics of $f_1$, which nonlinearly maps the Gabor representations of a face image according to their statistical properties. After this mapping, the features that have higher probabilities should contribute more to discrimination. The pdf of an input feature $x$ is modeled as

$$p(x) = N(0, \sigma) \qquad (21)$$

where $N(0, \sigma)$ is a normal distribution with zero mean and a variance of $\sigma$. For a small variation $\Delta x$, the corresponding variation of the mapped value is

$$\Delta f_1(x) = f_1(x + \Delta x) - f_1(x) \approx \frac{d f_1(x)}{d x}\,\Delta x \qquad (22)$$

Because $p(x) = N(0, \sigma)$, the nearer the value of $x$ to the mean, or to zero for demeaned vectors, the more likely it is that the element is the expected pattern, and the more important its role will be for recognition. Therefore, the mapping function $f_1$ should satisfy the following condition.

1) Condition 1:

$$\frac{d f_1(x)}{d x} = c\, p(x) \qquad (23)$$

where $c$ is a positive constant. By (22) and (23), we have

$$\Delta f_1(x) \approx c\, p(x)\, \Delta x \qquad (24)$$

which implies that after the nonlinear mapping $f_1$, the feature variation $\Delta x$ of an element with a higher statistical probability is given a larger weight, and so plays a more important role in discrimination, and vice versa. Integrating (23) gives

$$f_1(x) = c\int_{0}^{x} p(t)\, dt \qquad (25)$$

With (21), we have

$$f_1(x) = c\int_{0}^{x} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{t^{2}}{2\sigma^{2}}}\, dt \qquad (26)$$

which can be written as

$$f_1(x) = c_1\, \mathrm{erf}\!\left(\frac{x}{\sqrt{2}\,\sigma}\right) \qquad (27)$$

where $c_1$ is a constant and $\mathrm{erf}(\cdot)$ is the error function, which is the integration of the normal distribution [53] and is defined as follows:

$$\mathrm{erf}(u) = \frac{2}{\sqrt{\pi}}\int_{0}^{u} e^{-t^{2}}\, dt \qquad (28)$$
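The erf-based mapping of (27) can be sketched with the standard library alone; the comparison below shows the slope behavior required by Condition 1 (the values chosen for the parameters are illustrative):

```python
import math

def f1(x, a=1.0, c=1.0):
    """Error-function mapping of (27): its slope is proportional to the assumed
    zero-mean normal pdf (Condition 1), so high-probability values near zero
    receive larger weight differences than rare tail values."""
    return c * math.erf(x / (math.sqrt(2) * a))

# Variation of the mapped value near the mode vs. in the tail, for the same delta x:
delta_center = f1(0.1) - f1(0.0)   # steep region around the mean
delta_tail = f1(3.1) - f1(3.0)     # nearly flat region in the tail
```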
Second, the nonlinear mapping $f_2$ is based on the spatial information about human faces. The spatial information is represented by an eigenmask. The higher the magnitude of an element in the eigenmask, the more important the feature point it represents. Hence, $f_2$ should satisfy the following condition.

2) Condition 2: $f_2(m)$ and its derivative function $f_2'(m)$ are monotonically increasing functions.

In Condition 2, $f_2$ is monotonically increasing so that those pixels belonging to the important facial features will be emphasized. Furthermore, the increase of $f_2$ is nonlinear and at a higher rate when $m$ has a higher value. Therefore,

$$f_2(m) = m^{b} \qquad (29)$$

where $b \ge 1$. Considering (19), (20), (27), and (29), we have the doubly nonlinear mapping function as follows:

$$f(x, m) = m^{b}\, \mathrm{erf}\!\left(\frac{x}{\sqrt{2}\,a}\right) \qquad (30)$$

where $a > 0$ and $b \ge 1$. Fig. 2(a) and (b) illustrates the graphs of $f_1$ and $f_2$ with different values of $a$ and $b$, respectively.
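Combining the two component mappings as in (30) can be sketched as follows; the parameter values, and the exact product form itself, are reconstructions, so treat this as a sketch rather than the authors' definitive formula:

```python
import math

def doubly_nonlinear_map(x, m, a=1.0, b=4.0):
    """Sketch of (30): eigenmask weight m**b (Condition 2) times the
    erf-based statistical mapping (Condition 1). a, b are illustrative."""
    return (m ** b) * math.erf(x / (math.sqrt(2) * a))

# Same Gabor value, different spatial importance from the eigenmask:
important = doubly_nonlinear_map(0.5, m=0.9)    # e.g., an eye-region pixel
unimportant = doubly_nonlinear_map(0.5, m=0.2)  # e.g., a cheek pixel
```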
After this nonlinear mapping, an element of $\mathbf{X}$ which has a higher statistical probability and spatial importance can play a more important role in face recognition. This mapping takes
TABLE I
TEST DATABASES USED IN THE EXPERIMENTS
Fig. 3. Some cropped faces used in our experiments: (a) Images from the Yale
database; (b) images from the AR database; (c) images from the ORL database;
and (d) images from the YaleB database.
place in the input space and does not increase the data dimensionality. Combined with the conventional KPCA, we can obtain a novel doubly nonlinear mapping KPCA. This process is equivalent to performing two nonlinear mappings (the first nonlinear mapping is $f$, as shown in (17), followed by the nonlinear mapping $\Phi$, as shown in (8)) on an input feature to a high-dimensional feature space, and then performing PCA for recognition (certainly, the second mapping $\Phi$ is not explicitly processed, and all procedures are implemented in the original space, as discussed in Section II-C). Combining (8) and (17), the doubly nonlinear mapping KPCA defines a nonlinear mapping as follows:

$$\big(\Phi \circ f\big)(\mathbf{X}) = \Phi\big(f(\mathbf{X}, \mathbf{m})\big) \qquad (31)$$
and PCA is performed in the mapped feature space for recognition.
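Putting the two mappings of (31) together: the first mapping is explicit, the second stays inside the kernel. A sketch with stand-in data, an illustrative polynomial kernel, and assumed parameter values $a = 1$, $b = 4$:

```python
import numpy as np
from math import erf, sqrt

def f_map(X, mask, a=1.0, b=4.0):
    """First mapping of (31): elementwise doubly nonlinear transform
    (product-form sketch with illustrative a, b)."""
    erf_v = np.vectorize(lambda t: erf(t / (sqrt(2) * a)))
    return (mask ** b) * erf_v(X)

def dkpca_gram(X, mask, kernel):
    """Second mapping of (31) stays implicit: the Gram matrix is computed
    on the transformed features, exactly as in conventional KPCA."""
    Z = f_map(X, mask)
    return kernel(Z, Z)

poly = lambda A, B: (A @ B.T) ** 2        # polynomial kernel (14), d = 2

rng = np.random.default_rng(4)
X = rng.normal(size=(10, 32))             # 10 samples of 32-dim Gabor features
mask = rng.random(32)                     # stand-in eigenmask weights in [0, 1]
K = dkpca_gram(X, mask, poly)             # Gram matrix for the KPCA step
```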
IV. EXPERIMENTAL RESULTS
In this section, we will evaluate the performances of the proposed doubly nonlinear mapping KPCA for face recognition
based on different face databases. The databases used include
the Yale database, the AR database, the ORL database, and the YaleB database.
TABLE II
SUBCLASSES OF THE TEST DATABASES USED IN THE EXPERIMENTS
(32)

where $\mathrm{sgn}(\cdot)$ is a signum function. As discussed in [35], a PCA classifier will perform better when the Mahalanobis distance is used. Therefore, in our experiments, the Mahalanobis distance is also employed as the distance measure. Considering the computational complexity of using Gabor wavelets, the Gabor filters used include one center frequency only, and eight orientations from 0 to $7\pi/8$ in increments of $\pi/8$.
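The Mahalanobis distance in the projected subspace can be sketched as below; the diagonal form, with the covariance given by the subspace eigenvalues, is one common simplification and an assumption here:

```python
import numpy as np

def mahalanobis_diag(y1, y2, eigvals):
    """Mahalanobis distance between two projection vectors when the subspace
    covariance is diagonal (the PCA/KPCA eigenvalues): high-variance axes
    contribute less per unit of difference than low-variance axes."""
    d = y1 - y2
    return float(np.sqrt(np.sum(d * d / eigvals)))

eigvals = np.array([4.0, 1.0])   # variances along the two subspace axes
a = np.array([2.0, 0.0])
b = np.array([0.0, 1.0])
dist = mahalanobis_diag(a, b, eigvals)
```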
A. Determination of Parameters for the Nonlinear Mapping
Functions
In (30), two parameters, $a$ and $b$, are involved in the mapping function. From Fig. 2, we can see that nonlinear mapping functions with different parameter values have different properties. Therefore, proper values for the parameters have to be determined so as to obtain an optimal result. In our experiments, we use the Yale database for training and for determining the optimal values of $a$ and $b$, and the resulting mapping functions $f_1$ and $f_2$ are then evaluated using the other databases. To obtain the optimal values, different values of $a$ and $b$ are tested, and DKPCA is then employed for face recognition. If only $f_1$ is considered, the value of $b$ in (30) is set at 1; and if only $f_2$ is used, (30) is changed to the following form:

$$f(x, m) = m^{b}\, x \qquad (33)$$

Experimental results are shown in Fig. 4, where we can see that the best performance is achieved when the values of $a$ and $b$ lie within [1.0, 2.5] and [3, 6] when only $f_1$ and only $f_2$, respectively, are employed in face recognition. When we consider these two parameters at the same time, i.e., (30) is used for nonlinear feature transformation, experimental results show that the best performance is obtained with $a$ and $b$ chosen within these respective ranges. This result coincides with the discussion in Section III. With Condition 1, when the assumed pdf $N(0, a)$ matches the real distribution of the input features, the optimal transformation for face recognition can be achieved. Considering that the input features are normalized to have zero mean and unit variance, the value of the parameter $a$ should be close to 1.
TABLE III
FACE RECOGNITION RESULTS UNDER NORMAL CONDITIONS
TABLE IV
FACE RECOGNITION RESULTS UNDER VARYING LIGHTING CONDITIONS
The YaleB database, which consists of ten people, with each person having 65 images under different lighting conditions, is often used to investigate the effect of lighting on face recognition. In this part of the experiments, we select only those images with obviously uneven illumination as the testing images. In other words, only those images with azimuth angles or elevation angles of lighting larger than 20° are considered. Consequently, for each subject in the YaleB
database, only 48 different illumination models are chosen for
testing. The experimental results based on the images under
varying lighting from different databases are shown in Table IV.
The performance of PCA degrades significantly compared to
the results based on the normal faces. PCA represents faces with
their principal components, but the variations between the images of the same face due to illumination are almost always
larger than image variations due to change in face identity [5].
Hence, PCA cannot represent and discriminate a face under severely uneven lighting conditions.
Compared to the PCA method, the Gabor wavelets can greatly improve the recognition performance on the different databases. This shows that the Gabor wavelet representations can effectively reduce the effect of varying illumination. For the Yale database and the YaleB database,
KPCA with FPP outperforms the conventional Gabor-based
KPCA but the latter can achieve a better performance for the
AR database.
In most cases, our doubly nonlinear mapping KPCA outperforms the other algorithms, except for the YaleB database, where the KPCA with FPP performs better than our method when only $f_2$ is used as the mapping function. This is due to the fact that $f_2$ is derived from the eigenmask, which emphasizes the important features in a human face under normal conditions. With severe illumination variations, such as those shown in Fig. 3(d), the Gabor features extracted from some feature points may not be reliable for face recognition. Emphasizing these features may result in adverse performance. However, when combined with $f_1$, the spatial information can still provide additional and useful information and improve the recognition rate. As with the results in Section IV-B, DKPCA with $f_1$ outperforms DKPCA with $f_2$, and DKPCA with $f$ achieves the best performance.
TABLE V
FACE RECOGNITION RESULTS WITH DIFFERENT FACIAL EXPRESSIONS
TABLE VI
FACE RECOGNITION RESULTS UNDER VARIOUS PERSPECTIVES
TABLE VII
FACE RECOGNITION RESULTS BASED ON DIFFERENT DATABASES
faces either rotated out of the image plane, e.g., looking to the
right, left, up, and down, or rotated in the image plane, clockwise or anti-clockwise. The experimental results are tabulated in
Table VI and show that none of these face recognition methods
can achieve a satisfactory performance under perspective variations. Nevertheless, the doubly nonlinear mapping KPCA still
outperforms other methods.
E. Face Recognition With Different Databases
We have evaluated and discussed the effect of different
conditions on the different face recognition methods. In this
section, we also show the respective performances of the
different face recognition methods based on the different
databases without dividing them into subdatabases. The recognition results are tabulated in Table VII, which also show that
the proposed doubly nonlinear mapping Gabor-based KPCA
outperforms all the other methods on these databases. In addition, our method using either $f_1$ or $f_2$ alone also outperforms the conventional Gabor-based methods in most of the cases. With these four databases, the recognition rate for the ORL database is always the lowest, irrespective of which method is used, because most of the faces in this database are under perspective variations, as discussed in Section IV-D.
F. Face Recognition With Empirical Modeling of the Feature
Distribution
In Condition 1 (23), we assume that the probability density function (pdf) of an input feature, $x$, is approximated by a normal distribution with zero mean and unit variance. Experimental results in Section IV-A also show that the best performance is achieved when the parameter $a$ (the variance of the normal distribution) is set at 1.0. In this section, we will discuss how well this assumption fits the real case, and whether
there are other pdfs that can be used to build a more reliable mapping function.
We combine the four training sets together to form a new database, which has a total of 186 training images. All the images have a frontal view, with normal illumination and neutral expressions. Gabor wavelets are used to extract the input features, which are then normalized to have zero mean and unit variance.
Then, the pdf of $x$ is represented by a series of discrete values, which can be computed by

$$\hat{p}(k\Delta) = \frac{1}{S}\sum_{i=1}^{S} \delta_i(k\Delta) \qquad (34)$$

where

$$\delta_i(k\Delta) = \begin{cases} 1, & \text{if } x_i \in [k\Delta,\, (k+1)\Delta) \\ 0, & \text{otherwise} \end{cases}$$

$S$ is the total number of input features, and $\Delta$ is the interval, which is set at 0.1 in our experiment. Considering (23) and (34), we have

$$\hat{f}_1(k\Delta) = c\sum_{j \le k} \hat{p}(j\Delta)\,\Delta + c_0 \qquad (35)$$

where $c_0$ is a constant used to center the mapped data. $\hat{f}_1$ is represented as a sequence of discrete values; for an input $x$ with $k\Delta \le x < (k+1)\Delta$, the value of $\hat{f}_1(x)$ is computed by a linear interpolation method. Fig. 5 shows the graphs of $\hat{p}$ and $\hat{f}_1$.
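The discrete pdf of (34) and the cumulative, centered, interpolated mapping built from it can be sketched as follows; the synthetic features and the bin range are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
features = rng.normal(size=20000)       # stand-in for normalized Gabor features

# Discrete pdf as in (34): fraction of features in each interval of width 0.1
delta = 0.1
edges = np.arange(-4.0, 4.0 + delta, delta)
counts, _ = np.histogram(features, bins=edges)
p_hat = counts / features.size

# Empirical mapping as in (35): cumulative sum of the pdf (a discrete version
# of Condition 1, slope proportional to p_hat), centered and then interpolated.
f_vals = np.cumsum(p_hat)
f_vals -= f_vals.mean()                 # center the mapped data
centers = (edges[:-1] + edges[1:]) / 2

def f1_empirical(x):
    """Linear interpolation between the discrete values of the mapping."""
    return float(np.interp(x, centers, f_vals))
```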
From Fig. 5(a), we can see that the real distribution of $x$ is close to a normal distribution; however, it is not symmetrical. Comparing Figs. 5(b) and 2(a), the graphs are also similar. We substitute this estimated $\hat{f}_1$ for $f_1$ in (20), and then repeat the procedures in Section IV-E. The experimental results are tabulated in Table VIII. We can see that the recognition rates are only slightly better than the results shown in Table VII. For simplicity, therefore, we can still use a normal distribution to estimate the pdf of the input, which also achieves a satisfactory result.
V. CONCLUSION
In this paper, we have proposed a novel doubly nonlinear
mapping Gabor-based KPCA for human face recognition. In our
approach, the Gabor wavelets are used to extract facial features,
then a doubly nonlinear mapping KPCA is proposed to perform
feature transformation and face recognition. Compared with the conventional KPCA, an additional nonlinear mapping is performed in the original space. Our new nonlinear mapping not
only considers the statistical property of the input features, but
also adopts an eigenmask to emphasize those features derived
from the important facial feature points. Therefore, after the
mappings, the transformed features have a higher discriminant
power, and the importance of the features adapts to the spatial
importance of the face image.
This paper has also evaluated the performances of the different face recognition algorithms in terms of changes in facial expressions, uneven illuminations, and perspective variations. Experiments were conducted based on different databases
TABLE VIII
FACE RECOGNITION RESULTS BASED ON DIFFERENT DATABASES
and show that our algorithm consistently outperforms the other algorithms, such as PCA, Gabor wavelets plus PCA, and Gabor wavelets plus kernel PCA with FPP models, under different conditions.
Furthermore, only one image per person is used for training in our experiments, which makes our method useful for practical face recognition applications. With our approach, the recognition rates
based on the Yale database, the AR database, the ORL database
and the YaleB database are 94.7%, 98.8%, 82.8%, and 98.8%,
respectively.
REFERENCES
[1] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, "Face recognition: A literature survey," ACM Comput. Surv., vol. 35, no. 4, pp. 399–458, 2003.
[2] R. Chellappa, C. L. Wilson, and S. Sirohey, "Human and machine recognition of faces: A survey," Proc. IEEE, vol. 83, no. 5, pp. 705–741, May 1995.
[3] A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 4–20, Jan. 2004.
[30] X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang, "Face recognition using Laplacianfaces," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 3, pp. 328–340, Mar. 2005.
[31] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed. New York: Prentice-Hall, 1999.
[32] B. Schölkopf and A. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. Cambridge, MA: MIT Press, 2002.
[33] B. Schölkopf, A. Smola, and K.-R. Müller, "Nonlinear component analysis as a kernel eigenvalue problem," Neural Comput., vol. 10, no. 5, pp. 1299–1319, 1998.
[34] B. Moghaddam, "Principal manifolds and probabilistic subspaces for visual recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 6, pp. 780–788, Jun. 2002.
[35] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 5, pp. 572–581, May 2004.
[36] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller, "Fisher discriminant analysis with kernels," in Proc. IEEE Int. Workshop Neural Networks for Signal Processing IX, Aug. 1999, pp. 41–48.
[37] G. Baudat and F. Anouar, "Generalized discriminant analysis using a kernel approach," Neural Comput., vol. 12, no. 10, pp. 2385–2404, 2000.
[38] Q. Liu, H. Lu, and S. Ma, "Improving kernel Fisher discriminant analysis for face recognition," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 42–49, Jan. 2004.
[39] J. Yang, A. F. Frangi, J. Yang, D. Zhang, and Z. Jin, "KPCA plus LDA: A complete kernel Fisher discriminant framework for feature extraction and recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 2, pp. 230–244, Feb. 2005.
[40] C. K. Chui, An Introduction to Wavelets. Boston, MA: Academic, 1992.
[41] M. Lades, J. C. Vorbruggen, J. Buhmann, J. Lange, C. v. d. Malsburg, R. P. Wurtz, and W. Konen, "Distortion invariant object recognition in the dynamic link architecture," IEEE Trans. Comput., vol. 42, no. 3, pp. 300–311, 1993.
[42] Yale Univ., New Haven, CT. [Online]. Available: http://cvc.yale.edu/projects/yalefaces/yalefaces.html
[43] A. M. Martinez and R. Benavente, "The AR face database," CVC Tech. Rep. #24, 1998.
[44] Olivetti Research Lab., Cambridge, U.K. [Online]. Available: http://www.uk.research.att.com/pub/data/att_faces.zip
[45] Yale Univ., New Haven, CT. [Online]. Available: http://cvc.yale.edu/projects/yalefacesB/yalefacesB.html
[46] B. S. Manjunath and W. Y. Ma, "Texture features for browsing and retrieval of image data," IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 8, pp. 837–842, Aug. 1996.
[47] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Trans. Image Process., vol. 11, no. 4, pp. 467–476, Apr. 2002.
[48] D. Liu, K. M. Lam, and L. S. Shen, "Optimal sampling of Gabor features for face recognition," Pattern Recognit. Lett., vol. 25, no. 2, pp. 267–276, 2004.
[49] T. S. Lee, "Image representation using 2D Gabor wavelets," IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 10, pp. 959–971, Oct. 1996.
[50] C. Nastar and N. Ayache, "Frequency-based nonrigid motion analysis," IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 11, pp. 1067–1079, Nov. 1996.
[51] K. W. Wong, K. M. Lam, and W. C. Siu, "A robust scheme for live detection of human faces in color images," Signal Process.: Image Commun., vol. 18, no. 2, pp. 103–114, 2003.
[52] K. H. Lin, K. M. Lam, and W. C. Siu, "Spatially eigen-weighted Hausdorff distances for human face recognition," Pattern Recognit., vol. 36, no. 8, pp. 1827–1834, 2003.
[53] M. Abramowitz and I. A. Stegun, Eds., "Error function and Fresnel integrals," in Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th ed. New York: Dover, 1972, ch. 7, pp. 297–309.
[54] K. W. Wong, K. M. Lam, and W. C. Siu, "An efficient algorithm for human face detection and facial feature extraction under different conditions," Pattern Recognit., vol. 34, no. 10, pp. 1993–2004, 2001.
[55] K. M. Lam and H. Yan, "Locating and extracting the eye in human face images," Pattern Recognit., vol. 29, no. 5, pp. 771–779, 1996.
Xudong Xie received the B.Eng. degree in electronic engineering and the M.Sc. degree in signal
and information processing from the Department of
Electrical Engineering, Tsinghua University, China,
in 1999 and 2002, respectively. He is currently
pursuing the Ph.D. degree in the Department of
Electronic and Information Engineering, The Hong
Kong Polytechnic University.
His research interests include image processing,
pattern recognition, and human face analysis.
Kin-Man Lam received the Associateship in Electronic Engineering with distinction from The Hong
Kong Polytechnic University (formerly called Hong
Kong Polytechnic) in 1986, the M.Sc. degree in
communication engineering from the Department
of Electrical Engineering, Imperial College of Science, Technology and Medicine, London, U.K., in
1987 (under the S. L. Poa Scholarship for overseas
studies), and the Ph.D. degree from the Department
of Electrical Engineering, University of Sydney,
Sydney, Australia, in August 1996.
From 1990 to 1993, he was a Lecturer at the Department of Electronic Engineering, The Hong Kong Polytechnic University. He joined the Department
of Electronic and Information Engineering, The Hong Kong Polytechnic University, as an Assistant Professor in October 1996, and has been an Associate
Professor since February 1999. He has been a member of the organizing committee and program committee of many international conferences. His current
research interests include human face recognition, image and video processing,
and computer vision.
Dr. Lam is the Vice Chairman of the IEEE Hong Kong Chapter of Signal
Processing and an Associate Editor of the journal Image and Video Processing.
He was a Guest Editor for the Special Issue on Biometric Signal Processing
of the EURASIP Journal on Applied Signal Processing. He was the Secretary
of the 2003 IEEE International Conference on Acoustics, Speech, and Signal
Processing (ICASSP'03), the Technical Chair of the 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing (ISIMP'04), and a Technical Co-Chair of the 2005 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS'05). He won an Australian Postgraduate Award for his studies and the IBM Australia Research Student Project Prize.