
Hybrid Features Based Face Recognition

Method Using Artificial Neural Network


Kolhandai Yesu, Himadri Jyoti Chakravorty, Prantik Bhuyan, Rifat Hussain, Kaustubh Bhattacharyya
Department of Electronics and Communication Engineering
The Assam Don Bosco University, Assam, India
Email: mariayesudass@gmail.com, pauljyoti04@gmail.com, prantik.bhuyan@rediffmail.com,
rifathussain33@gmail.com, kaustubh.d.electronics@gmail.com

Abstract—Face recognition is a biometric tool for
authentication and verification with both research
and practical relevance. A facial recognition based
verification system can further be deemed a computer
application for automatically identifying or verifying a
person in a digital image. Varied and innovative face
recognition systems have been developed thus far with
widely accepted algorithms. In this paper, we present
an intelligent hybrid features based face recognition
method that uses the central moments, eigenvectors
and standard deviations of the eyes, nose and
mouth segments of the human face as the decision
support entities of a Generalized Feed Forward
Artificial Neural Network (GFFANN). The proposed
method's correct recognition rate is over 95%.

Keywords — recognition; central moment; eigenvectors;
standard deviation; neural network; training; testing;
cosine transform
I. INTRODUCTION
Biometrics refers to the science of analyzing human
body characteristics for security purposes. The word
biometrics is derived from the Greek words bios
(life) and metrikos (measure) [1]. Biometric
identification has become more popular of late
owing to society's current security requirements
in fields such as information, business, the military
and e-commerce [2].
Face recognition is a nonintrusive method, and facial
images are the most common biometric
characteristic used by humans to make a personal
recognition. Human faces are complex objects with
features that can vary over time. However, we
humans have a natural ability to recognize faces and
identify a person in an instant. Of course,
our natural recognition ability extends beyond face
recognition too. Nevertheless, in the interaction
between humans and machines, commonly
known as the Human Robot Interface [3] or Human
Computer Interface (HCI), machines must be
trained to recognize, identify and differentiate
human faces. There is thus a need to simulate
recognition artificially in our attempts to create
intelligent autonomous machines.
The popular approaches to face recognition are
based either on the location and shape of facial
attributes such as the eyes, eyebrows, nose, lips and
chin, and their spatial relationships, or on an overall
analysis of the face image that represents a face as a
weighted combination of a number of canonical
faces.
The former approach is robust and efficient,
as the vital attributes of the face are considered in
training and testing, while the latter approach takes
into account the global information of the whole face.
A face recognition system recognizes an individual
by matching the input image against the images of all
users in a database and finding the best match.
Basically, any face recognition system can be
depicted by the block diagram in Figure 1.





Figure 1. Basic blocks of a face recognition system (Pre-processing Unit → Feature Extraction → Training and Testing).

1) Pre-processing Unit: In the initial phase, the
image captured in true colour format is converted
to a grayscale image, resized to a predefined
standard, and denoised. Further, Histogram
Equalization (HE) and the Discrete Wavelet Transform
(DWT) are carried out for illumination normalization
and expression normalization respectively [4].
2) Feature Extraction: In this phase, facial features
are extracted using Edge Detection Techniques,
Principal Component Analysis (PCA) Technique,
Discrete Cosine Transform (DCT) coefficients, DWT
coefficients or fusion of different techniques [5].
3) Training and Testing: Here, Euclidean Distance
(ED), Hamming Distance, Support Vector Machines
(SVM), Neural Networks [6] or Random Forests (RF)
[7] may be used for training, followed by testing
on new (test) images for recognition.
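
To make the pipeline concrete, here is a minimal sketch of stage 1, assuming OpenCV is available; the target size and median-filter denoising are illustrative choices, and the DWT-based expression normalization is omitted for brevity.

```python
# Minimal pre-processing sketch (assumes OpenCV): grayscale conversion,
# resizing, simple denoising and histogram equalization. The DWT-based
# expression normalization mentioned above is omitted for brevity.
import cv2

def preprocess(path, size=(128, 128)):            # size is illustrative
    img = cv2.imread(path)                        # true-colour input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to grayscale
    gray = cv2.resize(gray, size)                 # resize to a predefined standard
    gray = cv2.medianBlur(gray, 3)                # simple noise removal
    return cv2.equalizeHist(gray)                 # illumination normalization (HE)
```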
II. RELATED WORKS

The past few years have witnessed increased
interest in research aimed at developing reliable
face recognition techniques.
One commonly employed technique involves
representing the image as a vector in a space whose
dimensionality is of the order of the image size [8].
However, the large dimensionality of this space
reduces the speed and robustness of face recognition.
This problem is overcome rather effectively by
dimensionality reduction techniques such as
Principal Component Analysis (PCA) and Linear
Discriminant Analysis (LDA).
PCA is an eigenvector method designed to model
linear variation in high-dimensional data. PCA
performs dimensionality reduction by projecting the
original n-dimensional data onto a k (<< n)-
dimensional linear subspace spanned by the leading
eigenvectors of the data's covariance matrix [9].
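
As an illustration of this projection, here is a hedged numpy sketch of PCA over a matrix of flattened face images; the function name, shapes and choice of k are illustrative.

```python
# PCA sketch: project flattened images onto the k leading eigenvectors
# of the data's covariance matrix (dimensionality n -> k, k << n).
import numpy as np

def pca_project(X, k):
    """X: (num_samples, n) matrix of flattened images."""
    mean = X.mean(axis=0)
    Xc = X - mean                           # centre the data
    cov = np.cov(Xc, rowvar=False)          # n x n covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]       # re-sort high to low variance
    W = eigvecs[:, order[:k]]               # k leading eigenvectors
    return Xc @ W, W, mean                  # projections, basis, mean face
```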
LDA is a supervised learning algorithm. It seeks the
linear projection that maximizes the ratio of
between-class scatter to within-class scatter, so that,
in the projected space, images of the same subject
cluster together while images of different subjects
are well separated [10].
While PCA uses an orthogonal linear space for
encoding information, LDA encodes information using
a linearly separable space in which the bases are not
necessarily orthogonal. Experiments carried out by
researchers thus far point to the superiority of
algorithms based on LDA over those based on PCA.
Another face analysis technique is Locality
Preserving Projections (LPP). It consists in
obtaining a face subspace and finding the local
structure of the manifold. Basically, it is obtained by
finding the optimal linear approximations to the
eigenfunctions of the Laplace-Beltrami operator on
the manifold. Therefore, although it is a linear
technique, it recovers important aspects of the
intrinsic nonlinear manifold structure by preserving
local structure [11].
Ramesha and Raja proposed Dual Transform
based Feature Extraction for Face Recognition
(DTBFEFR), in which the Dual Tree Complex Wavelet
Transform (DT-CWT) is employed to form the
feature vector and Euclidean Distance (ED), Random
Forest (RF) and Support Vector Machine (SVM) are
used as the classifiers [12].
Weng and Huang presented a face recognition model
based on a hierarchical neural network which is grown
automatically and not trained with gradient descent.
Good results for the discrimination of ten distinctive
subjects are reported [13].
This paper presents a face recognition method
using both the geometrical features of the biometric
characteristics of the face, such as the eyes, nose and
mouth, and an overall analysis of the whole face.
After the pre-processing stage, segments of the eyes,
nose and mouth are extracted from the faces of the
database. These blocks are then resized and the
training features are computed. These facial features
reduce the dimensionality by gathering the essential
information while removing the redundancies present
in each segment. Besides, the global features of the
total image are also computed. These specially
designed features are then used as the decision support
entities of the classifier system configured using the
GFFANN, which provides a decision in the testing
phase with an accuracy of over 95%.
III. LOCAL AND GLOBAL FACE FEATURE
EXTRACTION WITH MARKED DIMENSIONALITY
REDUCTION

Local facial feature extraction consists in localizing
the most characteristic face components (eyes, nose,
mouth, etc.) within images that depict human faces.
The purpose of feature extraction is to extract the
feature vectors or information which represent the
face while reducing computation time and memory
storage.
Global feature extraction consists in considering the
face as a single whole entity and then extracting the
predetermined vital features of the face.
In this work, the central moments, eigenvectors and
standard deviations of the eyes, nose and mouth are
computed as the training features for local feature
extraction, while the standard deviation and the
eigenvector of the covariance of the whole face are
assessed for the global features.
Besides extracting the quintessential information of
the face, these features also account for
dimensionality reduction.
A. Central Moment
Central moments find application in shape
recognition, where features are generated that are
independent of parameters which cannot be
controlled in an image. Such features are called
invariant features. There are several types of
invariance. For example, if an object may occur in
an arbitrary location in an image, then one needs the
moments to be invariant to location. For binary
connected components, this can be achieved simply
by using the central moments $\mu_{pq}$ [14].
Similarly, if an object is not at a fixed distance from
a fixed focal-length camera, the sizes of objects will
not be fixed; in this case size invariance is needed,
which can be achieved by normalizing the moments.
In image processing, computer vision and related
fields, an image moment is a particular weighted
average (moment) of the image pixels' intensities, or
a function of such moments, usually chosen to have
some attractive property or interpretation. Image
moments are useful for describing objects after
segmentation. Simple properties of the image which
can be found via image moments include its area (or
total intensity), its centroid, and information about
its orientation [15].
Central moments are mathematically defined as [16]

$$\mu_{pq} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - \bar{x})^p (y - \bar{y})^q f(x, y)\, dx\, dy \qquad (1)$$

where $\bar{x} = M_{10}/M_{00}$ and $\bar{y} = M_{01}/M_{00}$ are the components of the centroid.
If $f(x, y)$ is a digital image, then the previous equation becomes

$$\mu_{pq} = \sum_{x} \sum_{y} (x - \bar{x})^p (y - \bar{y})^q f(x, y) \qquad (2)$$
The central moments of order up to 3 are:

$\mu_{00} = M_{00}$
$\mu_{01} = 0$
$\mu_{10} = 0$
$\mu_{11} = M_{11} - \bar{x} M_{01} = M_{11} - \bar{y} M_{10}$
$\mu_{20} = M_{20} - \bar{x} M_{10}$
$\mu_{02} = M_{02} - \bar{y} M_{01}$
$\mu_{21} = M_{21} - 2\bar{x} M_{11} - \bar{y} M_{20} + 2\bar{x}^2 M_{01}$
$\mu_{12} = M_{12} - 2\bar{y} M_{11} - \bar{x} M_{02} + 2\bar{y}^2 M_{10}$
$\mu_{30} = M_{30} - 3\bar{x} M_{20} + 2\bar{x}^2 M_{10}$
$\mu_{03} = M_{03} - 3\bar{y} M_{02} + 2\bar{y}^2 M_{01}$
It can be shown that

$$\mu_{pq} = \sum_{m=0}^{p} \sum_{n=0}^{q} \binom{p}{m} \binom{q}{n} (-\bar{x})^{p-m} (-\bar{y})^{q-n} M_{mn} \qquad (3)$$
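
The definitions in equations (1)-(3) can be evaluated directly on a grayscale image array, as in the following sketch; it illustrates the formulas rather than reproducing the paper's feature code.

```python
# Raw moments M_pq and central moments mu_pq of a 2-D grayscale array,
# following equations (1)-(2); x indexes columns and y indexes rows.
import numpy as np

def raw_moment(I, p, q):
    y, x = np.mgrid[:I.shape[0], :I.shape[1]]
    return (x ** p * y ** q * I).sum()

def central_moment(I, p, q):
    m00 = raw_moment(I, 0, 0)
    xbar = raw_moment(I, 1, 0) / m00           # centroid component x-bar
    ybar = raw_moment(I, 0, 1) / m00           # centroid component y-bar
    y, x = np.mgrid[:I.shape[0], :I.shape[1]]
    return ((x - xbar) ** p * (y - ybar) ** q * I).sum()
```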
Central moments are translation invariant.
Information about the image orientation can be
derived by first using the second-order central
moments to construct a covariance matrix.
$$\mu'_{20} = \mu_{20}/\mu_{00} = M_{20}/M_{00} - \bar{x}^2$$
$$\mu'_{02} = \mu_{02}/\mu_{00} = M_{02}/M_{00} - \bar{y}^2$$
$$\mu'_{11} = \mu_{11}/\mu_{00} = M_{11}/M_{00} - \bar{x}\bar{y}$$
The covariance matrix of the image $I(x, y)$ is

$$\operatorname{cov}[I(x, y)] = \begin{bmatrix} \mu'_{20} & \mu'_{11} \\ \mu'_{11} & \mu'_{02} \end{bmatrix} \qquad (4)$$
The eigenvectors of this matrix correspond to the
major and minor axes of the image intensity, so the
orientation can be extracted from the angle of the
eigenvector associated with the largest eigenvalue.
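
A hedged sketch of this computation: build the covariance matrix of equation (4) from the normalized second-order central moments and take the angle of its leading eigenvector. The image I is an arbitrary grayscale array.

```python
# Image orientation from second-order central moments via the covariance
# matrix of equation (4); returns the angle of the major axis in radians.
import numpy as np

def orientation(I):
    y, x = np.mgrid[:I.shape[0], :I.shape[1]]
    m00 = I.sum()
    xbar, ybar = (x * I).sum() / m00, (y * I).sum() / m00
    mu20 = ((x - xbar) ** 2 * I).sum() / m00
    mu02 = ((y - ybar) ** 2 * I).sum() / m00
    mu11 = ((x - xbar) * (y - ybar) * I).sum() / m00
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue
    return np.arctan2(v[1], v[0])
```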
For higher order moments it is common to normalize
the moments by dividing by $M_{00}$ (the zeroth-order
moment). This allows one to compute moments which
depend only on the shape and not on the magnitude of
$f(x, y)$. The result of normalizing the moments gives
measures which contain information about the shape or
distribution (not probability distribution) of $f(x, y)$.
This is what makes moments useful for the analysis of
shapes in image processing, where $f(x, y)$ is the
image function. These computed moments are usually
used as features for shape recognition [17].
B. Eigenvector with the Highest Eigenvalue
An eigenvector of a matrix is a vector such that, when
multiplied by the matrix, the result is a scalar
multiple of that vector. This scalar is the
corresponding eigenvalue of the eigenvector.
The relationship can be described by the equation
M u = λu, where u is an eigenvector of the matrix M
and λ is the corresponding eigenvalue. Eigenvectors
possess the following properties:
- They can be determined only for square
matrices.
- There are n eigenvectors (and
corresponding eigenvalues) in an n × n
matrix.
- For a symmetric matrix, such as a covariance
matrix, the eigenvectors are mutually
perpendicular, i.e. at right angles to each other.
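
A quick numerical check of the relation M u = λu for a small symmetric matrix (illustrative values only):

```python
# Verify M u = lambda u numerically for a 2 x 2 symmetric matrix.
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
u = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue
assert np.allclose(M @ u, eigvals[-1] * u)
```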
The traditional motivation for selecting the
eigenvectors with the largest eigenvalues is that the
eigenvalues represent the amount of variance along a
particular eigenvector. By selecting the eigenvectors
with the largest eigenvalues, one selects the
dimensions along which the gallery images vary the
most. Since the eigenvectors are ordered from high to
low by the amount of variance found between images
along each eigenvector, the last eigenvectors capture
the smallest amounts of variance. Often the
assumption is made that noise is associated with the
lower-valued eigenvalues, where smaller amounts of
variation are found among the images [12].
C. Artificial Neural Network
Artificial Neural Networks (ANNs) are non-linear
mapping structures based on the function of the
human brain. They are computational structures
inspired by observed processes in natural networks of
biological neurons in the brain. They consist of
simple computational units called neurons, which are
highly interconnected.
ANNs identify and correlate patterns between input
data sets and corresponding target values even when
the underlying data relationship is unknown. Once
trained, they can predict the outcome of new
independent input data. A very important feature of
ANNs is their adaptive nature, where "learning by
example" replaces "programming" in solving
problems. This feature makes such computational
models very appealing in application domains where
one has little or incomplete understanding of the
problem to be solved but where training data is
readily available.
The most widely used learning algorithm in an ANN
is the Backpropagation algorithm. There are various
types of ANNs such as the Multilayered Perceptron,
Radial Basis Function and Kohonen networks. These
networks are "neural" in the sense that they may have
been inspired by neuroscience, but not necessarily
because they are faithful models of biological neural
or cognitive phenomena [13, 14].
In this work we use a Multilayer Feed-Forward
Network consisting of multiple layers. The
architecture of this class of network, besides having
the input and output layers, also has one or
more intermediary layers called hidden layers. The
computational units of the hidden layer are known as
hidden neurons. The hidden layer performs intermediate
computation before directing the input to the output
layer. The input layer neurons are linked to the
hidden layer neurons, and the weights on these links
are referred to as input-hidden layer weights; likewise,
the weights on the links between the hidden layer and
output layer neurons are referred to as hidden-output
layer weights.

Figure 2. Multilayered feed-forward network configuration.
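
A minimal sketch of the forward pass of such a network with a single hidden layer is shown below, assuming the log-sigmoid activation described in Section V; the layer sizes are illustrative, not the 1821/2732/8 configuration reported later in Table IV.

```python
# Minimal forward pass of a multilayer feed-forward network with one
# hidden layer (illustrative sizes, not the paper's 1821/2732/8 setup).
import numpy as np

def logsig(z):
    return 1.0 / (1.0 + np.exp(-z))           # log-sigmoid activation

def forward(x, W_ih, b_h, W_ho, b_o):
    h = logsig(W_ih @ x + b_h)                # input-hidden layer weights
    return logsig(W_ho @ h + b_o)             # hidden-output layer weights

rng = np.random.default_rng(0)
x = rng.normal(size=16)                       # a toy 16-element feature vector
W_ih, b_h = rng.normal(size=(24, 16)), np.zeros(24)
W_ho, b_o = rng.normal(size=(8, 24)), np.zeros(8)
y = forward(x, W_ih, b_h, W_ho, b_o)          # 8 outputs, one per subject
```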
IV. EXPERIMENTAL MODEL

The experimental model can be divided into two
phases, namely the training phase and the testing
phase. The training phase denotes the training of the
faces of the database, while the testing phase
involves the recognition of a test image. Figure 3 gives
the block diagram of the training phase and Figure 4
depicts the block diagram of the testing phase.





Figure 3. Block diagram of the training phase (Database Image → Pre-processing → Feature Extraction → Averaging the Features → ANN Training → Result).

Figure 4. Block diagram of the testing phase (Test Image → Pre-processing → Feature Extraction → ANN → Test Result).

Algorithm for training phase

Input: Database face images
Output: Column vector of the extracted features
Begin:
Step 1: Carry out pre-processing for all the images
of the database.
Step 2: Segment the eyes, nose and mouth from
each of the pre-processed face images of the
database.
Step 3: Compute the eigenvector of the covariance,
the central moments and the standard deviation of
the segmented blocks of Step 2. Store the values in
a column vector.
Step 4: Store the results computed in Step 3 for the
different face images in different column vectors.
Step 5: Train the designed network with the column
vectors of Step 4 as input data, with unique binary
vectors as the corresponding targets.
End

Algorithm for testing phase

Input: Face test image
Output: Matched face image from the database
Begin:
Step 1: Carry out pre-processing of the test face
image.
Step 2: Segment the eyes, nose and mouth from the
pre-processed test image.
Step 3: Compute the eigenvector of the covariance,
the central moments and the standard deviation of
the segmented blocks of Step 2. Store the values in
a column vector.
Step 4: Simulate the trained network to match the
test features against the database face images.
End
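
As a rough illustration of Step 3 in the two algorithms above, the sketch below assembles a column feature vector from one segmented block. The number of retained eigenvectors (4), the column-wise second-order central moments and the concatenation order are assumptions chosen to be consistent with the feature lengths reported later in Table III, not the paper's published code.

```python
# Hedged sketch of Step 3: build a column feature vector from a 2-D
# grayscale segment (eye, nose or mouth block). For a 24 x 38 block this
# yields 38 + 24*4 + 24 + 38 = 196 values, consistent with Table III.
import numpy as np

def block_features(block):
    cov = np.cov(block)                        # row-wise covariance matrix
    _, eigvecs = np.linalg.eigh(cov)
    top = eigvecs[:, -4:].ravel()              # 4 leading eigenvectors (assumed)
    mu = block - block.mean()                  # centred intensities
    central = (mu ** 2).sum(axis=0)            # column-wise 2nd-order moments (assumed)
    return np.concatenate([central, top,
                           block.std(axis=1),  # row standard deviations
                           block.std(axis=0)]) # column standard deviations
```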

In this work, we have used our own image database
consisting of 120 images of 8 individuals. There are
15 images of each individual, captured at different
instances of the day and representing all possible
variations of light intensity, image tilt, image size,
noise levels, varying illumination, pose and distance
from the camera. Figure 5 shows a sample of the
acquired database with the above specifications.

Figure 5. Sample images of the acquired database.

The pre-processing stage involved removal of noise
(Figure 6), histogram equalization (Figure 7), size
normalization and illumination normalization.

Figure 6. Sample of the noise removal process.

Figure 7. Histogram plots of a bright image before and after Histogram Equalization.
Facial feature extraction is a special form of
dimensionality reduction. When the input data is too
large and is suspected to be redundant, it is
transformed into a reduced representation set of
features (also called a feature vector). Transforming
the input data into this set of features is called
feature extraction. If the extracted features are
carefully chosen, it is expected that the feature set
will extract the relevant information from the input
data, so that the desired task can be performed using
this reduced representation instead of the full-size
input.
The global features are computed from the whole
pre-processed face as the standard deviation and the
eigenvector of its covariance. From the pre-processed
images, the eyes, nose and mouth are detected for
procuring the local feature vector, as shown in
Figure 9. These segments are used to compute the
eigenvector of the covariance, the central moments
and the standard deviation.

Figure 9. Samples of the vital features of the face used for
training the ANN.

TABLE II. SIZE OF SELECTED FACIAL SECTIONS

Facial Region | Size in Pixels (M x N) | Size as a Fraction of the Full Face (M x N)
Right Eye     | 24 x 38                | 0.14 x 0.25
Left Eye      | 24 x 38                | 0.14 x 0.25
Nose          | 36 x 40                | 0.21 x 0.26
Mouth         | 27 x 57                | 0.16 x 0.38

TABLE III. DIMENSIONS OF LOCALLY EXTRACTED FEATURES

Facial Region | Central Moment | Eigen Vector | Std. Dev. (Row) | Std. Dev. (Col.) | Total Feature Length
Right Eye     | 38             | 24 x 4       | 24              | 38               | 196
Left Eye      | 38             | 24 x 4       | 24              | 38               | 196
Nose          | 40             | 36 x 4       | 36              | 40               | 260
Mouth         | 57             | 27 x 4       | 27              | 57               | 249
Total Length  | 173            | 444          | 111             | 173              | 901
V. RESULTS AND PERFORMANCE ANALYSIS

In the present work, the high-performance
Backpropagation training algorithm with a variable
learning rate is employed for training the network.
This algorithm is based on a heuristic technique. The
network training algorithm used here (GDMBPAL)
updates the weight and bias values according to
gradient descent with momentum and an adaptive
learning rate. We have used a learning rate of 0.7 and
a momentum of 0.6. The number of neurons in the
hidden layer is fixed at 1.5 times the number
of neurons in the input layer.
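
An illustrative weight-update rule for gradient descent with momentum and an adaptive learning rate, in the spirit of the GDMBPAL algorithm described above; this is a generic sketch, not a reproduction of the exact training routine used in this work.

```python
# Illustrative update for gradient descent with momentum and an adaptive
# learning rate, in the spirit of the GDMBPAL rule described above (this
# is a generic sketch, not the exact routine used in the paper).

def gdm_step(W, grad, velocity, lr, momentum=0.6):
    velocity = momentum * velocity - lr * grad   # momentum smooths updates
    return W + velocity, velocity

def adapt_lr(lr, mse, prev_mse, inc=1.05, dec=0.7):
    # grow the rate while the error falls, shrink it when the error rises
    return lr * inc if mse < prev_mse else lr * dec
```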
The specifications of the Neural Network used for
the training phase are tabulated in Table IV. The
input layer consists of the global as well as the local
features of the database image.
We have carried out the training using both the
log-sigmoid and the tan-sigmoid activation
functions in the network. Further, the effect of the
different activation functions, with different numbers
of iterations, on the convergence of the Mean
Squared Error (MSE) was also tested.
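
For reference, the standard definitions of the two activation functions are

$$\mathrm{logsig}(z) = \frac{1}{1 + e^{-z}}, \qquad \mathrm{tansig}(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}} = \tanh(z)$$

The log-sigmoid maps to (0, 1), matching the binary target vectors used here, while the tan-sigmoid maps to (-1, 1).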
The convergence of the MSE for different numbers of
iterations and for different activation functions of the
hidden and output layers is tabulated in Table V.
From Table V, we infer that the convergence of the
MSE is best for the log-sigmoid activation function
with 1500 iterations. Further, the convergence of the
MSE degrades if the number of iterations is made
very high.
TABLE IV. SPECIFICATIONS OF THE NEURAL NETWORK

Type: Feed Forward Backpropagation Network

Parameters                            | Specifications
Number of Layers                      | 3 (Input Layer, Hidden Layer, Output Layer)
Number of Input Units                 | 1 Feature Matrix
Number of Output Units                | 1 Binary Encoded Vector
Number of Neurons in the Input Layer  | 1821
Number of Neurons in the Hidden Layer | 1821 x 1.5 ≈ 2732
Number of Neurons in the Output Layer | 8
Number of Iterations                  | 1000, 1500, 2000
Number of Validation Checks           | 6
Learning Rate                         | 0.7
Momentum                              | 0.6
Activation Functions                  | Log-Sigmoid and Tan-Sigmoid

TABLE V. CONVERGENCE OF MSE FOR DIFFERENT NUMBERS OF
ITERATIONS AND DIFFERENT ACTIVATION FUNCTIONS

Iterations | Hidden Layer Activation | Output Layer Activation | MSE
1000       | Tan-sigmoid             | Tan-sigmoid             | 1 x 10^-4
1500       | Tan-sigmoid             | Tan-sigmoid             | 1.2 x 10^-4
2000       | Tan-sigmoid             | Tan-sigmoid             | 1.4 x 10^-4
1000       | Log-sigmoid             | Log-sigmoid             | 1 x 10^-7
1500       | Log-sigmoid             | Log-sigmoid             | 1 x 10^-12
2000       | Log-sigmoid             | Log-sigmoid             | 1 x 10^-6

We also studied the effect of Gaussian noise at
different SNR levels on the efficiency of our face
recognition system. We find that if the noise is added
before the pre-processing phase, the system's Correct
Recognition Rate (CRR) is not affected much.
However, if the image is affected by Gaussian noise
after the pre-processing phase, the system's CRR is
adversely affected.
When the face is affected, after the pre-processing
phase, by Gaussian noise of SNR 25 dB and above,
the system has a CRR of 100%, while the CRR
reduces for SNR less than 25 dB. The other
observations are tabulated in Table VI.
TABLE VI. EFFECT OF GAUSSIAN NOISE ON THE CORRECT
RECOGNITION RATE (CRR) OF THE PROPOSED SYSTEM

AWGN SNR (dB) | CRR (%), noise added before pre-processing | CRR (%), noise added after pre-processing
25            | 100                                        | 100
22            | 100                                        | 81.25
20            | 100                                        | 62.52
18            | 98.74                                      | 43.75
16            | 96.47                                      | 37.50
15            | 89.56                                      | 12.51
14            | 86.25                                      | 6.25
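
The noise injection used in such a study can be sketched as follows, assuming the usual power-ratio definition of SNR in dB; clipping back to the valid intensity range is omitted for brevity.

```python
# Sketch: add zero-mean white Gaussian noise to an image at a target SNR
# (dB), using the signal-power / noise-power definition of SNR.
import numpy as np

def add_awgn(img, snr_db):
    x = img.astype(np.float64)
    signal_power = np.mean(x ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), x.shape)
    return x + noise
```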
VI. CONCLUSION

In this paper, face recognition based on an ANN is
proposed. An ANN with the Backpropagation
algorithm is found to be an efficient method for
recognizing faces. It is observed that the proposed
feature vectors enable proper recognition of human
faces, and that the log-sigmoid activation function in
the hidden layer of the Neural Network gives a better
convergence of the MSE. Further, the system's
efficiency is reduced if the image is affected by
Gaussian noise after the pre-processing phase.

ACKNOWLEDGMENT
The authors would like to thank the staff and
management of DBCET for their support and
encouragement in the completion of this work. Our
sincere thanks to Ms. Jhimli Kumari Das, HoD of
Electronics and Communication Engineering, for her
efforts in initiating us into this work. We place on
record our deepest gratitude to Mr. Kaustubh
Bhattacharyya, our guide, for his scholarly guidance
and masterly expertise all through this work.
REFERENCES
[1] M. Faundez-Zanuy, "Biometric security technology," Encyclopedia of Artificial Intelligence, 2000, pp. 262-264.
[2] K. Ramesha and K. B. Raja, "Dual transform based feature extraction for face recognition," International Journal of Computer Science Issues, vol. VIII, no. 5, 2011, pp. 115-120.
[3] A. Khashman, "Intelligent face recognition: local versus global pattern averaging," Lecture Notes in Artificial Intelligence, vol. 4304, Springer-Verlag, 2006, pp. 956-961.
[4] A. Abbas, M. I. Khalil, S. Abdel-Hay and H. M. Fahmy, "Expression and illumination invariant preprocessing technique for face recognition," Proceedings of the International Conference on Computer Engineering and Systems, 2008, pp. 59-64.
[5] K. Ramesha, K. B. Raja, K. R. Venugopal and L. M. Patnaik, "Feature extraction based face recognition, gender and age classification," International Journal on Computer Science and Engineering, vol. II, no. 01S, 2010, pp. 14-23.
[6] S. Ranawade, "Face recognition and verification using artificial neural network," International Journal of Computer Applications, vol. I, no. 14, 2010, pp. 21-25.
[7] A. Montillo and H. Ling, "Age regression from faces using random forests," Proceedings of the IEEE International Conference on Image Processing, 2009, pp. 2465-2468.
[8] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," Journal of Computer Vision, vol. XIV, 1995, pp. 5-24.
[9] M. Turk and A. P. Pentland, "Face recognition using Eigenfaces," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1991, pp. 586-591.
[10] P. Belhumeur, J. Hespanha and D. Kriegman, "Eigenfaces versus Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. XIX, no. 7, 1997, pp. 711-720.
[11] P. Niyogi, "Locality preserving projections," Proceedings of the Conference on Advances in Neural Information Processing Systems, 2003.
[12] K. Ramesha and K. B. Raja, "Dual transform based feature extraction for face recognition," International Journal of Computer Science Issues, vol. VIII, no. 5, 2011, pp. 115-120.
[13] J. Weng, N. Ahuja and T. S. Huang, "Learning recognition and segmentation of 3D objects from 2D images," Proceedings of the International Conference on Computer Vision, 1993, pp. 121-128.
[14] S. A. Hameed Al_azawi, "Eyes recognition system using central moment features," Engineering & Technology Journal, vol. 29, no. 7, 2011, pp. 1400-1407.
[15] K. Bonsor, "How facial recognition systems work," http://computer.howstuffworks.com/facial-recognition.htm.
[16] M. K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, 1962, pp. 179-187.
[17] B. Bailey, "Moments in image processing," http://www.csie.ntnu.edu.tw/~bbailey/Moments%20in%20IP.htm, Nov. 2002.
[18] P. Belhumeur, J. Hespanha and D. Kriegman, "Eigenfaces versus Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. XIX, no. 7, 1997, pp. 711-720.
[19] J. Anderson, An Introduction to Neural Networks, Prentice Hall, 2003.
[20] S. Haykin, Fundamentals of Neural Networks, Pearson Education, 2003.





















































