
National Conference on Emerging Trends and Applications in Computer Science - 2012

Kaveri Chetia, Kaustubh Bhattacharyya
Don Bosco College of Engineering & Technology,
Assam Don Bosco University, Guwahati

Plan of Presentation
Introduction
Basic Theory
Related Works
Problem Formulation
Experimental Model
Results and Performance Analysis
Conclusion
References
Acknowledgement

Introduction

Biometrics
Identifies or verifies a person based on one's physical characteristics by matching test images against the enrolled ones.

Human Computer Interface (HCI)

The demand for reliable personal identification in computerized access control has resulted in an increased interest in biometrics to replace passwords and identification (ID) cards.

Artificial Modeling of Human Natural Ability

Surveillance Systems

Three Key Steps/Sub Tasks Involved


1. Face Detection - detect the face in a given image
2. Feature Extraction - extract the essential (necessary) features
3. Identification or Verification

Three Key Steps/Sub Tasks Involved

Face Detection, Verification, Identification

[Block diagram: the face detection system locates the face, the FR system then processes it for face identification ("This is Mr. J", "This is Ms. Y") or for face verification (Yes/No).]

Basic Blocks of a Face Recognition System

Pre-processing Unit
  - Size Normalization
  - Illumination Normalization
  - Noise Removal

Feature Extraction
  - Global Features
  - Local Features

Training

Testing
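A minimal pre-processing sketch in Python/NumPy illustrating the three pre-processing blocks listed above (size normalization, illumination normalization, noise removal). This is not the authors' code; the target face size, file name, and the particular filters (histogram equalization, median blur) are illustrative assumptions.

```python
import numpy as np
import cv2  # OpenCV, assumed available for resizing and histogram equalization

FACE_SIZE = (76, 86)  # (width, height); illustrative common face resolution

def preprocess(gray_face):
    """Size normalization, illumination normalization and noise removal."""
    # Size normalization: rescale every face to a common resolution
    face = cv2.resize(gray_face, FACE_SIZE, interpolation=cv2.INTER_AREA)
    # Illumination normalization: histogram equalization spreads intensities
    face = cv2.equalizeHist(face)
    # Noise removal: a small median filter suppresses impulsive noise
    face = cv2.medianBlur(face, 3)
    return face.astype(np.float32) / 255.0  # scale to [0, 1] for the network

if __name__ == "__main__":
    # Hypothetical file name, used only to show the call sequence
    img = cv2.imread("subject01_expression03.jpg", cv2.IMREAD_GRAYSCALE)
    if img is not None:
        print(preprocess(img).shape)
```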

Complications Involved in Face Recognition Systems


Face recognition is influenced by many complications, such as:
  - differences in facial expression,
  - the light intensity and direction of imaging,
  - variation in posture, size and angle.

Even for the same person, images taken in different surroundings may differ considerably.

Problem Formulation
An Artificial Neural Network is to be designed that recognizes and classifies a human face against the faces in the training database; the designed system thus both classifies and verifies the face. Further, the face recognition system needs to handle different facial expressions: it must recognize that two images of the same person with different facial expressions are in fact the same person. Other issues the system needs to address are makeup, posing positions, illumination conditions and the effect of noise.

Popular Approaches of Face Recognition Systems (FRS)


(1) Holistic / global matching methods: these methods use the whole face region as the raw input to a recognition system, e.g. PCA, LDA, Fisherfaces.
  - Accurate only for frontal views of faces
  - Less accurate when the face is tilted by more than 30 degrees

(2) Feature-based (local) matching methods: typically, local features such as the eyes, nose and mouth are first segmented and features are extracted from them, e.g. ANN, HMM, SVD.
  - Require a large database for training
  - High computational complexity

(3) Hybrid methods: these systems use both local features and the whole face region to recognize a face, as a machine recognition system arguably should.
  - The trade-off between computational complexity and accuracy is more easily managed.
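As a concrete illustration of the holistic approach, a minimal PCA (eigenface-style) sketch in NumPy, assuming a matrix of vectorized, size-normalized training faces. The function names and the choice of 4 retained components are illustrative, not the paper's exact configuration.

```python
import numpy as np

def pca_features(train_faces, n_components=4):
    """train_faces: (num_images, num_pixels) matrix of vectorized faces."""
    mean_face = train_faces.mean(axis=0)
    centered = train_faces - mean_face              # subtract the mean face
    # Eigen-decomposition via the small (num_images x num_images) matrix trick
    cov_small = centered @ centered.T
    eigvals, eigvecs = np.linalg.eigh(cov_small)
    order = np.argsort(eigvals)[::-1][:n_components]
    eigenfaces = centered.T @ eigvecs[:, order]     # back-project to pixel space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)  # unit-length eigenfaces
    return mean_face, eigenfaces

def project(face, mean_face, eigenfaces):
    """Global feature vector: weights of the face on the eigenfaces."""
    return (face - mean_face) @ eigenfaces
```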

Artificial Neural Network


An ANN is a massively parallel dynamic system; among ANN architectures, the MLP feed-forward network is well suited to face recognition. It resembles the brain in two respects:
  - Knowledge is acquired by the network through a learning process.
  - Interneuron connection strengths, known as synaptic weights, are used to store the knowledge.

Basic Blocks of the FRS


[Block diagram of the system:]
  - Acquisition of Database
  - Pre-processing: Size Normalization, Intensity and Illumination Normalization
  - Local feature path: Segmentation of the landmark segments of the face, Size Normalization
  - Global feature path: Face Normalization
  - Feature Extraction
  - Training the Neural Network
  - Simulation of the Test Image

Experimental Model
1. Database Images: varying illumination, facial expression, pose, and distance from the camera
   No. of persons: 8
   No. of images per person: 25
   Total no. of training images: 160 (20 per person)
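A small sketch of how such a database could be organized and split into the 20 training and 5 test images per person. The folder layout and file naming are assumptions for illustration, not the actual acquisition code.

```python
import os
import random

DATA_DIR = "face_db"          # hypothetical layout: face_db/person_01/img_001.jpg ...
PERSONS = 8
IMAGES_PER_PERSON = 25
TRAIN_PER_PERSON = 20

def split_database(seed=0):
    """Return (person_id, path) pairs: 160 for training, 40 for testing."""
    rng = random.Random(seed)
    train, test = [], []
    for pid in range(1, PERSONS + 1):
        folder = os.path.join(DATA_DIR, f"person_{pid:02d}")
        images = sorted(os.listdir(folder))[:IMAGES_PER_PERSON]
        rng.shuffle(images)  # mix expressions, poses and illumination conditions
        train += [(pid, os.path.join(folder, f)) for f in images[:TRAIN_PER_PERSON]]
        test  += [(pid, os.path.join(folder, f)) for f in images[TRAIN_PER_PERSON:]]
    return train, test
```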

Acquired Database Images

Variation in Expressions

Experimental Model
2. Pre-processing

Normalization

Experimental Model
3. Normalization and Segmentation
An individual face, the normalized (mean) face of the database faces, and the face resulting from the subtraction of the individual face from the mean face.

Segmentation of Landmark parts of Face for Geometrical Approach

Experimental Model
4. Dimensions of the Segmented Regions

   Facial Region | Size in % of the Full Face (M x N) | Size in Pixels (M x N)
   Right Eye     | 0.14 x 0.25                        | 12 x 19
   Left Eye      | 0.14 x 0.25                        | 12 x 19
   Nose          | 0.21 x 0.26                        | 18 x 20
   Mouth         | 0.16 x 0.38                        | 14 x 28

Segmented Face
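A sketch of cropping the four landmark segments using the fractional sizes from the table above. Only the relative sizes come from the table; the segment centre positions are illustrative assumptions, since the slides do not state them.

```python
import numpy as np

# (row_fraction, col_fraction) of the full face covered by each segment (from the table)
SEGMENT_SIZE = {
    "right_eye": (0.14, 0.25),
    "left_eye":  (0.14, 0.25),
    "nose":      (0.21, 0.26),
    "mouth":     (0.16, 0.38),
}

# Assumed (illustrative) segment centres as fractions of the face height/width
SEGMENT_CENTER = {
    "right_eye": (0.35, 0.30),
    "left_eye":  (0.35, 0.70),
    "nose":      (0.55, 0.50),
    "mouth":     (0.78, 0.50),
}

def crop_segment(face, name):
    """Crop one landmark segment from a size-normalized grayscale face array."""
    rows, cols = face.shape
    h = int(round(SEGMENT_SIZE[name][0] * rows))
    w = int(round(SEGMENT_SIZE[name][1] * cols))
    cr = int(round(SEGMENT_CENTER[name][0] * rows))
    cc = int(round(SEGMENT_CENTER[name][1] * cols))
    r0 = max(0, cr - h // 2)
    c0 = max(0, cc - w // 2)
    return face[r0:r0 + h, c0:c0 + w]
```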

Experimental Model
5. Feature Extraction

   Length of the Extracted Features
   Segment      | Central Moment | Eigen Vector | Std. Dev. (Row) | Std. Dev. (Col.) | Total Feature Length
   Right Eye    | 19             | 12 x 4       | 12              | 19               | 98
   Left Eye     | 19             | 12 x 4       | 12              | 19               | 98
   Nose         | 20             | 18 x 4       | 18              | 20               | 130
   Mouth        | 28             | 14 x 4       | 14              | 28               | 126
   Global       | -              | 75 x 4 = 300 | -               | -                | 300
   Total Length |                |              |                 |                  | 752
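A hedged sketch of how one segment's features could be assembled under one plausible reading of the table: a central moment per column, the 4 leading eigenvectors of the segment, and row/column standard deviations. The 300 global features (75 x 4 in the table) from the PCA-based global representation would be appended analogously, giving 98 + 98 + 130 + 126 + 300 = 752 inputs, as in the table.

```python
import numpy as np

def segment_features(seg):
    """Feature vector for one M x N segment: column central moments,
    4 leading eigenvectors (length M each), and row/column standard deviations.
    Follows one plausible reading of the feature-length table above."""
    seg = seg.astype(np.float64)

    # Second central moment of each column (about the column mean) -> length N
    central = ((seg - seg.mean(axis=0)) ** 2).mean(axis=0)

    # 4 leading eigenvectors of the M x M covariance of the segment rows -> M x 4 values
    cov = np.cov(seg)                       # M x M
    vals, vecs = np.linalg.eigh(cov)
    eig = vecs[:, np.argsort(vals)[::-1][:4]].ravel()

    # Standard deviation of each row (length M) and each column (length N)
    std_row = seg.std(axis=1)
    std_col = seg.std(axis=0)

    return np.concatenate([central, eig, std_row, std_col])
    # e.g. a 12 x 19 eye segment gives 19 + 48 + 12 + 19 = 98 features
```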

Experimental Model
Artificial Neural Network - Specifications
Type: Feed Forward Backpropagation Network

   Parameter                              | Specification
   Number of Layers                       | 3 (Input Layer, Hidden Layer, Output Layer)
   Number of Input Units                  | 1 feature matrix consisting of the feature vectors of the eight persons
   Number of Output Units                 | 1 binary encoded matrix consisting of eight target vectors for the eight persons
   Number of Neurons in the Input Layer   | 752
   Number of Neurons in the Output Layer  | 08 (to recognize 8 persons)
   Number of Neurons in the Hidden Layer  | (752 + 8) x (2/3) ≈ 506
   Number of Iterations                   | 1000, 1500, 2000
   Learning Rate                          | 0.7, 0.8, 0.9
   Momentum                               | 0.5, 0.6, 0.7
   Activation Functions                   | Log-Sigmoid and Tan-Sigmoid
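A minimal sketch of a network with the specification above (752 inputs, roughly 506 hidden neurons, 8 outputs, sigmoid activations, learning rate and momentum in the listed ranges), written in plain NumPy backpropagation. The original experiments were presumably run with a neural network toolbox, so treat this as illustrative, not the authors' implementation; the particular pairing of tan-sigmoid hidden and log-sigmoid output layers is one of the combinations named on the slide.

```python
import numpy as np

# Network dimensions from the specification slide: (752 + 8) * 2/3 ~ 506 hidden neurons
N_IN, N_HID, N_OUT = 752, (752 + 8) * 2 // 3, 8

def logsig(x): return 1.0 / (1.0 + np.exp(-x))
def tansig(x): return np.tanh(x)

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.01, (N_HID, N_IN));  b1 = np.zeros((N_HID, 1))
W2 = rng.normal(0, 0.01, (N_OUT, N_HID)); b2 = np.zeros((N_OUT, 1))
vW1 = np.zeros_like(W1); vW2 = np.zeros_like(W2)   # momentum terms

def train_epoch(X, T, lr=0.7, momentum=0.6):
    """One epoch of batch backpropagation with momentum.
    X: 752 x num_samples feature matrix, T: 8 x num_samples binary targets."""
    global W1, W2, b1, b2, vW1, vW2
    H = tansig(W1 @ X + b1)            # hidden layer, tan-sigmoid
    Y = logsig(W2 @ H + b2)            # output layer, log-sigmoid
    E = Y - T                          # output error
    # Error gradients for the mean-squared-error cost
    dY = E * Y * (1 - Y)
    dH = (W2.T @ dY) * (1 - H ** 2)
    vW2 = momentum * vW2 - lr * (dY @ H.T) / X.shape[1]
    vW1 = momentum * vW1 - lr * (dH @ X.T) / X.shape[1]
    W2 += vW2; b2 -= lr * dY.mean(axis=1, keepdims=True)
    W1 += vW1; b1 -= lr * dH.mean(axis=1, keepdims=True)
    return np.mean(E ** 2)             # MSE reported per epoch
```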

Experimental Model
Binary Encoded Output
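The binary encoded output matrix itself is shown as a figure on the slide; a sketch of one common choice of encoding (one target column per training sample, one-hot codes per person) is given below. The one-hot choice is an assumption; the slide's exact 8-bit codes may differ.

```python
import numpy as np

def encode_targets(person_ids, num_persons=8):
    """Binary encoded target matrix: column j is the code of training sample j."""
    T = np.zeros((num_persons, len(person_ids)))
    for j, pid in enumerate(person_ids):      # person_ids are 1..8
        T[pid - 1, j] = 1.0
    return T

# Example: targets for three training samples of persons 1, 5 and 8
print(encode_targets([1, 5, 8]))
```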

Results and Performance Analysis


Convergence of Mean Squared Error for Different Numbers of Iterations and Different Activation Functions

   Activation Function (Hidden Layer) | Activation Function (Output Layer) | MSE at 1000 | MSE at 1500 | MSE at 2000 iterations
   Tan-Sigmoid                        | Tan-Sigmoid                        | 1x10^-4     | 1.2x10^-4   | 1.4x10^-4
   Log-Sigmoid                        | Log-Sigmoid                        | 1x10^-7     | 1x10^-12    | 1x10^-6

Results and Performance Analysis


MSE Convergence Curve

Results and Performance Analysis


Convergence of Mean Squared Error for Different Numbers of Iterations and Different Learning Rates

   Learning Rate | Momentum | MSE at 1000 | MSE at 1500 | MSE at 2000 iterations
   0.7           | 0.5      | 1.5x10^-7   | 1.3x10^-10  | 1.7x10^-4
   0.7           | 0.6      | 1x10^-7     | 1x10^-12    | 1x10^-6
   0.7           | 0.7      | 1.2x10^-6   | 1.2x10^-10  | 1.1x10^-5
   0.8           | 0.5      | 1.4x10^-4   | 1.5x10^-7   | 1.4x10^-3
   0.8           | 0.6      | 1.3x10^-5   | 1x10^-8     | 1x10^-5
   0.8           | 0.7      | 1.2x10^-4   | 1x10^-7     | 1.6x10^-4
   0.9           | 0.5      | 1.7x10^-3   | 1.8x10^-6   | 1.4x10^-2
   0.9           | 0.6      | 1.4x10^-5   | 1x10^-7     | 1.3x10^-4
   0.9           | 0.7      | 1.3x10^-3   | 1.4x10^-6   | 1.7x10^-3

Results and Performance Analysis


Comparison of Recognition Rates of the Proposed System with Other Systems for Varying Numbers of Training Images

   Performance | No. of training images | Analytical System | Holistic System | Hybrid System
   CRR (%)     | 5                      | 57                | 62              | 70
   CRR (%)     | 10                     | 67                | 76              | 83
   CRR (%)     | 20                     | 73                | 80              | 86
   FAR (%)     | 5                      | 27                | 22              | 18
   FAR (%)     | 10                     | 20                | 15              | 10
   FAR (%)     | 20                     | 15                | 12              | 08
   FRR (%)     | 5                      | 16                | 16              | 12
   FRR (%)     | 10                     | 13                | 09              | 07
   FRR (%)     | 20                     | 12                | 08              | 06

FALSE ACCEPTANCE RATE (FAR) - a measure of the likelihood that the proposed system will incorrectly accept an access attempt by an unauthorized user.
FALSE REJECTION RATE (FRR) - a measure of the likelihood that the biometric security system will incorrectly reject an access attempt by an authorized user.
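A small sketch of how CRR, FAR and FRR can be counted from the system's decisions, under the usual definitions quoted above. The bookkeeping and variable names are illustrative, not taken from the paper.

```python
def recognition_rates(results):
    """results: list of (true_id, predicted_id, accepted) per test attempt;
    true_id is None for an unauthorized (impostor) attempt."""
    genuine  = [r for r in results if r[0] is not None]
    impostor = [r for r in results if r[0] is None]
    # CRR: authorized attempts accepted with the correct identity
    crr = 100.0 * sum(1 for t, p, a in genuine if a and p == t) / max(len(genuine), 1)
    # FRR: authorized attempts that were rejected
    frr = 100.0 * sum(1 for t, p, a in genuine if not a) / max(len(genuine), 1)
    # FAR: unauthorized attempts that were accepted
    far = 100.0 * sum(1 for t, p, a in impostor if a) / max(len(impostor), 1)
    return crr, far, frr
```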

Results and Performance Analysis


Comparison of Correct Recognition Rate of Analytical, Holistic and Proposed Hybrid Systems

Results and Performance Analysis


Performance Parameters of the Proposed Hybrid System

Results and Performance Analysis


Plot of the CRR of the System Affected by Gaussian Noise

Results and Performance Analysis


Comparison of Recognition Rates of the Proposed System with the Database Affected by Various Types of Noise

   Database            | CRR (%) | FAR (%) | FRR (%)
   Ideal Images        | 86      | 08      | 06
   Gaussian Noise      | 78      | 10      | 12
   Salt & Pepper Noise | 73      | 15      | 12
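A sketch of how the noisy test sets for such a comparison could be generated. The noise standard deviation and the salt-and-pepper density used here are illustrative assumptions, not the values used in the experiment.

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.05, seed=0):
    """img: float array in [0, 1]; additive zero-mean Gaussian noise."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_pepper_noise(img, density=0.05, seed=0):
    """Flip a fraction of pixels to 0 (pepper) or 1 (salt)."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < density / 2] = 0.0            # pepper
    noisy[mask > 1 - density / 2] = 1.0        # salt
    return noisy
```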

Future Prospects
The designed system has been tested only on the acquired database images; it could also be tested on universally accepted databases such as ORL and Yale.
The system is trained for only 8 persons; its accuracy could be validated by training it to recognize more persons.
The most essential features could be extracted by different mathematical modeling, thus reducing the number of input features to the Neural Network.
The global features could be extracted by methods other than PCA and the recognition rate estimated.
A more effective system, robust to the presence of noise, could be designed.

References
Marcos Faundez-Zanuy, "Biometric security technology," Encyclopedia of Artificial Intelligence, 2000, pp. 262-264.
K. Ramesha and K. B. Raja, "Dual transform based feature extraction for face recognition," International Journal of Computer Science Issues, 2011, vol. VIII, no. 5, pp. 115-120.
Khashman, "Intelligent face recognition: local versus global pattern averaging," Lecture Notes in Artificial Intelligence, vol. 4304, Springer-Verlag, 2006, pp. 956-961.
Abbas, M. I. Khalil, S. Abdel-Hay and H. M. Fahmy, "Expression and illumination invariant preprocessing technique for face recognition," Proceedings of the International Conference on Computer Engineering and Systems, 2008, pp. 59-64.
K. Ramesha, K. B. Raja, K. R. Venugopal and L. M. Patnaik, "Feature extraction based face recognition, gender and age classification," International Journal on Computer Science and Engineering, 2010, vol. II, no. 01S, pp. 14-23.
S. Ranawade, "Face recognition and verification using artificial neural network," International Journal of Computer Applications, 2010, vol. I, no. 14, pp. 21-25.
Albert Montillo and Haibin Ling, "Age regression from faces using random forests," Proceedings of the IEEE International Conference on Image Processing, 2009, pp. 2465-2468.

References
Bernd Heisele, Purdy Ho and Tomaso Poggio, "Face recognition with support vector machines: global versus component-based approach," Proceedings of the 8th International Conference on Computer Vision, 2001, pp. 688-694.
H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," International Journal of Computer Vision, vol. XIV, 1995, pp. 5-24.
M. Turk and A. P. Pentland, "Face recognition using Eigenfaces," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1991, pp. 586-591.
Peter Belhumeur, J. Hespanha and David Kriegman, "Eigenfaces versus Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 1997, vol. XIX, no. 7, pp. 711-720.
P. Niyogi, "Locality preserving projections," Proceedings of the Conference on Advances in Neural Information Processing Systems, 2003.
J. Weng, N. Ahuja and T. S. Huang, "Learning recognition and segmentation of 3D objects from 2D images," Proceedings of the International Conference on Computer Vision, 1993, pp. 121-128.

References
Sundos A. Hameed Al_azawi, "Eyes recognition system using central moment features," Engineering & Technology Journal, 2011, vol. 29, no. 7, pp. 1400-1407.
K. Bonsor, "How Facial Recognition Systems Work," http://computer.howstuffworks.com/facial-recognition.htm.
M. K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, 1962, pp. 179-187.
Bob Bailey, "Moments in image processing," http://www.csie.ntnu.edu.tw/~bbailey/Moments%20in%20IP.htm, Nov. 2002.
J. Anderson, An Introduction to Neural Networks, Prentice Hall, 2003.
Simon Haykin, Fundamentals of Neural Networks, Pearson Education, 2003.

Acknowledgement
At the very outset, we thank the Almighty for his gracious presence all through the days of this work. We would like to place on record our sincere thanks and gratitude to all who were instrumental in the completion of this paper. Heartfelt gratitude to the Don Bosco (DBCET) family, and especially the Department of ECE, for furnishing us with the necessary opportunities to explore our creative technical skills. Due thanks to Mr. Kaustubh Bhattacharyya for his expertise and scholarly guidance; but for his masterly advice, this work would not have seen the light of presentation. Gratitude to the organizers of NCETACS 2012 and the chairpersons.

Thanks to one and all.
