
International Journal of Advanced Research and Publications

ISSN: 2456-9992

Eigenbased Multispectral Palmprint Recognition


Abubakar Sadiq Muhammad, Fahad Abdu Jibrin, Abubakar Sani Muhammad

Kano State Polytechnic, School of Technology, Department of Computer Engineering, Nigeria, PH-+234 7069259611
alsiddiq4uhd@gmail.com

Ahmadu Bello University Zaria, Faculty of Engineering, Department of Electrical Engineering, Nigeria, PH-+234 8060288816
fajad03@gmail.com

Kano State Polytechnic, School of Technology, Department of Electrical Engineering, Nigeria, PH-+234 9031338779
abubakarsan747@gmail.com

Abstract: This paper presents a simple method for the verification of palm images. Fusion of palmprint instances is performed at image level: to capture the palm characteristics, the palm images captured under different illuminations are concatenated to form a longer feature vector whose dimension is reduced by Principal Component Analysis (PCA). Finally, the reduced set of features is matched with distance metric classifiers (Manhattan, Euclidean and Cosine Distance) to accomplish the recognition task. For evaluation, the PolyU Multispectral Palmprint database is used. The experimental results reveal that three bands (R, B, NIR) contain most of the salient and discriminative features for building an accurate biometric system, with which a recognition rate of 99.99% can be achieved.

Keywords: Palmprint Verification, Multispectral Image Fusion, Eigenpalm, Distance Metric Classifiers.

1. Introduction

Palmprint verification and identification systems are increasingly deployed because of their merits such as low-resolution imaging requirements, low cost, low intrusiveness and stable feature structures. Biometric fusion is a concept employed to combine multiple biometric sources from different or identical sensors to improve system accuracy and robustness [1]. Multispectral imaging involves capturing data at specific frequencies of the electromagnetic spectrum using devices that separate the wavelengths, and it extends beyond the visible spectrum since infrared light is also considered. It therefore makes possible the extraction of additional information that the human eye, with its red, green and blue receptors, fails to capture. Image fusion provides a solution to the problem of combining several images taken of the same object to obtain a new fused image which is spatially enhanced and contains more discriminative features. By capturing palmprint images under different light illuminations and combining the information obtained at the different spectra, it is possible to collect more information and thereby improve the accuracy, reliability and anti-spoofing capability of a palmprint authentication system.

1.1. Problem Statement

Previous work on palmprint recognition systems involves the use of palm images captured under a single illumination (white light). This often suppresses features that are vital for discriminative purposes and also leaves the recognition system vulnerable to compromise through spoof attacks. A solution to this problem is to design the system using images captured under different light illuminations. A number of studies have shown that images captured under different illuminations, or using different sensors, contain more distinctive features, and that combining this information enhances the performance of the system. Biometric image fusion refers to a process that fuses multispectral biometric images captured by identical or different biometric sensors; this fusion produces an image in spatially enhanced form which contains more intrinsic and discriminative information, thereby increasing the reliability and accuracy of biometric systems. The information presented by multiple biometric measures can be fused at four levels, namely image level, feature level, matching score level and decision level. Wang and Ruan [2] proposed a palmprint and palm vein fusion system that uses one colour camera and one NIR camera with a registration procedure of 9 s; this system suffers from high cost. Hao et al. [3] developed a contact-free multispectral palm sensor whose image quality happens to be limited; hence low recognition accuracy was obtained. Guo et al. [4] proposed a feature-band-selection-based multispectral palmprint recognition in which statistical features are extracted to compare each band; score level fusion was performed to determine the best combination from all candidates, and it was concluded that the most discriminative features can be found in a few special bands (600, 700, 880 nm). Cui J. proposed an algorithm for solving the problem of band selection using an extended General Colour Image Discriminant (GCID) model to generate three new colour components for improvement of recognition performance. The approaches proposed in [5] and [6] for improving recognition performance obtained recognition rates of 98.83% and (99.4%, 99.2%), respectively. This work presents the fusion of palm images taken under different wavelengths at image level, followed by performance evaluation using a number of classifiers. The overall framework is as follows: Section 2 presents the preprocessing steps conducted to achieve a normalized region of palmprint images and a brief introduction to eigenpalms. Experimental results and conclusions are given in Sections 3 and 4.

2. Preprocessing

Preprocessing is used to align the different palmprint images and to segment the centre for feature extraction, which is known as the Region of Interest (ROI) [7]. The different images are converted into subimages of the same size (128 x 128) by a series of preprocessing steps.


Extracting the ROI of the palmprint requires defining a coordinate system based on which the different palm images are aligned for matching and verification. Most preprocessing algorithms employ the identification of the key points between the fingers to set up a coordinate system. The following algorithm is used to extract the central part of the palmprint image as the ROI, which is then used for the multispectral fusion of palm images:

Step 1: Convert the multispectral image to a binary image. A smoothing filter can be used to enhance the image.

Step 2: Apply a boundary-tracking algorithm to obtain the boundaries of the gaps between the fingers.

Step 3: Determine the palmprint coordinate system by computing the tangent of the two gaps with any two points on these gaps. The y-axis is taken as the line joining these two points. The origin of the coordinate system is the midpoint of these two points, through which a line passes perpendicular to the y-axis.

Step 4: Finally, extract the ROI, which is the central part of the palmprint, for feature extraction.

Figure 2.1(a): Preprocessing Steps

Figure 2.1(b): ROI under four illuminations (R, G, B, NIR)
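
The four steps above can be sketched in Matlab, the environment used for the simulations in Section 3. The sketch below is a minimal illustration under stated assumptions, not the authors' exact implementation: the input file name, the boundary-tracking seed and the key points p1 and p2 are hypothetical placeholders, and locating the true finger-gap points on the traced boundary is only indicated in a comment.

    % Sketch of ROI extraction (Steps 1-4); parameters are illustrative.
    img  = imread('palm_sample.jpg');        % hypothetical input image
    gray = rgb2gray(img);
    gray = medfilt2(gray, [5 5]);            % Step 1: smoothing filter
    bw   = im2bw(gray, graythresh(gray));    % Step 1: binarisation (Otsu)

    % Step 2: boundary tracking to reach the finger-gap contours
    [row, col] = find(bw, 1, 'first');       % a seed point on the hand boundary
    boundary   = bwtraceboundary(bw, [row, col], 'N');

    % Step 3: the full method locates two gap key points p1, p2 on the traced
    % boundary; the indices used here are hypothetical placeholders.
    p1 = boundary(100, :);
    p2 = boundary(400, :);
    origin = round((p1 + p2) / 2);           % midpoint defines the origin

    % Step 4: crop a fixed 128 x 128 central region around the origin
    half = 64;
    roi  = gray(origin(1)-half+1 : origin(1)+half, ...
                origin(2)-half+1 : origin(2)+half);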
2.1. Multispectral Image Fusion

The multispectral ROI palm images obtained under the different illumination conditions, the Red, Green, Blue and NIR bands, are fused together to form a single feature vector. The fusion extracts information from each source image and obtains an effective representation in the final fused image. The objective of the fusion technique is to preserve the detailed information found in all of the source ROI images. A typical image can be described by a two-dimensional array (N x N). The eigenspace method defines a palmprint image as a vector of length N^2 x 1 called a "palm vector". A subimage of fixed size 128 x 128 obtained after preprocessing thus gives a vector representing a single point in a 16,384-dimensional space. Since palmprints have similar structures, usually three main lines and creases, all "palm vectors" are located in a narrow image space, and Principal Component Analysis (PCA) can be used to represent the principal components of their distribution, that is, the eigenvectors of the covariance matrix of the set of palmprint images. These eigenvectors define the space of the palmprints and are called "eigenpalms"; each palmprint image can be represented as a linear combination of the eigenpalms. The fusion at image level is done by concatenating the extracted ROIs from the individual illuminations I_R, I_G, I_B, I_NIR to form the training set, which is then processed by PCA as explained below.
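
As a minimal sketch of this image-level fusion, assuming the four 128 x 128 ROIs of one palm have already been extracted into hypothetical variables roiR, roiG, roiB and roiNIR:

    % Sketch: image-level fusion by concatenating the four ROI palm vectors.
    vR   = double(roiR(:));                  % each ROI becomes a 16384 x 1 vector
    vG   = double(roiG(:));
    vB   = double(roiB(:));
    vNIR = double(roiNIR(:));
    fused = [vR; vG; vB; vNIR];              % one 65536 x 1 fused palm vector

    % Repeating this for all M training images gives the data matrix whose
    % columns are the fused palm vectors: X = [fused_1, ..., fused_M].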
Suppose the training samples of the palmprint images are x_1, x_2, ..., x_M, where M is the number of images in the training set. The average palmprint image of the training set is defined by

\psi = \frac{1}{M} \sum_{i=1}^{M} x_i \qquad (1)

The difference between each palmprint image and the average image is given by \Phi_i = x_i - \psi, and the covariance matrix of the x_i is

C = \frac{1}{M} \sum_{i=1}^{M} (x_i - \psi)(x_i - \psi)^T = \frac{X X^T}{M} \qquad (2)

where the matrix X = [\Phi_1, \Phi_2, ..., \Phi_M] is formed from the concatenated palm image vectors. The covariance matrix C is of dimension N^2 x N^2. The PCA transform has been found to be the best approximation of an original process in the sense that it minimizes the total mean-square error provided the eigenvalues are arranged in decreasing order, thereby allowing the eigenvectors of C to span an algebraic eigenspace and provide an optimal approximation of the training samples in terms of the mean square error [10]. The determination of the eigenvalues and eigenvectors of the matrix C of typical size (N^2 x N^2) is cumbersome and time-consuming, so an efficient method is needed to calculate them. Every eigenvector u_k and eigenvalue \lambda_k of a matrix A satisfy

A u_k = \lambda_k u_k \qquad (3)

Hence for the matrix C we can write

C u_k = \lambda_k u_k \qquad (4)

with u_k and \lambda_k the eigenvectors and eigenvalues of C. However, in practice the number of training samples is relatively small, so it is easier to calculate the eigenvectors v_k and eigenvalues a_k of the matrix L = X^T X, whose size is bounded by the number of training samples. Thus

X^T X v_k = a_k v_k \qquad (5)

Multiplying both sides of equation (5) by X, we have

X X^T (X v_k) = a_k (X v_k) \qquad (6)

Comparing eqn. (4) with eqn. (6), the eigenvectors of the matrix C are obtained, since

u_k = X v_k \qquad (7)

U = \{ u_k \}, \quad k = 1, \ldots, M \qquad (8)

This method greatly reduces the number of calculations to be carried out, with U denoting the basis vectors that correspond to the original palmprint images. Resizing each of the eigenvectors to the original image size (N x N), it is observed that they resemble palmprints in appearance and can represent the principal characteristics (main lines) of the palmprints; they are therefore referred to as "eigenpalms".
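
The derivation of Eqs. (1)-(8) can be sketched in a few lines of Matlab. The sketch assumes the fused training vectors are stacked as the columns of a hypothetical N^2 x M matrix Xraw; the unit normalisation of the u_k at the end is a standard numerical convenience rather than part of the derivation.

    % Sketch of Eqs. (1)-(8): eigenpalms via the small M x M matrix L = X'X.
    M   = size(Xraw, 2);                     % number of training images
    psi = mean(Xraw, 2);                     % Eq. (1): average palm image
    X   = Xraw - repmat(psi, 1, M);          % difference images Phi_i

    L = (X' * X) / M;                        % M x M surrogate of C = XX'/M
    [V, D] = eig(L);                         % Eq. (5): eigenvectors v_k of L
    [vals, order] = sort(diag(D), 'descend');% eigenvalues in decreasing order
    V = V(:, order);

    U = X * V;                               % Eq. (7): u_k = X v_k
    U = U ./ repmat(sqrt(sum(U.^2, 1)), size(U, 1), 1);   % unit-norm columns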


Each palmprint in the training set can be represented by an eigenvector; as such, the number of eigenpalms is equal to the number of samples in the training set. However, it is not necessary to choose all the eigenvectors as basis vectors, but only those associated with the largest eigenvalues, as they give the required representation of the training set [13]. A number M' of significant eigenvectors u'_k with the largest associated eigenvalues are selected as the components of the eigenpalms, spanning an M'-dimensional subspace of all possible palmprint images. A new palmprint image is then transformed into its corresponding eigenpalm representation by the following operation:

f_i = U'^T (x_i - \psi), \quad i = 1, \ldots, M \qquad (9)

U' = \{ u_k \}, \quad k = 1, \ldots, M' \qquad (10)

where f_i represents the standard feature vector of each person and M' is the feature length.

Figure 2.2: Eigenpalm for four illuminations (R, G, B, NIR)
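
Continuing the sketch above, Eqs. (9)-(10) amount to truncating the basis and projecting; the feature length M' = 50 below is an arbitrary illustrative choice, not a value reported by the paper.

    % Sketch of Eqs. (9)-(10): keep the M' leading eigenpalms and project.
    Mprime = 50;                             % illustrative feature length M'
    Uprime = U(:, 1:Mprime);                 % Eq. (10): truncated basis U'
    F = Uprime' * X;                         % Eq. (9): f_i = U'^T (x_i - psi)

    % A query palm vector xq is projected in the same way:
    % fq = Uprime' * (xq - psi);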

2.2. Proposed Approach

The proposed approach is implemented as follows:

Step 1: Investigate the recognition performance of the single palmprint spectra.
Step 2: Fuse the spectra with the best recognition performance.
Step 3: Compare the single spectra with the fused spectra.

Figure 2.3: Flowchart of the proposed approach

The palmprint database used is the PolyU Multispectral Palmprint database. It is made up of palmprint images from 250 persons, each providing images of two palms for capture. A number of studies have shown that the palmprints of a person are not the same even for identical twins; thus the palm images can be considered as belonging to 500 different persons. Experiments were carried out in two sessions in which the recognition rates were obtained and a comparison was made between the performances of the single spectra and the combined (triple) spectra. The experiments use 200 persons, with training and test subjects chosen randomly, and the recognition rate is evaluated using a set of three classifiers, namely Euclidean, Manhattan and Cosine Distance; a sketch of the three matching rules is given below.
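
As a minimal sketch of the three matching rules (not the authors' code), assume F holds the projected training features one per column, labels holds their class labels, and fq is a projected query vector from Eq. (9):

    % Sketch: Manhattan (L1), Euclidean (L2) and Cosine distance matching.
    K = size(F, 2);                          % number of enrolled templates
    dManhattan = sum(abs(F - repmat(fq, 1, K)), 1);           % L1
    dEuclidean = sqrt(sum((F - repmat(fq, 1, K)).^2, 1));     % L2
    dCosine    = 1 - (fq' * F) ./ (norm(fq) * sqrt(sum(F.^2, 1)));

    [~, idx]  = min(dManhattan);             % nearest neighbour under L1
    predicted = labels(idx);                 % claimed identity of the query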
3. Simulations and Result Discussion

The simulation for this work was implemented using Matlab v7.14 (R2012a). The experiments used images of 200 persons, which is equivalent to 200 classes, since each palm differs from every other. Each person has a total of twelve (12) images, from which 2, 3, 4 and up to 10 images are randomly chosen for training, with the others used for authentication. The process was repeated ten (10) times and the average recognition rate obtained; a sketch of this protocol follows.
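
In the sketch below, recognitionRate is a hypothetical helper standing in for the eigenpalm pipeline of Section 2 (build the basis on the training indices, then project and classify the remaining images), and the training-set size of 4 is just one of the sizes swept in the experiments.

    % Sketch of the protocol: ten random splits of the 12 images per person.
    nTrain = 4;  nRuns = 10;
    rates  = zeros(nRuns, 1);
    for r = 1:nRuns
        perm = randperm(12);                 % shuffle the 12 samples
        rates(r) = recognitionRate(perm(1:nTrain), perm(nTrain+1:end));
    end                                      % recognitionRate: hypothetical
    avgRate = mean(rates);                   % averaged recognition rate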


The recognition rates obtained with the different classifiers for the single spectra (R, G, B, NIR) are shown in the figures below:

Figure 3.1(a): Recognition Rates with Manhattan Classifier

Figure 3.1(b): Recognition Rates with Euclidean Classifier

Figure 3.1(c): Recognition Rates with Cosine Distance Classifier

Figures 3.1(a-c) show the recognition rates of the individual spectra, with (R, B, NIR) emerging as the best and the green spectrum having the lowest recognition for the training and test subjects. The red spectrum shows the best performance, followed by NIR; this could be due to additional information captured under that illumination, such as palm vein structures. NIR is in turn followed by blue and lastly green. Tables 1-3 show the comparison of the recognition rates obtained on fusing the three spectra (R, B, NIR) against (G, B, NIR); the former is found to yield good results with all three classifiers, and its fusion shows a more significant improvement than the latter (GBNIR).

Table 1: Recognition Rates for Manhattan Classifier (L1) for GBNIR and RBNIR

Table 2: Recognition Rates for Euclidean Classifier (L2) for GBNIR and RBNIR

Table 3: Recognition Rates for Cosine Distance Classifier for GBNIR and RBNIR

This could result from the lower correlation between such spectra, which in turn accounts for the better performance. The fusion of the three best bands is also seen to demonstrate an improvement when compared with some of the previous work reviewed, as in [5], [6]. A comparison of the fused spectra to the single spectra is given in Tables 4-6; the fused spectra also demonstrate a significant improvement in the recognition rates obtained with reference to the single spectra with each of the classifiers, though the performance with the Euclidean classifier is seen to fall below those obtained with the other classifiers on the fused spectra. This might have to do with its dependency on the smallest distance to the query point: in high-dimensional data (concatenated feature vectors) the ratio between the nearest and farthest points approaches unity, making points uniformly distant from each other. This makes clear distinction difficult.

Table 4: Recognition Rates for Manhattan Classifier (L1), Single Spectra (R, G, B, NIR) vs Combined Spectra (GBNIR/RBNIR)

Table 5: Recognition Rates for Cosine Distance Classifier, Single Spectra (R, G, B, NIR) vs Combined Spectra (GBNIR/RBNIR)

Table 6: Recognition Rates for Euclidean Classifier (L2), Single Spectra (R, G, B, NIR) vs Combined Spectra (GBNIR/RBNIR)


Such an effect is common to distance metric classifiers but is more pronounced with the Euclidean metric.
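
This concentration effect is easy to demonstrate numerically. The sketch below, with arbitrary point counts and dimensions, shows the ratio of the nearest to the farthest distance from a random query approaching unity as the dimension grows towards palm-vector sizes.

    % Sketch: nearest/farthest distance ratio rises towards 1 with dimension.
    n = 500;                                 % number of random points
    for d = [2, 256, 16384]                  % 16384 = one 128 x 128 palm vector
        P = rand(n, d);                      % n random points in [0,1]^d
        q = rand(1, d);                      % a random query point
        dist = sqrt(sum((P - repmat(q, n, 1)).^2, 2));
        fprintf('d = %5d: min/max distance ratio = %.3f\n', ...
                d, min(dist) / max(dist));
    end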
4. Conclusion

In this work, biometric fusion was carried out at image level for a set of four multispectral palm images (R, G, B, NIR); the fusion demonstrated a significant increase in recognition rates (99.95%) with a number of classifiers when compared with the reviewed works. Principal Component Analysis (PCA) was employed for feature extraction. Recognition is seen to increase with the number of training samples; this is quite typical of the feature extraction method employed (eigenfaces), which demonstrates better performance as the number of available feature vectors increases. It is also evident that the most discriminative information is captured under the three bands Red, Blue and NIR.

References

[1] A. Kong, D. Zhang and M. Kamel, "A Survey of Palmprint Recognition", Pattern Recognition, Vol. 42, pp. 1408-1418, 2009.

[2] Y. Hao, Z. Sun, T. Tan and C. Ren, "Multispectral Palm Image Fusion for Accurate Contact-Free Palmprint Recognition", 15th IEEE International Conference on Image Processing, pp. 281-284, October 2008.

[3] Z. Guo, L. Zhang and D. Zhang, "Feature Band Selection for Multispectral Palmprint Recognition", 20th International Conference on Pattern Recognition, pp. 1136-1139, August 2010.

[4] D. Zhang, Z. Guo, G. Lu, L. Zhang and W. Zuo, "An Online System of Multispectral Palmprint Verification", IEEE Transactions on Instrumentation and Measurement, Vol. 59, No. 2, February 2010.

[5] X. Xu, Z. Guo, C. Song and Y. Li, "Multispectral Palmprint Recognition Using a Quaternion Matrix", Sensors, Vol. 12, No. 4, pp. 4633-4647, 2012.

[6] N. Mittal, M. Hanmandlu, J. Grover and R. Vijay, "Rank-level Fusion of Multispectral Palmprints", International Journal of Computer Applications, Vol. 38, No. 2, January 2012.

[7] Y. Wang and Q. Ruan, "A New Preprocessing Method of Palmprint", Journal of Image and Graphics, Vol. 13, No. 6, pp. 1115-1122, 2008.

[8] R. K. Rowe, U. Uludag, M. Demirkus, S. Parthasaradhi and A. K. Jain, "A Multispectral Whole-Hand Biometric Authentication System", Proc. Biometric Symposium, Biometric Consortium Conference, Baltimore, MD, pp. 1-6, September 2007.

[9] L. Wang and G. Leedham, "Near- and Far-Infrared Imaging for Vein Pattern Biometrics", IEEE International Conference on Video and Signal Based Surveillance, pp. 52, November 2006.

[10] S. Ribaric and I. Fratric, "A Biometric Identification System Based on Eigenpalm and Eigenfinger Features", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 11, pp. 1698-1709, November 2005.

[11] G. Lu, D. Zhang and K. Wang, "Palmprint Recognition Using Eigenpalms Features", Pattern Recognition Letters, Vol. 24, pp. 1463-1467, 2003.

[12] A. Ross and A. K. Jain, "Information Fusion in Biometrics", Pattern Recognition Letters, Vol. 24, pp. 2115-2125, 2003.

[13] M. Turk and A. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, Vol. 3, No. 1, pp. 71-86, 1991.

[14] R. Raghavendra and C. Busch, "Novel Image Fusion Scheme Based on Dependency Measure for Robust Multispectral Palmprint Recognition", Pattern Recognition, Elsevier, 2014.

[15] D. Zhang, W. Zuo and F. Yue, "A Comparative Study of Palmprint Recognition Algorithms", ACM Computing Surveys (CSUR), 2012.

[16] Z. Liu, J. Pu, T. Huang and Y. Qiu, "A Novel Classification Method for Palmprint Recognition Based on Reconstruction Error and Normalized Distance", Applied Intelligence, Springer, 2013.

[17] C. C. Han, H. L. Cheng, C. L. Lin and K. C. Fan, "Personal Authentication Using Palmprint Features", Pattern Recognition, Vol. 36, No. 2, pp. 371-381, 2003.

[18] X. Shuang and J. Ding, "Palmprint Image Processing and Linear Discriminant Analysis Method", Journal of Multimedia, Vol. 7, No. 3, 2012.

[19] T. Connie, A. T. B. Jin and M. G. K. Ong, "An Automated Palmprint Recognition System", Image and Vision Computing, Vol. 23, No. 5, pp. 501-505, May 2005.

[20] S. A. C. Schuckers, "Spoofing and Anti-Spoofing Measures", Information Security Technical Report, Vol. 7, No. 4, pp. 56-62, December 2002.

Author Profile

Abubakar Sadiq Muhammad was born in Kano State, Nigeria, in 1985. He received his B.Eng degree in Computer Engineering from Bayero University, Kano, Nigeria, in 2010, and his M.Sc degree in the same field from Mevlana University, Konya, Turkey, in 2014. Currently, he is a Lecturer in the School of Technology, Kano State Polytechnic.

