ISSN: 2456-9992
Abstract: This paper presents a simple method for verification of palm images. Fusion of palmprint instances is performed at image level. To capture the palm characteristics, the fused images are concatenated to form a longer feature vector whose dimension is reduced by Principal Component Analysis (PCA). Finally, the reduced set of features is matched with distance metric classifiers (Manhattan, Euclidean and Cosine distance) to accomplish the recognition task. For evaluation, the PolyU Multispectral Palmprint database is used. The experimental results reveal that the three bands R, B and NIR contain most of the salient and discriminative features for building an accurate biometric system, with which a recognition rate of 99.99% can be achieved.

Keywords: Palmprint Verification, Multispectral Image Fusion, Eigenpalm, Distance Metric Classifiers.
Preprocessing of the palmprint requires defining a coordinate system based on which different palm images are aligned for matching and verification. Most preprocessing algorithms identify the key points between the fingers to set up this coordinate system. The following algorithm is used to extract the central part of the palmprint image as the region of interest (ROI); this ROI is then used for multispectral fusion of the palm images:

Step 1: Convert the multispectral image to a binary image. A smoothing filter can be used to enhance the image.

Step 2: Apply a boundary-tracking algorithm to obtain the boundaries of the gaps between the fingers.

Step 3: Determine the palmprint coordinate system by computing the tangent of the two gaps with any two points on these gaps. The y-axis is the line joining these two points. The origin of the coordinate system is the midpoint of the two points, through which a line passes perpendicular to the y-axis.

Step 4: Finally, extract the ROI, the central part of the palmprint, for feature extraction.

Figure 2.1(a): Preprocessing Steps

Figure 2.1(b): ROI under four illuminations (R, G, B, NIR)
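Steps 3 and 4 above can be sketched in code. The following is a minimal NumPy illustration, not the paper's implementation: it assumes the two finger-gap key points have already been located by the boundary tracking of Step 2, and the function name, the fixed margin into the palm, and nearest-neighbour sampling are all assumptions made for the sketch.

```python
import numpy as np

def extract_roi(img, gap1, gap2, roi_size=128):
    """Crop a palm-aligned ROI (Steps 3-4). gap1/gap2 are the (row, col)
    key points of the two finger gaps, assumed found in Step 2."""
    gap1 = np.asarray(gap1, dtype=float)
    gap2 = np.asarray(gap2, dtype=float)
    # The y-axis joins the two gap points; the origin is their midpoint.
    origin = (gap1 + gap2) / 2.0
    y_axis = (gap2 - gap1) / np.linalg.norm(gap2 - gap1)
    # The x-axis passes through the origin, perpendicular to the y-axis.
    x_axis = np.array([-y_axis[1], y_axis[0]])
    half = roi_size // 2
    # Offset the ROI centre along the x-axis into the palm (margin assumed).
    centre = origin + x_axis * (half + 10)
    # Sample a roi_size x roi_size grid aligned to the palm coordinate system.
    rr, cc = np.meshgrid(np.arange(-half, half), np.arange(-half, half),
                         indexing="ij")
    rows = np.rint(centre[0] + rr * y_axis[0] + cc * x_axis[0]).astype(int)
    cols = np.rint(centre[1] + rr * y_axis[1] + cc * x_axis[1]).astype(int)
    rows = np.clip(rows, 0, img.shape[0] - 1)
    cols = np.clip(cols, 0, img.shape[1] - 1)
    return img[rows, cols]
```

Because the grid is expressed in the gap-point coordinate system, the crop rotates with the palm, which is what makes the subsequent matching alignment-free.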
2.1. Multispectral Image Fusion

The multispectral ROI palm images obtained under the different illumination conditions (Red, Green, Blue and NIR bands) are fused together to form a single feature vector. The fusion extracts information from each source image to obtain an effective representation in the final fused image; the objective of the fusion technique is to preserve the detailed information found in all the source ROI images.

A typical image can be described by a two-dimensional array (N x N). The eigenspace method defines a palmprint image as a vector of length N^2 x 1 called a "palm vector". A sub-palmprint image of fixed size 128 x 128 obtained after preprocessing thus gives a vector representing a single point in a 16,384-dimensional space. Because palmprints have similar structures, usually three main lines and creases, all palm vectors are located in a narrow image space bounded by the number of images in the training set.

Let x_1, x_2, ..., x_M be the training images, where M is the number of images in the training set. The average palmprint image of the training set is defined by

μ = (1/M) Σ_{i=1}^{M} x_i    (1)

The difference between each palmprint image and the average image is given by Φ_i = x_i − μ, and the covariance matrix of the x_i is as follows:

C = (1/M) Σ_{i=1}^{M} Φ_i Φ_i^T = (1/M) X X^T    (2)

where X = [Φ_1 Φ_2 ... Φ_M] is the matrix of concatenated palm image vectors. The covariance matrix C is of dimension N^2 x N^2. The PCA transform has been found to be the best approximation of an original process in the sense that it minimizes the total mean-square error, provided the eigenvalues are arranged in decreasing order, thereby allowing the eigenvectors of C to span an algebraic eigenspace and provide an optimal approximation of the training samples in terms of the mean-square error [10].

Determining the eigenvalues and eigenvectors of the matrix C at a typical image size (N^2 x N^2) is cumbersome and time-consuming, so an efficient method is needed. It is well known that every eigenpair of a matrix A satisfies

A u_k = λ_k u_k    (3)

where u_k and λ_k are the eigenvectors and eigenvalues of A. Hence for the matrix C we can write

C u_k = λ_k u_k    (4)

with u_k and λ_k the eigenvectors and eigenvalues of C. In practice, however, the number of training samples is relatively small, so it is easier to calculate the eigenvectors v_k and eigenvalues a_k of the matrix L = X^T X, whose size is bounded by the number of training samples:

X^T X v_k = a_k v_k    (5)

Multiplying both sides of equation (5) by X, we have

X X^T X v_k = a_k X v_k    (6)

Comparing eqn. (4) with eqn. (6), the eigenvectors of matrix C are obtained, since

u_k = X v_k    (7)

However, it is not necessary to choose all the eigenvectors as basis vectors, but only those with the largest associated eigenvalues, as they give the required representation of the training set [13]. The significant eigenvectors u'_k with the largest associated eigenvalues are selected as the components of the eigenpalms, which span an M'-dimensional subspace of all possible palmprint images. A new palmprint image is then transformed into its corresponding eigenpalm features by the following operation:

f_i = U'^T x_i,  i = 1, ..., M    (9)

U' = [u'_1 u'_2 ... u'_{M'}],  k = 1, ..., M'    (10)
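The derivation in Eqs. (1)-(10) can be sketched as follows. This is a minimal NumPy illustration, not the paper's Matlab code: it assumes each training sample is the four band ROIs flattened and concatenated into one fused vector, and it subtracts the mean before the projection of Eq. (9), as is usual for eigenface-style methods.

```python
import numpy as np

def train_eigenpalms(samples, n_components):
    """samples: (M, d) array, one fused palm vector per row.
    Uses the small-matrix trick of Eqs. (5)-(7): eigendecompose
    L = X^T X (M x M) instead of C = X X^T / M (d x d)."""
    X = np.asarray(samples, dtype=float)
    mean = X.mean(axis=0)                 # Eq. (1)
    Phi = (X - mean).T                    # d x M matrix of differences
    L = Phi.T @ Phi                       # M x M, Eq. (5)
    vals, V = np.linalg.eigh(L)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    U = Phi @ V[:, order]                 # u_k = X v_k, Eq. (7)
    U /= np.linalg.norm(U, axis=0)        # normalise the eigenpalms
    return mean, U

def project(x, mean, U):
    """Eq. (9): eigenpalm feature vector f = U'^T (x - mean)."""
    return U.T @ (np.asarray(x, dtype=float) - mean)
```

The eigenvectors of the symmetric M x M matrix L are mutually orthogonal, so the mapped eigenpalms u_k = X v_k remain orthogonal after normalisation, which is what makes the reduced features a proper basis expansion.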
3. Simulations and Result Discussion

The simulation for this work was implemented using Matlab v7.14 (R2012a). The experiments used images of 200 persons, which is equivalent to 200 classes since each palm differs from the others. Each person has a total of twelve (12) images, from which training images are chosen randomly, beginning with 2 and ranging through 3, 4, ..., 10, with the remainder used for authentication. The process was repeated ten (10) times and the average recognition rate taken. The recognition rates obtained from the different classifiers for the single spectra (R, G, B, NIR) are shown in the figures below; among the single bands, Green yields the lowest rates. Tables 1-3 show the comparison of recognition rates obtained on fusion among three spectra, (R, B, NIR) and (G, B, NIR); the former is found to yield good results with all three classifiers, and its fusion shows a significantly greater improvement than the latter (GBNIR).

Table 1: Recognition Rates for Manhattan Classifier (L1) for GBNIR and RBNIR

The fused spectra show an improvement over the single spectra with each of the classifiers, though the performance with Euclidean is seen to be below that of the other classifiers on the fused spectra. This might have to do with its dependency on the smallest distance to the query point: in high-dimensional data (concatenated feature vectors), the ratio between the nearest and farthest points approaches unity, making points uniformly distant from each other and clear distinction difficult. Such an effect is common to distance metric classifiers but is more pronounced with Euclidean.
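The three distance metric classifiers used in the experiments amount to nearest-neighbour matching in the eigenpalm feature space. The following is a minimal NumPy sketch; the function name and the 1 − cosine-similarity form of the cosine distance are assumptions of the sketch.

```python
import numpy as np

def classify(query, gallery, labels, metric="euclidean"):
    """Nearest-neighbour matching of a query eigenpalm feature vector
    against a gallery of enrolled feature vectors."""
    q = np.asarray(query, dtype=float)
    G = np.asarray(gallery, dtype=float)
    if metric == "manhattan":            # L1 distance
        d = np.abs(G - q).sum(axis=1)
    elif metric == "euclidean":          # L2 distance
        d = np.sqrt(((G - q) ** 2).sum(axis=1))
    elif metric == "cosine":             # 1 - cosine similarity
        d = 1.0 - (G @ q) / (np.linalg.norm(G, axis=1) * np.linalg.norm(q))
    else:
        raise ValueError(f"unknown metric: {metric}")
    return labels[int(np.argmin(d))]     # label of the closest enrolment
```

For verification rather than identification, the same distances would simply be thresholded against the claimed identity's enrolled vectors instead of taking the global minimum.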
4. Conclusion

In this work, biometric fusion was carried out at image level for four sets of multispectral palm images (R, G, B, NIR). The fusion demonstrated a significant increase in recognition rates (99.95%) with a number of classifiers when compared with reviewed works. Principal Component Analysis (PCA) was employed for feature extraction; it demonstrates an increase in recognition rate as the number of training samples grows, which is typical of the eigenface-style method employed, since it performs better with more available feature vectors. It is also evident that most of the discriminative information is captured under three bands: Red, Blue and NIR.

References

[2] Y. Hao, Z. Sun, T. Tan and C. Ren, "Multispectral Palm Image Fusion for Accurate Contact-Free Palmprint Recognition", 15th IEEE International Conference on Instrumentation and Measurement, pp. 281-284, October 2008.

[3] Z. Guo, L. Zhang and D. Zhang, "Feature Band Selection for Multispectral Palmprint Recognition", 20th International Conference on Pattern Recognition, pp. 1136-1139, August 2010.

[4] D. Zhang, Z. Guo, G. Lu, L. Zhang and W. Zuo, "An Online System of Multispectral Palmprint Verification", IEEE Transactions on Instrumentation and Measurement, Vol. 59, No. 2, February 2010.

[5] X. Xu, Z. Guo, C. Song and Y. Li, "Multispectral Palmprint Recognition Using a Quaternion Matrix", Sensors, Vol. 12, No. 4, pp. 4633-4647, 2012.

[6] N. Mittal, M. Hanmandlu, J. Grover and R. Vijay, "Rank-level Fusion of Multispectral Palmprints", International Journal of Computer Applications, Vol. 38, No. 2, January 2012.

[7] Y. Wang and Q. Ruan, "A New Preprocessing Method of Palmprint", Journal of Image and Graphics, Vol. 13, No. 6, pp. 1115-1122, 2008.

[8] R. K. Rowe, U. Uludag, M. Demirkus, S. Parthasaradhi and A. K. Jain, "A Multispectral Whole-Hand Biometric Authentication System", in Proc. Biometric Symposium, Biometric Consortium Conference, Baltimore, MD, pp. 1-6, September 2007.

[9] W. Lingyu and G. Leedham, "Near- and Far-Infrared Imaging for Vein Pattern Biometrics", IEEE International Conference on Video and Signal Based Surveillance, p. 52, November 2006.

[10] S. Ribaric and I. Fratric, "A Biometric Identification System Based on Eigenpalm and Eigenfinger Features", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 11, pp. 1698-1709, November 2005.

[11] G. Lu, D. Zhang and K. Wang, "Palmprint Recognition Using Eigenpalms Features", Pattern Recognition Letters, Vol. 24, pp. 1463-1467, 2003.

[12] A. Ross and A. K. Jain, "Information Fusion in Biometrics", Pattern Recognition Letters, Vol. 24, pp. 2115-2125, 2003.

[13] M. Turk and A. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, Vol. 3, No. 1, pp. 71-86, 1991.

[15] D. Zhang, W. Zuo and F. Yue, "A Comparative Study of Palmprint Recognition Algorithms", ACM Computing Surveys (CSUR), 2012.

[16] Z. Liu, J. Pu, T. Huang and Y. Qiu, "A Novel Classification Method for Palmprint Recognition Based on Reconstruction Error and Normalized Distance", Applied Intelligence, Springer, 2013.

[17] C. C. Han, H. L. Cheng, C. L. Lin and K. C. Fan, "Personal Authentication Using Palmprint Features", Pattern Recognition, Vol. 36, No. 2, pp. 371-381, 2003.

[18] X. Shuang and J. Ding, "Palmprint Image Processing and Linear Discriminant Analysis Method", Journal of Multimedia, Vol. 7, No. 3, 2012.

[19] T. Connie, A. T. B. Jin and M. G. K. Ong, "An Automated Palmprint Recognition System", Image and Vision Computing, Vol. 23, No. 5, pp. 501-505, May 2005.

[20] S. A. C. Schuckers, "Spoofing and Anti-Spoofing Measures", Information Security Technical Report, Vol. 7, No. 4, pp. 56-62, December 2002.

Author Profile

Abubakar Sadiq Muhammad was born in Kano State, Nigeria in 1985. He received his B.Eng degree in Computer Engineering from Bayero University, Kano, Nigeria in 2010, and his M.Sc degree in the same field from Mevlana University, Konya, Turkey in 2014. Currently, he is a Lecturer in the School of Technology, Kano State Polytechnic.