
A Novel Method for Person Authentication using Retinal Images

L. Latha1, M. Pabitha2 and S. Thangasamy


Department of Computer Science & Engineering, Kumaraguru College of Technology
Coimbatore, India
1latha_kct@rediff.com and 2pabithasaran@gmail.com

Abstract

A new way of person authentication based on the retina pattern is presented in this paper. It consists of retina feature extraction, template generation and finally matching of the patterns. Our proposed method first involves a segmentation process to identify blood vessel intersection points in the retina, then the generation of a template consisting of the bifurcation points in the blood vessels, and finally the matching of the intersection points in different patterns. The number of matched blood vessel intersection points between two patterns is used to quantify the degree of matching. The validity of our approach is then verified with experimental results. We have also made a performance analysis using the DRIVE database and found that the proposed retina recognition method gives a 100% accuracy rate.

Keywords: Bifurcation points, Degree of matching, Feature extraction, Intersection points, Retina recognition.

1. Introduction

Biometric authentication is widely utilized as an inherently more convenient and reliable way to authenticate a user than the traditional knowledge-based or token-based approaches. The recent upswing in technology and the increasing concern about security have caused a boost in intelligent person authentication systems based on retina biometrics. It is a relatively new approach compared to other biometric features. Retina recognition technology captures and analyzes the patterns of blood vessels on the thin nerve layer at the back of the eyeball that processes light entering through the pupil. Retinal-based recognition for personal identification has further desirable properties such as uniqueness, stability, and noninvasiveness. The features extracted from the retina can distinguish even between genetically identical twins. Retinal patterns are highly distinctive traits.

Fig.1 Side view of the eye

The retina pattern is stable and reliable for identification, which makes retina recognition a prominent solution for security in the near future. Retina biometrics is considered to be among the best biometric performers. In this paper, retina feature extraction is discussed together with a method of retina pattern matching. Some interesting results indicating the reliability of the method are also presented.

2. Related Works

Previous methods to segment blood vessels fall into two categories: window-based
[3] and classifier-based [5]. Window-based methods, such as edge detection, estimate a match at each pixel for a given model against the pixels in the surrounding window. In [3], the cross section of a vessel in a retinal image was modeled by a Gaussian-shaped curve, and then detected using rotated matched filters. Classifier-based methods proceed in two steps. First, a low-level algorithm produces a segmentation of spatially connected regions. These candidate regions are then classified as being vessel or not vessel. In [5], regions segmented by a user-assisted threshold were classified as blood vessel or leakage according to their length-to-width ratio. In [6], regions segmented by the method in [3] were classified as vessel or not vessel according to many properties, including their response to a classic operator designed to detect roads in aerial imagery. The drawback of these methods is that the large-scale properties of vessels cannot be applied to the problem until after the low-level segmentation has already finished.

3. Retina Recognition

Robust representations for recognition must be invariant to changes in the size, position, and orientation of the patterns. In the case of retina recognition, this means creating a representation that is invariant to the optical size of the iris in the image, the size of the retina, the location of the retina within the image, and the retina orientation. Fig.2 shows a sample of retina recognition. Fortunately, invariance to all of these factors can readily be achieved, and hence this overcomes the drawbacks present in most other biometric authentication techniques. The blood vessel pattern of the retina does not change, or changes only rarely, during a person's life. The size of the actual retina template is only 96 bytes, which is very small by any standards. In turn, verification and identification processing times are much shorter than they are for larger templates. The rich, unique structure of the blood vessel pattern of the retina allows up to 400 data points to be created. Reliable automatic recognition of persons has long been an attractive goal. The blood vessel pattern of the retina rarely changes during a person's life (unless he or she is afflicted by an eye disease such as glaucoma or cataracts). As the retina is located inside the eye, it is not exposed to the threats posed by the external environment.

Fig.2 (a) Retinal image
Fig.2 (b) Retinal vascular tree
Fig.2 (c) Retina features

The proposed person authentication scheme contains three basic processes: feature extraction, template generation, and template matching.

3.1 Feature Extraction

Most of the previous works are based on reference core points of the retina [1]; however, their precise estimation remains a difficult problem, and errors in the reference core points can lead to false rejections.

To address this problem, we propose a minutiae-centered region encoding that avoids the reference core point determination and deals with noise. We use both the location and orientation attributes of a minutia, represented as a 3-tuple (x, y, θ).

First, we construct a circular region R of the same radius around each minutia. For each region, the center minutia is called the core minutia, and the others are named the neighbor minutiae. Each neighbor minutia is then converted into the polar coordinate system with respect to the corresponding core minutia and represented as a new 3-tuple (ρ, φ, θ), where ρ and φ indicate the radial distance and radial angle respectively, and θ represents the orientation of the neighbor minutia with respect to the core minutia, with φ, θ ∈ [1, 360]. An illustration is given in Fig. 4.

Fig.4 Illustration for minutiae-centered region encoding

Secondly, a tessellation quantification is carried out on each of the neighbor minutiae by tessellating the region of interest centered at the core minutia. The 3-tuple (ρ, φ, θ) in the polar coordinate system is then quantized into a rougher 3-tuple T = (b, a, o):

b = [ρ / db]
a = [φ / da]     (1)
o = [θ / do]

In equation (1), the parameter db indicates the bandwidth of the region tessellation, da is the tolerable difference of the radial angle under distortion, and do is the tolerable difference of the orientation of the neighbor minutia with respect to the core minutia. Suppose there are m neighbor minutiae in a region R; then R can be represented as a set of tuples T:

M = <T1, T2, ..., Tm>     (2)

where the set M is called a Minu Code.

Finally, suppose there are N minutiae in a retina; the original retina template is then the collection of Minu Codes {M1, M2, ..., MN}. This collection is the result of our feature extraction process.

Fig.5 Intersection points determined
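As an illustration of the encoding above, the following Python sketch builds a Minu Code for each minutia. This is a minimal, hypothetical rendering, not the authors' implementation: the region radius and the bin widths db, da, do are assumed values, and minutiae are plain (x, y, θ) tuples with angles in degrees.

```python
import math

def minu_code(minutiae, core_idx, radius=60.0, db=10.0, da=20.0, do=20.0):
    """Build the Minu Code M for one core minutia.

    minutiae: list of (x, y, theta) tuples, theta in degrees.
    Returns a set of quantized 3-tuples T = (b, a, o) as in equation (1).
    """
    cx, cy, ctheta = minutiae[core_idx]
    code = set()
    for i, (x, y, theta) in enumerate(minutiae):
        if i == core_idx:
            continue
        rho = math.hypot(x - cx, y - cy)           # radial distance
        if rho > radius:                           # keep only neighbours inside region R
            continue
        phi = math.degrees(math.atan2(y - cy, x - cx)) % 360  # radial angle
        orient = (theta - ctheta) % 360            # orientation w.r.t. core minutia
        # tessellation quantification, eq. (1): b = [rho/db], a = [phi/da], o = [orient/do]
        code.add((int(rho // db), int(phi // da), int(orient // do)))
    return code

def retina_template(minutiae, **kw):
    # the template is the collection of Minu Codes {M1, ..., MN}
    return [minu_code(minutiae, i, **kw) for i in range(len(minutiae))]
```

Encoding each neighbour relative to its own core minutia removes any dependence on absolute image coordinates, which is what makes the per-region codes comparable across images.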


3.2 Template Generation

After finding the blood vessel intersection points of the input retina image, the background blood vessel skeleton is removed, keeping only the intersection points. These intersection points collectively generate a template. Each template contains the number of intersection points and the coordinates of each intersection point. A template is shown in Fig. 6.

Fig. 6 A template of Fig.5

3.3 Template Matching

For matching, each template is divided into sub-regions, each containing some intersection points. The degree of matching of different templates is measured by the closeness of the intersection points between the templates. The intersection points in two different templates of the same person can have some translational and rotational displacements. We subdivide the total template into 8 x 8 regions of the same size. The following algorithm performs the matching between two templates, where (Temp1, Temp2) are the templates being matched and (S1, S2) are corresponding sub-regions:

1. Total matched = 0
2. For each sub-region S1 in Temp1 and corresponding sub-region S2 in Temp2, do steps 3 to 8
3. Matched = 0
4. For each intersection point I1 in S1, do steps 5 to 7
5. Find the intersection point I2, in S2 and the 8 neighbouring sub-regions of S2, which has minimum distance Dmin to I1
6. If Dmin <= Dth and I2 is not already matched, then increment Matched by 1
7. Mark I2 as matched
8. Total matched = Total matched + Matched
9. Calculate the percentage of intersection point matching by the following equation:
   P Match = (2 * Total matched / (P1 + P2)) * 100
   where P1 is the total number of intersection points in Temp1 and P2 is the total number of intersection points in Temp2
10. Return P Match

Finally, we take the degree of matching as:

Degree of Matching = max{Template Matching(Temp1, Temp2), Template Matching(Temp2, Temp1)}

4. Experimental Results

The proposed method is applied to a dataset [2] containing retina images of 40 different persons, with 3 samples per person. The experiments are performed in Matlab 7.0. GAR (Genuine Acceptance Rate), FAR (False Acceptance Rate) and FRR (False Rejection Rate) are calculated to evaluate the performance of the system. The database is tested at different threshold values to calculate the GAR, FAR and FRR.

To get the GAR, each person's extracted features are compared with the other image instances of the same person, as there are 3 images per person. If, in any of these comparisons, the match score is less than the fixed threshold, a genuine person is not accepted, i.e. a false rejection; in this way the FRR is obtained. To get the FAR, each person's features are compared with the features of the other 39 persons, as there are 40 persons in the database. If, in any of these comparisons, the match score is more than the fixed threshold, a false person is accepted. All such comparisons are made on the database to compute the GAR, FRR and FAR at a fixed threshold. The experiment is then repeated for different thresholds.
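The matching procedure of Section 3.3 can be rendered in Python as follows. This is a simplified sketch under assumed data structures: each template is a dict mapping an 8 x 8 sub-region index (row, col) to its list of (x, y) intersection points, and the distance threshold Dth is a free parameter.

```python
import math

def template_matching(temp1, temp2, dth=5.0, grid=8):
    """Percentage of matched intersection points (P Match) between two templates.

    temp1, temp2: dict {(row, col): [(x, y), ...]} over an 8 x 8 tessellation.
    """
    p1 = sum(len(v) for v in temp1.values())
    p2 = sum(len(v) for v in temp2.values())
    if p1 + p2 == 0:
        return 0.0
    used = set()                # points of temp2 already matched (step 7)
    total_matched = 0
    for (r, c), points in temp1.items():          # steps 2-8
        # candidate points: S2 plus its 8 neighbouring sub-regions (step 5)
        cand = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < grid and 0 <= cc < grid:
                    for p in temp2.get((rr, cc), []):
                        cand.append(((rr, cc), p))
        for i1 in points:                          # steps 4-7
            best, dmin = None, float("inf")
            for kp in cand:
                p = kp[1]
                d = math.hypot(i1[0] - p[0], i1[1] - p[1])
                if d < dmin:
                    best, dmin = kp, d
            if best is not None and dmin <= dth and best not in used:  # step 6
                used.add(best)                     # step 7
                total_matched += 1
    return 2.0 * total_matched / (p1 + p2) * 100.0  # step 9: P Match

def degree_of_matching(temp1, temp2, **kw):
    # Degree of Matching = max of the two matching directions
    return max(template_matching(temp1, temp2, **kw),
               template_matching(temp2, temp1, **kw))
```

Taking the maximum over both matching directions compensates for the asymmetry of the per-point nearest-neighbour search when the two templates contain different numbers of intersection points.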

Table 1. GAR, FRR and FAR at different thresholds

Threshold   GAR     FRR     FAR
20          100%    0%      87.5%
25          100%    0%      62.5%
30          100%    0%      38.8%
35          100%    0%      26%
40          100%    0%      4%
45          100%    0%      0%
50          100%    0%      0%
70          100%    0%      0%
75          95%     5%      0%
80          51%     48.7%   0%
85          7.5%    92.5%   0%
90          0%      100%    0%

Fig 8. ROC curve between FAR & GAR
Fig 9. ROC curve between FAR & FRR
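The threshold sweep used to fill the table can be sketched as follows. This is an illustrative helper, not the authors' code: genuine_scores and impostor_scores stand for P Match values from same-person and cross-person comparisons respectively.

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """GAR, FRR, FAR (in percent) at a fixed match-score threshold.

    A genuine comparison scoring below the threshold is a false rejection;
    an impostor comparison scoring at or above it is a false acceptance.
    """
    fr = sum(1 for s in genuine_scores if s < threshold)
    fa = sum(1 for s in impostor_scores if s >= threshold)
    frr = 100.0 * fr / len(genuine_scores)
    far = 100.0 * fa / len(impostor_scores)
    gar = 100.0 - frr
    return gar, frr, far

def sweep(genuine_scores, impostor_scores, thresholds):
    # evaluate the system at each threshold, as in Table 1
    return {t: error_rates(genuine_scores, impostor_scores, t)
            for t in thresholds}
```

Raising the threshold trades false acceptances for false rejections, which is exactly the transition visible in the table between thresholds 70 and 75.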

Fig 7. Inter and intra class variations

5. Conclusion

We have implemented a new way of person authentication based on retina pattern recognition. This method involves vessel segmentation, generation of a template consisting of the bifurcation points, and matching of the intersection points. The number of matched intersection points is used to quantify the degree of matching. We have made a performance analysis using the DRIVE database and found that the proposed retina recognition method gives a 100% accuracy rate. As future work, we wish to provide retinal template security by using noninvertibility and discriminability constructions.

6. References

[1] S.M. Raiyan Kabir, Rezwanur Rahman, Mursalin Habib and M. Rezwan Khan, "Person Identification by Retina Pattern Matching", ICECE, pp. 522-525, December 2004.
[2] DRIVE, Digital Retinal Images for Vessel Extraction, 2007.
[3] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson and M. Goldbaum, "Detection of Blood Vessels in Retinal Images Using Two-Dimensional Matched Filters", IEEE Trans. on Medical Imaging, vol. 8, no. 3, pp. 263-269, September 1989.
[4] STARE, Structured Analysis of the Retina, 2007.
[5] M.E. Martinez-Perez, A.D. Hughes, A.V. Stanton, S.A. Thom, A.A. Bharath and K.H. Parker, "Retinal Blood Vessel Segmentation by Means of Scale-Space Analysis and Region Growing", Lecture Notes in Computer Science, Medical Image Computing and Computer-Assisted Intervention (MICCAI'99), vol. 1679, pp. 90-97, Cambridge, England, 19-22 September 1999.
[6] S. Tamura, K. Tanaka, S. Ohmori, K. Okazaki, A. Okada and M. Hoshi, "Semiautomatic Leakage Analyzing System for Time Series Fluorescein Ocular Fundus Angiography", Pattern Recognition, vol. 16, no. 2, pp. 149-162, 1983.
[7] B. Cote, W. Hart, M. Goldbaum, P. Kube and M. Nelson, "Classification of Blood Vessels in Ocular Fundus Images", technical report, Computer Science and Engineering Dept., University of California, San Diego, 1994.
[8] Sameh A. Salem, Nancy M. Salem and Asoke K. Nandi, "Segmentation of Retinal Blood Vessels Using a Novel Clustering Algorithm", European Signal Processing Conference (EUSIPCO 2006), 2006.
[9] R.C. Gonzalez and R.E. Woods, Digital Image Processing, 2nd edition, Pearson Education Inc., 2005.
[10] S. Nanavati, M. Thieme and R. Nanavati, Biometrics: Identity Verification in a Networked World, John Wiley & Sons, Inc., 2002.
[11] C. Simon and I. Goldstein, "A New Scientific Method of Identification", New York State Journal of Medicine, vol. 35, no. 18, pp. 901-906, September 1935.
[12] H. Jafariani, "Retinal Image Based Recognition", 11th Bioelectric Engineering Conference, AmirKabir University, Iran, 2003.
[13] Zhi-Xu et al., "The Blood Vessel Recognition of Ocular Fundus", IEEE Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, August 2005.
[14] M. Tanaka and K. Tanaka, "An Automatic Technique for Fundus-Photograph Mosaic and Vascular Net Reconstruction", in MEDINFO '80, North-Holland, Amsterdam, The Netherlands, pp. 116-120, 1980.

L. Latha is currently working as an Assistant Professor in the CSE department at KCT, Coimbatore, India. She received her post-graduate qualification in Applied Electronics in the year 1996 from CIT, Coimbatore. She is now pursuing her research in the area of biometric authentication. She has more than 12 years of teaching experience. Her current research interests include security in computing and digital image processing. She is a member of ISTE and CSI.

M. Pabitha received the B.E. degree in Computer Science from Anna University, Chennai, India in 2008. Currently, she is a student of M.E. CSE at KCT, Coimbatore, India. Her current research interests include multimodal biometrics, image processing and template security.

Dr. S. Thangasamy is currently the Dean of the Department of CSE at KCT, Coimbatore, India. He received his PhD from I.I.T., Mumbai, India in the year 1983. He has worked at the Bhabha Atomic Research Centre, Mumbai, and at the University of Twente, Enschede, Netherlands. His areas of interest include real-time computer systems, software engineering, computer graphics and multimedia applications.
