
AUTOMATIC SEGMENTATION OF IRIS IMAGES FOR THE PURPOSE OF

IDENTIFICATION

Amjad Zaim

InfoMD Consultancy Group


Toledo, Ohio USA
www.infomd.org
amjad@infomd.org

ABSTRACT

Automatic recognition of the human iris is essential for the purpose of personal identification and verification from eye images. The human iris is known to possess structures that are distinct and unique to each individual. Accurate classification, however, depends on proper segmentation of the iris and the pupil. In this paper, we present a new method for automatically localizing and segmenting the iris with no operator intervention. Circular region growing is first used to localize the eye's centroid. We then utilize several geometrical features of the eye to constrain a model built in the polar image space. The model employs knowledge of anatomical attributes as well as gradient information to extract the iris boundaries. Applying this method to 352 images revealed 92% segmentation accuracy. The algorithm has been shown to be effective at various levels of illumination and for images with a large field of view containing other facial features.

1. INTRODUCTION

The problem of automatic identification and verification of individuals to restrict access to secured sites has been tackled by a variety of biometrically based approaches, from fingerprints, voice and handwritten signatures to facial features. The iris has recently been recognized as a highly distinct feature that is unique to each individual [1,2]. It is composed of several layers, which give it its unique appearance. This uniqueness is visually apparent when looking at its rich and small details seen in high-resolution camera images under proper focus and illumination. The iris is the ring-shaped structure that encircles the pupil, the dark central portion of the eye, and stretches radially to the sclera, the white portion of the eye (Figure 1; left). Ideally, it shares high-contrast boundaries with the pupil but lower-contrast boundaries with the sclera. Diseases of the eye and exposure to certain environmental conditions, however, can reverse this effect and drastically alter its appearance, although these conditions are rare [3]. Reactions to different levels of illumination have been shown to produce small changes in the diameter of the pupil but do not result in severe distortion of the iris. All these factors make the iris a potential candidate for personal identification.

An identification system typically consists of 1) image data acquisition, 2) iris localization and segmentation, and 3) pattern matching [1,4]. The design of an image-capturing system requires that sufficient resolution is maintained in order to capture the small details of the iris, typically 1 cm in diameter, with a level of illumination high enough to provide good contrast in its interior portion without causing discomfort and irritation to the individual's eye. An 80-mm lens, f-stop 11 and a 1 cm depth of field have been reported to produce reasonable-quality images [5].

Iris localization and segmentation of camera images, however, poses a significant challenge for several reasons. The eyelids can obscure substantial portions of the eye. The shape and dimensions of the eye vary from one subject to another and depend on its position in relation to the sensor. In addition, low-level illumination and certain diseases of the eye can greatly degrade the contrast of the pupil-iris boundaries. Previous studies have made use of the geometrical nature of the eye. The eye has been modeled with a circular contour fitted through gradient ascent to search for the global minimum using a deformable template [5]. An approach based on an analysis of the image gradient pattern corresponding to an eye, including motion information, has also been presented [6]. The pupil/iris boundaries have also been detected and localized via edge detection, and a Hough transform was used to

0-7803-9134-9/05/$20.00 ©2005 IEEE


capture the contours of interest [7]. Other methods have included circular symmetric filters and principal component analysis for feature extraction [8-10]. We developed an accurate model-based segmentation scheme that is tolerant of low illumination levels and wider-field images, and is guarded against obstructive features such as eyelids. The system has shown good performance when applied to 352 images, with 92% accuracy.

2. IRIS SEGMENTATION

Our system models the iris as a ring-shaped object concentric around the disk-shaped pupil. Gray-scale camera images of 280x320 pixels containing one eye are processed in the following fashion. First, occluding eyelashes are removed by morphological closing. Next, the center of the pupil is localized by first applying the Split and Merge algorithm to detect connected regions of uniform intensity and then growing a circular template to distinguish the pupil from other potential circular or semi-circular objects. A model-based algorithm is then applied in the polar-sampled space, and an edge-linking scheme is used to detect the horizontally mapped boundaries of the iris. The algorithm returns a set of model parameters including the radii of the circular contours of the pupil and the iris as well as the centroid they share. The steps are discussed in more detail in the next sections.

2.1. Morphological Closing

Eyelashes can severely occlude the iris and interfere with the segmentation process. The eyelashes can be described in the image as the dark, long and narrow structures that are oriented randomly around the eye (Figure 1; left). Morphological closing is a well-known filter frequently used in image processing to fuse narrow breaks and long gulfs in binary and gray-scale images [8,9]. The closing of an image A by a structuring element B can be simply defined as the dilation of A by B followed by erosion of the result by B, or:

A • B = (A ⊕ B) ⊖ B    (1)

The initial dilation process removes the dark details and brightens the image. The subsequent erosion process darkens the image without reintroducing the details removed by dilation. We used a flat linear structuring element of 15-pixel length. The closing process was applied iteratively while the linear structuring element was incrementally rotated at a 5-degree interval to account for the random orientation of the eyelashes. The closed image contained smoother features with smeared eyelashes (Figure 1; right).

2.2. Centroid Localization

Given the circular nature of the iris and the pupil, the first step in segmentation after eyelash removal begins with localizing the centroid shared by both the iris and the pupil. The darkness of the pupil stems from its absorption of light and, in some cases, diseases or improper illumination may reverse this effect. We first apply the Split and Merge algorithm to detect connected regions of uniform intensity [11,12]. A criterion for disjoining a region Rm is that 80% of the pixels contained in Rm have the property:

| I(x,y) − µ(m) | ≤ σ(m)    (2)

where I(x,y) is the intensity level, and µ(m) and σ(m) are the gray-level mean and standard deviation in Rm, respectively. The shape of each extracted region is classified as a circle by growing a disk-shaped template centered at the first moment of the region. The template that produces the maximum normalized energy is identified as the one that holds the centroid of the pupil. The centroid is also moved a few pixels upward, downward and sideways until an optimal centroid is reached.

2.3. Cartesian-to-Polar Mapping

We have used Cartesian-to-polar mapping in the past to reconstruct rectangular-shaped ultrasound images from fan-shaped images collected around a point source [13,14]. Samples were obtained from rays radiating outward from a central point. Resampling the Cartesian image space (x,y) into the polar image space (r,θ) is done according to:

r = [(x − xo)² + (y − yo)²]^(1/2)    (3)

θ = arctan[(y − yo)/(x − xo)]    (4)

For each grid point of the destination image, the polar coordinates are computed with respect to a centroid (xo,yo) and its grayscale value is interpolated from its nearest neighbors in the source image. Ideally, a circle centered on the point (xo,yo) is mapped onto a line stretching along the angular range (0-2π). For example, the pupil and iris boundary contours are mapped to near-horizontal edges, with linearity depending on how concentric the contours are with respect to the centroid. Although this is not a one-to-one transformation, most of the image content is recovered in the polar space except toward the periphery, where significant loss of resolution occurs as a result of interpolation. We used a small angle
of 0.5° for the sampling interval to minimize this effect, which becomes more prominent at some distance away from the center. In general, objects in the resulting transformed image appear much like flexible bent curved objects that were unfolded or stretched into their flat form (Figure 2; left).

2.4. Gradient-Based Edge Detection and Linking

Our model exploits a few observations that can be made from the polar-sampled image. First, the intensity distribution of the eye changes as one crosses from pupil to iris and from iris to sclera. Hence, taking the radial derivative of the image intensity in the direction of r reveals two sets of boundaries. Second, the two boundaries are horizontal or near horizontal and can be detected using the local maximum of the image gradient. We also utilize the anatomical criterion of the eye that the iris-pupil radii ratio is neither greater than 4.0 nor less than 1.75 [3]. This last observation is used as a constraint to prevent the iris boundary from crossing over to the sclera region or toward the pupil in cases of severe noise or light reflection. The following steps summarize our edge detection scheme:

1) For horizontal edges, the gradient of the image has the following properties:

∇Gy = max, ∇Gx = 0    (5)

We extract horizontal edges by obtaining the first gradient in the θ- and r-directions using the following Sobel masks:

S_horizontal = [ 1 2 1; 0 0 0; −1 −2 −1 ]    S_vertical = [ 1 0 −1; 2 0 −2; 1 0 −1 ]    (6)

A typical gradient image in the r-direction includes strong edges at high-contrast, dark-to-bright zones such as the pupil-iris interface; weaker and wider edges at areas of lower contrast along the iris-sclera interface; and other scattered edges (Figure 2; right). The gradient magnitude in the θ-direction is zero along horizontal edges.

2) We search for the edges that satisfy (5) by keeping the points whose gradient magnitude is a local maximum in the r-direction and zero in the θ-direction. For a central pixel P, the two neighbors in the direction closest to the direction of the gradient gp are checked, and if gp is largest, it is incremented by half its magnitude while the others are eliminated [12]. The process is iterated, and the result is an edge map with only one best point across a given border at any point along the θ-axis.

3) Build 8-connected edge chains using forward and backward neighbors along θ. The horizontal direction, with a small tolerance, is chosen as the criterion direction consistent throughout the edges.

4) Merge horizontally connected edge chains by fitting line segments to their edge points. Only the two sets of line segments that differ by less than 3° are kept.

5) The distance between the two lines that correspond to the highest number of segments is tested against the two conditions (riris/rpupil < 4.0) AND (riris/rpupil > 1.75). If either condition fails, the pair is ignored and the next possible segment sets are considered.

The result of the above algorithm is a map of two line segments overlaying the edges of the iris and the pupil (Figure 3). The average vertical distances to the first and second lines of edges are taken to be rpupil and riris, respectively. When these parameters, along with the centroid (xo,yo), are mapped back to the original iris image, the resulting contours can be seen to accurately overlay the pupil and the iris (Figure 4).

Figure 1: Original camera image of the eye showing the iris (left). The iris image after morphological closing with eyelashes removed (right).

Figure 2: Result of polar mapping of the lower half of the original image with pupil and iris mapped into horizontal structures (left; the vertical axis is r and the horizontal axis is θ). Gradient image produced by the horizontal Sobel operator (right).

Figure 3: A map of line segments masking the iris (riris) and pupil (rpupil) edges.
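The polar resampling of Section 2.3 and the radial edge search above can be sketched together in a short NumPy example. This is a minimal illustration on a synthetic eye image, not the paper's implementation: it uses nearest-neighbor interpolation, the horizontal Sobel mask of Eq. (6), and a simple per-band strongest-edge search in place of the iterative non-maximum suppression and chain linking of steps 2-4; the image sizes, gray levels, and the band split at row 30 are all assumptions made for the demonstration.

```python
import numpy as np

# Horizontal Sobel mask of Eq. (6): responds to edges that run across r (rows).
S_HORIZONTAL = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])

def cartesian_to_polar(image, xo, yo, r_max, n_theta=720):
    """Resample a grayscale image into polar space (r, theta) around (xo, yo),
    per Eqs. (3)-(4); a circle concentric with (xo, yo) maps to a horizontal
    line. Nearest-neighbor interpolation; 720 columns = 0.5-degree sampling."""
    rs = np.arange(r_max, dtype=float)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    r_grid, t_grid = np.meshgrid(rs, thetas, indexing="ij")
    xs = np.rint(xo + r_grid * np.cos(t_grid)).astype(int)
    ys = np.rint(yo + r_grid * np.sin(t_grid)).astype(int)
    xs = np.clip(xs, 0, image.shape[1] - 1)   # stay inside the source image
    ys = np.clip(ys, 0, image.shape[0] - 1)
    return image[ys, xs]                      # rows index r, columns index theta

def radial_gradient(polar):
    """'Valid' convolution of the polar image with the horizontal Sobel mask."""
    h, w = polar.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(S_HORIZONTAL * polar[i:i + 3, j:j + 3])
    return out

# Synthetic eye: dark pupil (radius 20), mid-gray iris (radius 50), bright sclera.
yy, xx = np.mgrid[0:200, 0:200]
d2 = (xx - 100.0) ** 2 + (yy - 100.0) ** 2
eye = np.where(d2 < 20 ** 2, 10.0, np.where(d2 < 50 ** 2, 60.0, 200.0))

polar = cartesian_to_polar(eye, xo=100, yo=100, r_max=90)
grad = radial_gradient(polar)

# Strongest radial edge per band (inner band: pupil-iris; outer: iris-sclera),
# averaged over theta -- a crude stand-in for steps 2-4. The +1 restores the
# border row trimmed by the 'valid' convolution.
profile = np.abs(grad).mean(axis=1)
r_pupil = int(np.argmax(profile[:30])) + 1
r_iris = 30 + int(np.argmax(profile[30:])) + 1

# Anatomical constraint of step 5: keep the pair only if the ratio is plausible.
assert 1.75 < r_iris / r_pupil < 4.0
```

On this synthetic image the two detected radii should land near the true band boundaries (20 and 50 pixels) and pass the ratio test; the real algorithm replaces the per-band argmax with edge chaining and line fitting before measuring the radii.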
Figure 4: Segmentation results with contours outlining the pupil and the iris.

Method        Accuracy   Mean Time
Wildes [1]    98.6%      8.28s
Daugman [5]   99.5%      6.56s
Proposed      92.7%      5.83s

Table 1. Comparison with other algorithms.

3. RESULTS

We obtained camera eye images of 100 human subjects with diverse shapes and orientations from an online database. Good localization was obtained in images of low contrast at the iris-sclera interface (12 to 20 gray-level difference). However, images of low contrast at the pupil-iris interface were not available for tests. At the other extreme, excess illumination did not prevent accurate localization but caused errors in 9 localization attempts. Imperfections in the circular nature of the iris boundary also caused 18 mis-localizations. Overall, a total of 352 eye images resulted in 320 correct segmentations based on visual assessment, a performance accuracy of 92%. The average execution time needed for the entire segmentation process is about 6 seconds when performed on a regular 789 MHz desktop computer. Table 1 shows a comparison of the proposed method with other segmentation algorithms. While our method reports lower segmentation accuracy, it outperforms the others in terms of speed.

4. CONCLUSION

In this paper, we described a fast and effective real-time algorithm for localizing and segmenting the iris and pupil boundaries of the eye from camera images. Our approach detects the center and the boundaries quickly and reliably, even in the presence of eyelashes, across very low-contrast interfaces and under excess illumination. Results have demonstrated a 92% accuracy rate with a relatively rapid execution time. It is suggested that this algorithm can serve as an essential component of iris recognition applications.

5. REFERENCES

[1] R. Wildes, "Iris Recognition: An Emerging Biometric Technology," Proc. IEEE, vol. 85, pp. 1348-1363, 1997.

[2] R. G. Johnson, "Can iris patterns be used to identify people," Los Alamos National Laboratory, CA, Chemical and Laser Sciences Division, Rep. LA-12331-PR, 1991.

[3] D. Miller, Ophthalmology. Houghton Mifflin, MA, 1979.

[4] J. Daugman, "Statistical Richness of Visual Phase Information: Update on Recognizing Persons by Iris Patterns," Intl. J. of Computer Vision, vol. 45, pp. 25-38, 2001.

[5] J. Daugman, "High Confidence Visual Recognition by a Test of Statistical Independence," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, pp. 1148-1161, 1993.

[6] R. P. Wildes, J. C. Asmuth, G. L. Green, S. C. Hsu, R. J. Kolczynski, J. R. Matey, and S. E. McBride, "A Machine Vision System for Iris Recognition," Mach. Vision App., vol. 9, pp. 1-8, 1996.

[7] J. G. Daugman, "Complete Discrete 2-D Gabor Transforms by Neural Networks for Image Analysis and Compression," IEEE Trans. Acoust., Speech, Signal Processing, vol. 36, pp. 1169-1179, 1988.

[8] L. Ma, Y. Wang, and T. Tan, "Iris Recognition Using Circular Symmetric Filters," Proc. 16th International Conference on Pattern Recognition, vol. 2, pp. 414-417, 2002.

[9] K. Bae, S. Noh, and J. Kim, "Iris Feature Extraction Using Independent Component Analysis," AVBPA 2003, LNCS, vol. 2688, pp. 838-844, 2003.

[10] R. Kothari and J. Mitchell, "Detection of Eye Locations in Unconstrained Visual Images," Proc. IEEE ICIP, pp. 519-522, 1996.

[11] K. R. Castleman, Digital Image Processing, Prentice-Hall, Upper Saddle River, New Jersey, 1996.

[12] W. Niblack, An Introduction to Digital Image Processing, Prentice-Hall, Upper Saddle River, New Jersey, 1996.

[13] A. Zaim, R. Keck, R. S. Selman, and J. Jankun, "Three-Dimensional Ultrasound Image Matching System for Photodynamic Therapy," Proceedings of BIOS-SPIE, vol. 4244, pp. 327-337, 2001.

[14] J. Jankun and A. Zaim, "An Image-Guided Robotic System for Photodynamic Therapy of the Prostate," SPIE Proceedings, vol. 39, pp. 22-30, 1999.

[15] R. Gonzalez, Digital Image Processing, Addison-Wesley, MA, 1996.
