
A COMBINED PULLING & PUSHING AND ACTIVE CONTOUR METHOD

FOR PUPIL SEGMENTATION


Carlos A. C. M. Bastos, Tsang Ing Ren and George D. C. Cavalcanti
Center of Informatics, Federal University of Pernambuco
Recife, PE, Brazil
www.cin.ufpe.br/~viisar
{cacmb, tir, gdcc}@cin.ufpe.br
ABSTRACT
Pupil segmentation is usually the first step in searching for iris regions. Iris localization is an extremely important procedure in iris biometrics systems, since the correct segmentation of the inner and outer boundaries is critical to achieving high recognition rates. An iris localization method based on a spring force-driven iterative scheme, called Pulling & Pushing, was proposed by He et al. in 2006. Here, we propose a pupil segmentation procedure that combines Pulling & Pushing and Active Contour Models, improving on the results of the original method. We also developed a new strategy to identify and fill reflection points that appear inside the pupil. We tested our method on the MMU1 and Casia V1 and V3 iris databases, obtaining accurate results.
Index Terms: Biometrics, pupil segmentation, active contour, iris recognition.
1. INTRODUCTION
Due to security concerns, there is an increasing need for reliable authentication processes in modern society. In the last few years, biometric identification methods, which use physical and behavioral characteristics, have received growing attention. Among the various biometric methods, iris recognition systems are frequently cited as one of the most reliable [1].
The iris is the colored tissue ring encircling the pupil, surrounded by the sclera. Iris recognition is based on the analysis of its complex texture pattern, which is composed of ridges, furrows, arching ligaments, crypts, rings and freckles. Iris recognition systems are frequently divided into four major steps [1]: (1) image acquisition, (2) iris segmentation, (3) analysis and representation of the iris texture, and (4) matching of iris representations. The segmentation step, which determines the iris region in a given image, is one of the most critical points, because mistakes in this process lead to poor recognition rates.
Because of the importance of optimal iris localization, many algorithms have been proposed to locate the iris precisely [1]. A commonly used approach to detect the iris boundaries (pupil/iris and iris/sclera) is to parametrize them as two non-concentric circles. This approximation simplifies the problem while still providing a reasonable estimate of the pupil and iris regions.
Taking this parametrization into account, Daugman [2] developed an integrodifferential operator, while Wildes [3] and Masek [4] used edge detection and the Hough transform to segment iris images. However, these methods perform an exhaustive search through a large parameter space, making the process very time-consuming, even though efforts have been made to reduce the computational cost [5]. In [6], He et al. proposed a novel method that searches for these parameters through an iterative process based on Hooke's law, called Pulling & Pushing.
This work was partially supported by FACEPE.
In this paper, we propose an extension of the Pulling & Pushing (PP) method that combines PP with an Active Contour Model (ACM). We define a different process that modifies the original PP flowchart and the convergence criteria. These modifications remove the need for a robust technique to find the initial center estimate.
The remainder of this paper is organized as follows: Section 2 presents the Pulling & Pushing method. Section 3 introduces the proposed method. The experimental results are described in Section 4, and Section 5 presents the conclusions.
2. PULLING & PUSHING
The Pulling & Pushing algorithm, developed by He et al. [6], is inspired by Hooke's law. The method uses a combination of $N$ massless springs joined at a common point $O$ to estimate the pupil center $O_p(x_p, y_p)$ and radius $R_p$. PP is an iterative process split into five steps: (1) initial estimation, (2) transformation to polar coordinates, (3) extraction of edge points, (4) push and pull, and (5) convergence verification.
The first stage gives a coarse estimation of the center $O^l(x^l, y^l)$ for $l = 0$, where $l$ denotes the iteration. He et al. applied an AdaBoost cascade classifier to obtain this estimation [6]. The second stage transforms the image into polar coordinates starting from the center $O^l$.
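To make the transformation concrete, the following is a minimal sketch (in NumPy, with our own naming; the paper does not specify implementation details such as the sampling strategy) of how an image can be resampled into polar coordinates around a candidate center:

```python
import numpy as np

def to_polar(image, center, r_max, n_dirs=240):
    """Sample `image` along n_dirs radial rays starting at `center`.

    Returns an (n_dirs, r_max) array whose rows are radial profiles;
    a circle centered at `center` maps to a straight vertical line.
    Nearest-neighbor sampling is used for simplicity.
    """
    h, w = image.shape
    cx, cy = center
    thetas = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    radii = np.arange(r_max)
    # Cartesian coordinates of every (theta, r) sample.
    xs = np.rint(cx + np.outer(np.cos(thetas), radii)).astype(int)
    ys = np.rint(cy + np.outer(np.sin(thetas), radii)).astype(int)
    xs = np.clip(xs, 0, w - 1)
    ys = np.clip(ys, 0, h - 1)
    return image[ys, xs]
```

In this representation each row corresponds to one radial direction, so stage 3 reduces to finding one edge point per row.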
The third stage performs edge detection in polar coordinates. This transformation simplifies the detection of circular shapes because circles become lines when viewed in this coordinate system. Various edge detectors can be used in this process, such as the Sobel operator or the Canny edge detector. However, only one edge point must remain in each radial direction [6]. The remaining points are then labeled $P_i^l(\theta_i^l, r_i^l)$, $i = 1, 2, \ldots, N$, in polar coordinates, where $\theta_i$ is the direction and $r_i$ is the distance from $O^l$ to $P_i^l$.
Fig. 1: Flowchart of the PPAC method.

In the fourth stage, we compute the equilibrium (mean) length of the springs, given by $R^{l+1} = (1/N)\sum_{i=1}^{N} r_i^l$. Each spring then pushes or pulls the center $O^l$ with a force $\vec{f}_i^{\,l} = k_{pp}\,(R^{l+1} - r_i^l)\,\vec{e}_i$, $i = 1, 2, \ldots, N$, where $k_{pp} = 1/N$ is the spring constant and $\vec{e}_i$ is the unit direction vector of the $i$-th spring ($\overrightarrow{O^l P_i^l}$). The sum of the forces of all springs gives the resulting force, computed as $\vec{F}^{l+1} = \sum_{i=1}^{N} \vec{f}_i^{\,l}$. This force is the displacement applied to the center of the system, moving it to the new position $O^{l+1}(x^{l+1}, y^{l+1}) = O^l + \vec{F}^{l+1}$.
The convergence test, performed in the fifth stage, verifies whether the algorithm needs to execute further iterations, allowing $O^l$ to find its equilibrium position. The algorithm stops when one of the following conditions is satisfied: (a) $O^{l+1}$ and $R^{l+1}$ have converged; or (b) the number of iterations exceeds a certain value $I_{max}$. The convergence described in (a) is tested by the criterion $C^{l+1} = \left\|\vec{F}^{l+1}\right\| + \left|R^{l+1} - R^{l}\right| < C_{max}$, where $C_{max}$ is set to one pixel. If neither condition is satisfied, the process goes back to stage 2.
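As an illustration, one push-and-pull update (stages 4 and 5) could be sketched as follows; `edge_radii` stands for the per-direction distances $r_i^l$ produced by stage 3, `thetas` for the corresponding directions, and all names are ours rather than from [6]:

```python
import numpy as np

def pp_update(center, edge_radii, thetas):
    """One Pulling & Pushing step: returns the new center, the mean
    radius R^{l+1}, and the magnitude of the resulting force."""
    n = len(edge_radii)
    r_mean = edge_radii.mean()                            # R^{l+1}
    k = 1.0 / n                                           # spring constant k_pp
    # Unit direction vectors e_i of each spring (center -> edge point).
    dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    forces = k * (r_mean - edge_radii)[:, None] * dirs    # f_i
    total_force = forces.sum(axis=0)                      # F^{l+1}
    new_center = np.asarray(center, dtype=float) + total_force
    return new_center, r_mean, np.linalg.norm(total_force)
```

The returned force magnitude can then be compared against the stopping criterion above to decide whether another iteration is needed.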
3. THE PROPOSED METHOD
The proposed method combines Pulling & Pushing and an Active Contour Model at the edge detection stage. We also specify different stopping criteria due to changes in the model.
Specular reflections usually appear as the brightest pixels in eye images and can interfere with pupil edge detection, so it is necessary to identify and, if possible, minimize their negative effects. Therefore, we also propose a new algorithm, based on morphological reconstruction in grayscale images, to identify and fill reflection points.
3.1. Reflection Identification and Removal
Given a grayscale image $I$ in the range $[0, L-1]$, we first compute its negative image, defined as $I_n(x,y) = (L-1) - I(x,y)$. Next, we perform a morphological reconstruction [7], followed by a flood-fill operation to complete the dark areas surrounded by lighter areas. Then we go back to the original image range, computing $I'(x,y) = (L-1) - I_n(x,y)$.
As a second step, we calculate the histogram of the difference image $D(x,y) = I(x,y) - I'(x,y)$. The third step determines the adaptive threshold $T_H$, which is defined as the first histogram position with a zero count. However, if the sum of all subsequent positions is also zero, we set $T_H$ to the histogram position with the minimum count. In the fourth step, the image $D(x,y)$ is thresholded by $T_H$, and the remaining points are identified as reflection points. Next, we perform a morphological dilation in order to expand the reflection areas to their neighbors. Finally, we fill each reflection point $p$ with the value $I(x_p, y_p) = (1/2)\,I'(x_p, y_p)$.
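A possible realization of this procedure, sketched below with scikit-image and NumPy under our own naming (the paper does not state which structuring element or library primitives were used), is:

```python
import numpy as np
from skimage.morphology import reconstruction, binary_dilation, disk

def fill_reflections(img, levels=256):
    """Sketch of the reflection filling of Section 3.1 for an 8-bit
    grayscale eye image; all names and the disk(2) footprint are ours."""
    L = levels
    img = img.astype(float)
    neg = (L - 1) - img                                   # negative image I_n
    # Grayscale hole filling: reconstruction by erosion from a border seed
    # completes dark areas surrounded by lighter ones (the reflections).
    seed = neg.copy()
    seed[1:-1, 1:-1] = neg.max()
    filled_neg = reconstruction(seed, neg, method='erosion')
    img_filled = (L - 1) - filled_neg                     # I'
    diff = np.clip(img - img_filled, 0, L - 1)            # D = I - I'
    hist, _ = np.histogram(diff, bins=L, range=(0, L))
    zeros = np.flatnonzero(hist == 0)
    if zeros.size and hist[zeros[0] + 1:].sum() > 0:
        th = zeros[0]                  # first empty bin, non-empty bins follow
    else:
        th = int(np.argmin(hist))      # fallback: bin with the fewest counts
    mask = diff > th                                      # reflection points
    mask = binary_dilation(mask, disk(2))                 # expand to neighbors
    out = img.copy()
    out[mask] = 0.5 * img_filled[mask]                    # fill with (1/2) I'
    return out, mask
```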
3.2. Active Contour
An active contour is used as the edge detector in the modified PP method. The idea behind Active Contour Models (or snakes) is to evolve a curve, subject to constraints imposed by an image, in order to detect objects in that image. Starting from a contour around the object to be detected, the curve moves towards its interior and must stop at the borders of the object [8]. The curve evolution is guided by the minimization of an energy function.
Chan and Vese [8] proposed an AC model whose stopping criterion is not based on edges. Their energy function does not need gradient information and is defined as [8]:
$$F(c_1, c_2, C) = \int_{\mathrm{inside}(C)} |u_0(x, y) - c_1|^2 \, dx\, dy + \int_{\mathrm{outside}(C)} |u_0(x, y) - c_2|^2 \, dx\, dy, \qquad (1)$$
where $u_0$ is the image, $C$ is the variable that defines the limits of the contour, and the constants $c_1$ and $c_2$ are the mean values of the regions inside and outside $C$, respectively.
We modified the values of $c_1$ and $c_2$ to be the minimum and maximum values of their respective regions. Therefore, the internal area of the contour searches for darker regions (pupil) and moves away from lighter regions (iris/sclera/skin). We call this variation the min & max Active Contour. The implementation of this ACM uses the sparse field method (SFM), proposed by Whitaker [9], which allows level set active contours to be implemented efficiently.
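The modification can be illustrated by the discrete fitting term below, where the region inside the contour is represented by a binary mask; this is only a sketch of the energy of Eq. (1) with min and max in place of the means, not of the full sparse-field evolution [9]:

```python
import numpy as np

def min_max_fitting_energy(u0, inside_mask):
    """Fitting term of Eq. (1) with the min & max modification:
    c1 is the minimum inside the contour and c2 the maximum outside,
    so the inside region is attracted to dark (pupil) pixels."""
    inside = u0[inside_mask]
    outside = u0[~inside_mask]
    c1 = inside.min()      # instead of the mean used by Chan and Vese
    c2 = outside.max()     # instead of the mean
    return ((inside - c1) ** 2).sum() + ((outside - c2) ** 2).sum()
```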
3.3. Combined Pulling & Pushing and Active Contour
The proposed method modifies the original Pulling & Pushing in four areas: (1) division of the convergence process into two phases, coarse and fine; (2) use of the min & max Active Contour as the edge detector; (3) a new calculation of the spring lengths; and (4) modification of the convergence criteria. The PPAC flowchart is shown in Figure 1.
The method starts its execution in the coarse phase, with the initial estimate at the image center and $\left\|\vec{F}^{0}\right\|$ set to a large value. The current phase influences the second stage, the transformation to polar coordinates. In the coarse phase, the algorithm calculates the minimum distance between the center $O^l = (x^l, y^l)$ and the edges of the image, which is used as the maximum allowed radius for the transformation, that is, $r_{max} = \min\left(x^l,\, W - x^l,\, y^l,\, H - y^l\right) - 1$, where $W$ and $H$ represent the image width and height, respectively. Furthermore, all directions (0 to $2\pi$) are used; this guarantees a relatively large coverage area, enabling the active contour to search for the pupil region. The algorithm enters the fine contour phase once the criterion $\left\|\vec{F}^{l}\right\| < 1$ is satisfied.
Fig. 2: Examples of correctly segmented images: (a) S1019R06 and (b) S1229L03 from Casia-V3; (c) onal3 and (d) tonghll1 from MMU1. The star shows the initial estimate of the pupil center position and the cross is the final estimate of the pupil center.

In the fine contour phase, the radial extent is limited to $r_{max} = R^{l} + \Delta r$, where $R^{l}$ is the mean radius from the previous iteration and $\Delta r$ is a constant, forcing the transformed image to contain both the pupil and iris regions. The directions, however, are limited to the sector $[3\pi/4, 9\pi/4]$ to avoid the influence of occlusions from the upper eyelid and its eyelashes. Furthermore, we impose a lower bound $R_{min}$ on $r_{max}$ whenever $r_{max} < R_{min}$, thus avoiding the transformation of very small regions. Once the procedure enters the fine contour phase, it remains in it until the method finishes.
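The choice of $r_{max}$ in the two phases can be summarized by the following sketch (our naming; the numeric defaults are the Casia values reported in Section 4):

```python
def max_radius(center, shape, phase, r_prev=None, delta_r=10, r_min=40):
    """Maximum radius used in the polar transformation for each phase.
    `center` is (x, y), `shape` is (height, width)."""
    x, y = center
    h, w = shape
    if phase == "coarse":
        # Largest radius that still stays inside the image borders.
        return int(min(x, w - x, y, h - y)) - 1
    # Fine phase: a band around the previous mean radius, never too small.
    return int(max(r_prev + delta_r, r_min))
```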
In the third stage, we apply the min & max Active Contour to detect the edges and, consequently, to find the lengths of the springs relative to the center. Since the transformed image in polar coordinates is represented as a rectangular region, the contour is initialized as a rectangle of length $R_I$ that starts at the center (zero radius) and covers all possible directions. This is equivalent, in the original image, to initializing the contour as a circle of radius $R_I$ centered at $O^l$. Since the initialization of the active contour is one of the most critical factors for the curve to converge to the correct position, we use two different values of $R_I$, depending on the phase. In the coarse contour phase, $R_I = R_m$, where $R_m$ is a value that depends on the characteristics of the acquired images, i.e., it changes for different databases. In the fine contour phase, $R_I = (3/4) R^{l}$, forcing the contour to initialize near the desired edges.

The fourth stage defines a new way to calculate the spring lengths. Since the final result of the active contour is an area, its boundaries are extracted. For each direction $\theta_i$, the spring length $r_i$ is computed as the mean radius of the boundary points in that particular direction. For the directions where the boundary is not defined, we apply an interpolation scheme: their lengths are computed as the mean of the last and the next valid contour points.
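A possible implementation of this spring-length extraction from the polar active contour result is sketched below (our naming and simplifications, e.g., the boundary is taken as the inside pixels whose outward radial neighbor is outside):

```python
import numpy as np

def spring_lengths(polar_mask):
    """Per-direction spring lengths from the active contour result.

    `polar_mask` is a boolean (n_dirs, r_max) array, True inside the
    contour.  For each row (direction) the length is the mean radius of
    the boundary pixels; rows without a boundary are interpolated as the
    mean of the previous and next valid directions (with wrap-around).
    """
    n_dirs, r_max = polar_mask.shape
    # Boundary pixels: inside pixels whose next radial neighbor is outside.
    boundary = polar_mask & ~np.roll(polar_mask, -1, axis=1)
    boundary[:, -1] = False
    lengths = np.full(n_dirs, np.nan)
    for i in range(n_dirs):
        radii = np.flatnonzero(boundary[i])
        if radii.size:
            lengths[i] = radii.mean()
    # Interpolate missing directions from the nearest valid neighbors.
    missing = np.isnan(lengths)
    if missing.any() and not missing.all():
        valid = np.flatnonzero(~missing)
        for i in np.flatnonzero(missing):
            prev = valid[valid < i][-1] if (valid < i).any() else valid[-1]
            nxt = valid[valid > i][0] if (valid > i).any() else valid[0]
            lengths[i] = 0.5 * (lengths[prev] + lengths[nxt])
    return lengths
```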
Fig. 3: Incorrect segmentation examples: (a) S1036R02 and (b) S1043L07 from Casia-V3; (c) tanwnl1 and (d) philipr1 from MMU1. Common problems are reflections near the pupil border, secondary reflections, excessive eyelash occlusion, and shadows.

The fifth and last stage modifies the stop condition. The algorithm converges in one of two ways: (a) by reaching the maximum number of iterations $I_{max}$; or (b) when $\vec{F}$ and $R$ converge. The criterion defined in (b) is divided into three parts: (1) force convergence, $\left\|\vec{F}^{l+1}\right\| < F_{Th}$; (2) force variation convergence, $\left|\,\left\|\vec{F}^{l+1}\right\| - \left\|\vec{F}^{l}\right\|\,\right| < \Delta F_{Th}$; and (3) radius convergence, $\left|R^{l+1} - R^{l}\right| < R_{Th}$. Note that all three criteria must be satisfied. Finally, $O^{l+1}$ and $R^{l+1}$ are the optimal circle parameters for the pupil region.
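Criterion (b) translates directly into a scalar test such as the sketch below, where the force magnitudes $\left\|\vec{F}^{l}\right\|$ are passed as scalars and the default thresholds are the values reported in Section 4:

```python
def converged(force_new, force_old, r_new, r_old,
              f_th=0.3, delta_f_th=0.1, r_th=1.0):
    """Stop test of the fine phase: all three criteria must hold."""
    return (force_new < f_th
            and abs(force_new - force_old) < delta_f_th
            and abs(r_new - r_old) < r_th)
```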
4. EXPERIMENTAL RESULTS
We used the Casia-V1, Casia-V3-Interval [11], and MMU1 [12] databases to evaluate our combined PPAC method. These databases contain 756, 2655, and 450 images, respectively.
The original PP algorithm requires the initial center estimate to be inside the pupil, so we did not compare our method directly against it. The combined PPAC method does not have this restriction, and for all tested images the initial estimate was the image center. To avoid the negative influence of specular reflections inside the pupil, we used two algorithms (and their combination, in cascade) for specular reflection identification and removal as preprocessing: the one described in Section 3.1 and the algorithm proposed in [10].
The following parameters were used for all databases: $N = 240$, $F_{Th} = 0.3$, $\Delta F_{Th} = 0.1$, and $R_{Th} = 1$. For the Casia-V1 and V3 databases, $\Delta r = 10$ and $R_{min} = 40$; for the MMU1 database, $\Delta r = 5$ and $R_{min} = 30$.
Table 1 shows the pupil segmentation results for the Casia-V1 database. Since in this particular database the pupil region was artificially filled to protect the lighting scheme, no preprocessing was used, and its results therefore appear separately. We used the manually filled patterns as the gold standard, allowing us to calculate the differences between the regions obtained by our proposed method and the gold standard. Accordingly, we considered correct those pupil regions whose areas overlap the corresponding gold standard by at least 95% and are not more than 10% larger than it.
Fig. 4: Execution of the PPAC method on the image shown in Figure 2(a): (a) initial iteration, coarse, |F| = 6.5; (b) iteration 3, coarse, |F| = 9.4; (c) iteration 5, coarse, |F| = 13.3; (d) iteration 6, coarse, |F| = 8.5; (e) iteration 8, coarse, |F| = 0.7; (f) iteration 10, fine, |F| = 1.5; (g) iteration 14, fine, |F| = 0.4; (h) final iteration, fine, |F| = 0.2. The black and white lines show the final contour at each iteration. Note that the reflection points inside the pupil are properly filled.

Table 2 displays the results for the Casia-V3 and MMU1 databases, showing that our PPAC method is accurate and, on average, requires few iterations to converge.
Some correctly segmented images from both databases can be seen in Figure 2. In Casia-V3, only 8 images were not segmented properly; some of them contain secondary reflections that confuse the contour, as shown in Figure 3 (a-b). In the MMU1 database, the illumination scheme used at acquisition time caused the images to have low contrast and allowed the appearance of shadows, which makes segmentation more challenging. Figure 3 (c-d) shows examples of such images.
We use the eye image shown in Figure 2 (a) to demonstrate a step-by-step execution of the PPAC method in Figure 4. From (a) to (h), each sub-image label displays information about the iteration, phase, and force modulus. Note that the scale changes at every iteration, reflecting modifications of the pupil's estimated center position and radius, or the algorithm switching between phases. Also, observe the distance between the initial estimate and the pupil's final center position. The active contour is responsible for pushing the center towards dark homogeneous areas, causing large displacements in the first iterations. The center then enters the pupil region, enabling PPAC to successfully find its parameters.
Table 1: Pupil segmentation results for the Casia-V1 database using the PPAC method without any reflection removal algorithm.

Database    Images    Correct    Accuracy    Iterations: mean (std)
Casia-V1    756       752        99.47%      13 (5)
Table 2: Pupil segmentation accuracy for the combination of the PPAC method and a reflection removal algorithm (RRA). Iterations are reported as mean (std), Casia-V3 / MMU1.

Reflection Removal      Casia-V3    MMU1      Iterations: mean (std)
He [10]                 96.76%      86.67%    14 (9) / 15 (10)
Proposed (Sec. 3.1)     99.36%      94.44%    13 (9) / 15 (8)
Proposed + He [10]      99.70%      96.22%    13 (9) / 15 (9)
5. CONCLUSIONS
We proposed a combined Pulling & Pushing and Active Contour method for pupil segmentation. We modified the computation of the $c_1$ and $c_2$ parameters of the energy function defined by Chan and Vese [8], generating the min & max Active Contour. With this modification, our model is able to accurately search for dark areas. We also presented a new approach to fill specular reflections inside the pupil, which makes this region more homogeneous and improves the active contour performance.
According to the results obtained on three different databases, the PPAC method proved to be robust and accurate. The combination of our proposed reflection identification and removal algorithm with PPAC outperforms the combination of [10] with PPAC by 2.6% for Casia-V3 and by 7.77% for MMU1. The most important advantage of PPAC is that it does not need an initial estimate inside the pupil region to accurately find its parameters. As future work, we will apply the same ideas to iris segmentation.
6. REFERENCES
[1] K. W. Bowyer, K. Hollingsworth, and P. J. Flynn, "Image Understanding for Iris Biometrics: A Survey," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 281-307, 2008.
[2] J. Daugman, "How Iris Recognition Works," IEEE Trans. Circuits and Syst. for Video Tech., vol. 14, no. 1, pp. 21-30, 2004.
[3] R. Wildes, "Iris Recognition: An Emerging Biometric Technology," Proc. IEEE, vol. 85, no. 9, pp. 1348-1365, 1997.
[4] L. Masek, "Recognition of human iris patterns for biometric identification," The University of Western Australia, 2003.
[5] X. Liu, K. W. Bowyer, and P. J. Flynn, "Experiments with an improved iris segmentation algorithm," in Workshop on Automatic Identification Technologies, pp. 118-123, 2005.
[6] Z. He, T. Tan, and Z. Sun, "Iris Localization via Pulling and Pushing," Int. Conf. Pattern Recog., vol. 4, pp. 366-369, 2006.
[7] P. Soille, Morphological Image Analysis: Principles and Applications, Springer-Verlag, 1999.
[8] T. Chan and L. Vese, "Active contours without edges," IEEE Trans. Image Process., vol. 10, no. 2, pp. 266-277, 2001.
[9] R. Whitaker, "A level-set approach to 3D reconstruction from range data," Int. Journal of Computer Vision, vol. 29, no. 3, pp. 203-231, 1998.
[10] Z. He, T. Tan, Z. Sun, and X. Qiu, "Towards Accurate and Fast Iris Segmentation for Iris Biometrics," IEEE Trans. on Pattern Anal. and Machine Intel., vol. 31, no. 9, pp. 1670-1684, 2009.
[11] CASIA Iris Databases V1 & V3, http://www.cbsr.ia.ac.cn/IrisDatabase, 2009.
[12] MMU Iris Image Database, Multimedia University, http://pesona.mmu.edu.my/~ccteo/, 2009.