

A Hybrid Method for Iris Segmentation


Muaadh Sh. Azzubeiry and Dzulkifli Bin Mohamad

Muaadh Sh. Azzubeiry and Dzulkifli bin Mohamad are with the Faculty of Computer Science and Information Systems, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia.
Abstract: Advances in security technology have led many major corporations and governments to employ modern techniques for identifying individuals. Common biometric identification methods include facial recognition, fingerprint recognition and speaker verification, offering a new solution for applications that require a high degree of security. Among these biometric methods, iris recognition has become an important topic in pattern recognition; it relies on the iris, which is located in a part of the eye that remains stable throughout human life. Furthermore, the probability of finding two identical irises is close to zero. An iris identification system consists of several stages, of which segmentation is the most crucial. Current segmentation methods still have limitations in localizing the iris because they treat the pupil as a circular shape. In this paper, an enhanced hybrid method that can guarantee the accuracy of the iris identification system is proposed. The proposed method takes into account the elliptical shape of the pupil and the iris. Moreover, eyelid detection is proposed as a further step of the segmentation stage. The dataset used is CASIA v3, including its three subsets: Interval, Lamp and Twin. The performance of the proposed method is measured by counting the number of successfully segmented images. The results are very promising, with an accuracy of 99.1% compared with related existing methods.

Index Terms: Eyelid detection, Iris segmentation, Pupil detection

1 INTRODUCTION
The iris is the thin colored region located between the cornea and the lens of the human eye; at its center lies the opening known as the pupil [1]. When biometrics are compared, iris identification systems achieve very good results. Iris texture has a very high number of degrees of freedom, which makes it extremely discriminative. What is more, the chance of finding two persons with identical irises is close to zero, and most iris patterns remain stable over a person's lifetime. Therefore, iris identification is among the most reliable identification systems and is useful in many security-sensitive settings [2]. A standard iris identification system consists of four stages: acquisition, preprocessing, feature extraction and matching [3]. Preprocessing combines three steps: segmentation, normalization and enhancement. Acquisition is the process of capturing the image from the source using a device designed specifically for this purpose. Preprocessing enhances the captured image and prepares it for the next stage, feature extraction. Finally, matching checks whether the extracted features match the stored features of a candidate iris in order to identify the identical iris [4]. Iris segmentation is the most important and critical step in the iris identification system: it localizes the exact iris area within the human eye image. The output of this stage plays a primary role in the steps that follow segmentation (normalization, enhancement, feature extraction and matching), and the efficiency of an iris identification system depends primarily on the accuracy of the segmentation output [5]. The role of iris segmentation is to identify the iris region in the eye image (fitting the iris boundaries); this is illustrated in Figure 1 below.

Figure 1: An iris segmentation stage
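To make the stage structure described above concrete, the following minimal Python sketch shows how the stages compose. All function names and signatures here are illustrative assumptions, not an API from the paper.

```python
from typing import Any, Callable, Iterable

Image = Any      # placeholder for an eye image (e.g., a NumPy array)
Features = Any   # placeholder for an iris feature template


def preprocess(eye: Image, segment: Callable, normalize: Callable,
               enhance: Callable) -> Image:
    """Preprocessing = segmentation -> normalization -> enhancement."""
    return enhance(normalize(segment(eye)))


def identify(eye: Image, segment: Callable, normalize: Callable,
             enhance: Callable, extract: Callable[[Image], Features],
             match: Callable[[Features, Features], bool],
             gallery: Iterable[Features]) -> bool:
    """Acquisition is assumed already done; run the remaining stages."""
    features = extract(preprocess(eye, segment, normalize, enhance))
    return any(match(features, candidate) for candidate in gallery)
```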

Therefore, in this paper we concentrate on the iris segmentation stage in order to increase the performance of the iris recognition system. In addition, an enhanced method that guarantees accuracy is applied to increase the accuracy of the identification system.

2 RELATED WORK
The iris segmentation stage is the most important stage, since it largely determines the accuracy of iris recognition. Based on this fact, many researchers have exerted considerable effort to find a suitable algorithm that includes the whole iris, because failing to include any part of it will eventually affect the efficiency of the iris identification system.
The integro-differential operator (IDO) was proposed in [6, 7, 8]; it treats the pupil and limbus of the iris as circles and detects these boundaries by searching for large circular variations in the image. The IDO locates the inner and outer boundaries of an iris by means of the following optimization:

\max_{(r,\, x_0,\, y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\, x_0,\, y_0} \frac{I(x, y)}{2\pi r}\, ds \right|

where I(x, y) is the image containing the eye. The IDO searches over the image domain (x, y) for the maximum of the smoothed partial derivative, with respect to increasing radius r, of the contour integral of I(x, y) normalized along a circular arc ds of radius r and center coordinates (x0, y0). The symbol * denotes convolution and G_\sigma(r) is a smoothing function such as a Gaussian of scale σ. The IDO therefore behaves as a circular edge detector: it searches for gradient maxima over the three-dimensional parameter space, so no threshold parameters are required, unlike the Canny edge detector. By changing the contour path from a circle to an arcuate design, the IDO can also detect the upper and lower eyelid boundaries. The integro-differential operator can be considered a modified version of the Hough transform, since it also makes use of the image's first derivatives and performs a search for geometric parameters. Because it works with raw derivative information, it does not suffer from the threshold problems of the Hough transform. However, the algorithm can fail when there is noise in the eye image, such as from reflections, since it works only on a local scale.
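As a concrete illustration of this operator, the following Python sketch performs a discrete version of the circular IDO search. It is a minimal sketch only: the sampling density, smoothing scale and candidate grids are assumptions, not values taken from [6, 7, 8].

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d


def circular_integral(img, x0, y0, r, n_samples=64):
    """Mean image intensity along a circle of radius r centered at (x0, y0)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(x0 + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y0 + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()


def integro_differential(img, centers, radii, sigma=2.0):
    """Return the (x0, y0, r) maximizing |G_sigma(r) * d/dr of the contour integral|."""
    best_score, best_params = -np.inf, None
    for x0, y0 in centers:
        # Normalized contour integrals for every candidate radius at this center.
        integrals = np.array([circular_integral(img, x0, y0, r) for r in radii])
        derivative = np.gradient(integrals, np.asarray(radii, dtype=float))
        smoothed = gaussian_filter1d(derivative, sigma)   # convolution with G_sigma(r)
        i = int(np.argmax(np.abs(smoothed)))
        if abs(smoothed[i]) > best_score:
            best_score, best_params = abs(smoothed[i]), (x0, y0, radii[i])
    return best_params
```

For example, calling integro_differential(eye, centers, np.arange(20, 120)) with a coarse grid of candidate centers would return the best-fitting circle for the pupil or the limbus, depending on the radius range searched.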

A method to segment the iris based on rotation average analysis of the intensity-inversed image was introduced in [9]; it segments the inner boundary and fits the outer boundary (both as circles) by least-squares non-linear circular regression. [10] presented an algorithm to detect the boundaries between the pupil and the iris and between the sclera and the iris: a rectangular-area technique is applied to find the pupil and detect the inner circle of the iris, after which the iris outer boundary is detected. A Circle Density based Iris Segmentation method (CDIS) was proposed by [11]; it consists of specular reflection deduction, eyelash elimination and iris segmentation with eyelid removal based on local image statistics and block intensity. In addition, [12] proposed a method based on the adaptive boosting eye detection algorithm (AdaBoost), which constructs a strong classifier by combining weak classifiers, in order to compensate for the iris detection errors caused by the two circular edge detection operations. On the other hand, [13] introduced a method to segment the iris based on rotation average analysis of the intensity-inversed image in detail; the inner boundary is segmented and the outer boundary is fitted (both as circles) by least-squares non-linear circular regression. [14] proposed an algorithm for detecting the boundaries between the pupil and the iris and between the sclera and the iris; the rectangular-area method was applied to segment the pupil and detect the inner and outer boundary of the iris.

3 INSPIRATION
Most iris segmentation techniques assume the boundaries of the pupil and the iris to be circles, but these boundaries are not exactly circular, and a small error in detecting the iris boundary leads to the loss of information that lies around the iris. Furthermore, there are many challenges that must be overcome by proposing an enhanced segmentation method that can deliver precise results as input to the identification system and so improve its efficiency.

4 PROPOSED IRIS SEGMENTATION METHOD


An eye image contains not only the iris region but also unwanted parts such as the sclera, the pupil and the eyelids. Moreover, the pupil boundary can be considered the inner boundary of the iris. For this reason, the proposed method proceeds through the following steps.

4.1 Pupil Detection
In this step, the pupil is detected in the eye image, taking into account its elliptical shape. To do this, the image is first binarized; then, after removing noise, the linear indices (coordinates) of the remaining foreground pixels are found, and these points are fitted with an ellipse. The center of the ellipse is taken as the intersection of the line drawn from the minimum to the maximum point along the X-axis with the line drawn from the minimum to the maximum value along the Y-axis. The direct least-squares ellipse fitting proposed by [15] is applied.

Figure 2: Pupil detection steps (original image; 1: binary image; 2: noise removal; 3: edge detection; 4: pupil detection)
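As an illustration only (not the authors' MATLAB implementation), the following Python/OpenCV sketch follows the same outline: binarize, clean up noise, and fit an ellipse to the pupil region. The threshold value and kernel size are assumptions, the ellipse center is taken from the fitted ellipse rather than from the min/max intersection described above, and OpenCV's built-in cv2.fitEllipse is used as a stand-in for the direct least-squares fit of [15].

```python
import cv2


def detect_pupil(gray, thresh=70):
    """Rough pupil detection: threshold the dark pupil, clean up noise, and
    fit an ellipse to the largest remaining region (threshold is an assumption)."""
    # The pupil is the darkest part of a NIR eye image, so invert-threshold it.
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    # Morphological opening removes small noise such as eyelashes and reflections.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)
    # Keep the largest connected component (OpenCV 4 returns (contours, hierarchy)).
    contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pupil = max(contours, key=cv2.contourArea)
    # fitEllipse needs at least five boundary points; it returns center, axes, angle.
    (cx, cy), (major, minor), angle = cv2.fitEllipse(pupil)
    return (cx, cy), (major, minor), angle
```

The returned center and axes can then feed the iris localization and eyelid detection steps that follow.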


4.2 Iris Localization
In this step, the outer boundary of the iris is detected based on its inner boundary: the same procedure used for pupil detection is applied, taking into account the diameter of the iris and the center of the pupil.
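As a hedged, simplified illustration only (a circular approximation rather than the authors' ellipse-based procedure; the radius bounds and the use of a single horizontal scan line are assumptions), an initial estimate of the iris radius around the detected pupil center could be obtained as follows:

```python
import numpy as np


def estimate_iris_radius(gray, pupil_center, r_min, r_max):
    """Estimate the limbic (outer) radius from the sharpest average intensity
    change along horizontal rays leaving the pupil center."""
    cx, cy = int(pupil_center[0]), int(pupil_center[1])
    row = gray[cy].astype(float)
    right = row[cx:cx + r_max]                  # profile to the right of the center
    left = row[max(cx - r_max, 0):cx][::-1]     # profile to the left, reversed
    n = min(len(left), len(right))
    profile = (left[:n] + right[:n]) / 2.0      # average the two radial profiles
    grad = np.abs(np.diff(profile))             # intensity change per radius step
    grad[:r_min] = 0.0                          # ignore radii inside/near the pupil
    return int(np.argmax(grad))                 # radius of the sharpest transition
```

Such an estimate, combined with the pupil center from Section 4.1, can seed the ellipse fit for the outer boundary.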

Figure 3: Iris localization

4.3 Eyelid Detection
To determine the edges of the eyelids, the following steps are followed (an illustrative code sketch of this procedure is given at the end of this subsection):
1. Extract the eyelid edges using the Canny detector (with a defined Canny threshold value).
2. Remove unwanted eyelashes and other edges (with a defined noise threshold value).
3. Divide the eye into four regions based on the center point of the pupil.
4. Trace the pixels in each region to find the eyelid.
5. Save the four eyelid points [(x, y) top and (x, y) bottom].
6. Connect each pair of corresponding points to draw the curve/arc.

Figure 4: Eye region determination (top left, top right, bottom left, bottom right)

We check pixel by pixel in each of the determined regions; if a pixel is foreground (an eyelid edge), its (x, y) coordinates are recorded. In Figure 5 below, the green line is the border of each region, and the numbers in the rows and columns indicate how the pixels are traced. In the top-left (red) region, tracing starts at 1 (see the green line), then moves to the row above (2), then up again (3), until the image border is reached (4); it then starts again in a different column (indicated by 5). Tracing stops when a foreground pixel (an eyelid edge) is found.

Figure 5: Searching technique for the eyelid points (the numbered grid shows the tracing order within each region)

The points that have been found will look like the following figure.

Figure 6: Determined points in a binarized image of the eye

After determining the points, an arc is drawn for each eyelid, as shown in Figure 7.

Figure 7: Eyelid detection
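The quadrant-wise tracing described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' MATLAB code; the Canny thresholds, the exact scan order and the region borders are assumptions.

```python
import cv2


def find_eyelid_points(gray, pupil_center, canny_lo=30, canny_hi=90):
    """Return one candidate eyelid point per quadrant by tracing Canny edge
    pixels away from the pupil center (thresholds/scan order are illustrative)."""
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    h, w = edges.shape
    cx, cy = int(pupil_center[0]), int(pupil_center[1])
    # Four regions around the pupil center: (rows moving away, columns moving away).
    regions = {
        "top_left":     (range(cy, -1, -1), range(cx, -1, -1)),
        "top_right":    (range(cy, -1, -1), range(cx, w)),
        "bottom_left":  (range(cy, h),      range(cx, -1, -1)),
        "bottom_right": (range(cy, h),      range(cx, w)),
    }
    points = {}
    for name, (rows, cols) in regions.items():
        found = None
        for x in cols:              # move to a new column once one is exhausted
            for y in rows:          # move row by row away from the pupil center
                if edges[y, x]:     # stop at the first foreground (edge) pixel
                    found = (x, y)
                    break
            if found:
                break
        points[name] = found        # may be None if no edge pixel was found
    return points
```

The four recorded points can then be connected, for example by fitting a simple arc through each eyelid's pair of points, to mask out the eyelid region (step 6 above).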


5 EXPERIMENTAL RESULTS

The proposed method was applied to 3060 images of 200 irises taken from CASIA v3 [17], covering its three subsets. All steps were carried out in MATLAB on a Fujitsu Siemens laptop with an Intel Core 2 CPU T5600 @ 1.83 GHz, 1 GB RAM and Windows XP Professional Service Pack 3. Iris localization achieved 99.9% on the Interval subset, and the same result was obtained on the Lamp and Twin subsets. The elliptical consideration of the pupil in the segmentation step played a significant role by including the whole pupil area and, consequently, the whole iris. Eyelid detection achieved 99.9% on the Interval subset, 98% on Lamp and 97% on Twin. The main challenge was the Twin subset, where it was difficult to find the correct points for drawing the arc because of the complex borders after edge extraction.

The accuracy figures were obtained by dividing the number of correctly segmented images by the total number of images. Figures 8, 9 and 10 show segmentation results obtained with the method that considers the pupil as a circle and with the method that considers it as an ellipse; the images produced under the elliptical shape consideration are clearly better than those produced under the circular shape consideration.

Figure 8: Segmentation results with (A) circular shape consideration and (B) elliptical shape consideration, for the Interval subset.

Figure 9: Segmentation results with (A) circular shape consideration and (B) elliptical shape consideration, for the Lamp subset.

Figure 10: Segmentation results with (A) circular shape consideration and (B) elliptical shape consideration, for the Twin subset.

The final results of the proposed segmentation method (iris localization and eyelid detection), compared with some existing methods, are presented in Table 1.

TABLE 1
THE PERFORMANCE OF THE PROPOSED METHOD COMPARED WITH EXISTING METHODS

                      Iris Segmentation Accuracy
Method            Interval    Lamp      Twin
Proposed          99.9%       98.95%    98.45%
Gupta [11]        99.9%       81%       80%
Masek [16]        83%         82%       81%
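Written out, the accuracy measure used in the text and in Table 1 is simply:

\text{Accuracy} = \frac{\text{number of correctly segmented images}}{\text{total number of images}} \times 100\%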

6 CONCLUSION
In this paper, we enhanced a method that can guarantee accuracy during the segmentation stage by including the whole iris image, considering the elliptical shape of the pupil and the iris. The proposed method includes pupil detection, iris localization and eyelid detection. Most of the techniques used were based on the techniques of [1] and [11].

7 REFERENCES
[1] Daugman, J.G. (2004), How iris recognition works, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, pp. 21-30.
[2] Park, H.A. and Park, K.R. (2007), Iris recognition based on score level fusion by using SVM, Pattern Recognition Letters, Vol. 28 (15), pp. 2019-2028.
[3] Miyazawa, K., Ito, K., Aoki, T., Kobayashi, K. and Nakajima, H. (2008), An effective approach for iris recognition using phase-based image matching, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30 (10), pp. 1741-1756.
[4] Yu, L., Zhang, D. and Wang, K. (2007), The relative distance of key point based iris recognition, Pattern Recognition, Vol. 40 (2), pp. 423-430.
[5] Kim, J., Cho, S. and Choi, J. (2004), Iris recognition using wavelet features, Journal of VLSI Signal Processing, Vol. 38 (2), pp. 147-156.


[6] Daugman, J.G. (1993), High confidence visual recognition of persons by a test of statistical independence, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, pp. 1148-1161.
[7] Daugman, J.G. (2001), Statistical richness of visual phase information: update on recognizing persons by iris patterns, International Journal of Computer Vision, Vol. 45 (1), pp. 25-38.
[8] Daugman, J.G. (2007), New methods in iris recognition, IEEE Transactions on Systems, Man, and Cybernetics, Part B, Vol. 37 (5), pp. 1167-1175.
[9] Wei, L. and Jiang, L.-H. (2009), Fast iris segmentation by rotation average analysis of intensity-inversed image, Springer Berlin / Heidelberg, Vol. 5855, pp. 340-349.
[10] Abiyev, R.H. and Altunkaya, K. (2009), Neural network based biometric personal identification with fast iris segmentation, The Institute of Control, Robotics and Systems Engineers and The Korean Institute of Electrical Engineers, co-published with Springer-Verlag GmbH, Vol. 7, pp. 17-23.
[11] Gupta, A., Kumari, A., Kundu, B. and Agarwal, I. (2009), CDIS: Circle Density based Iris Segmentation, Second International Conference on Contemporary Computing (IC3 2009), Noida.
[12] Jeong, D.S., Hwang, J.W., Kang, B.J., Park, K.R., Won, C.S., Park, D.-K. and Kim, J. (2010), A new iris segmentation method for non-ideal iris images, Image and Vision Computing, Vol. 28, pp. 254-260.
[13] Wei, L. and Jiang, L.-H. (2009), Fast iris segmentation by rotation average analysis of intensity-inversed image, Springer Berlin / Heidelberg, Vol. 5855, pp. 340-349.
[14] Abiyev, R.H. and Altunkaya, K. (2009), Neural network based biometric personal identification with fast iris segmentation, The Institute of Control, Robotics and Systems Engineers and The Korean Institute of Electrical Engineers, co-published with Springer-Verlag GmbH, Vol. 7, pp. 17-23.
[15] Fitzgibbon, A., Pilu, M. and Fisher, R.B. (1999), Direct least square fitting of ellipses, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, pp. 477-480.
[16] Masek, L. and Kovesi, P. (2003), MATLAB source code for a biometric identification system based on iris patterns, The School of Computer Science and Software Engineering, The University of Western Australia.
[17] CASIA iris database, Institute of Automation, Chinese Academy of Sciences, http://sinobiometrics.com/casiairis.h.

Muaadh Sh. Azzubeiry is a Ph.D. student in the Faculty of Computer Science and Information Systems at Universiti Teknologi Malaysia (UTM). He received his Bachelor of Science in Computer Science in 2002 from Almustansiriah University, Baghdad, Iraq, and a Master of Science in Computer Science in 2010 from the University of Technology Malaysia (UTM). In 2011 he started his Ph.D. in Computer Science at the Department of Computer Graphics and Multimedia, UTM. His research interests include pattern recognition, biometrics and security systems (fingerprint classification and recognition, signature verification, iris identification and face recognition). He has been a lecturer at Sanaa Community College, Yemen, since 2002.

Prof. Dr. Dzulkifli bin Mohamad is a Professor at the University of Technology Malaysia. He received his Bachelor of Science from the National University of Malaysia in 1978, a Postgraduate Diploma from the University of Glasgow, UK, in 1981, a Master of Science from the University of Technology Malaysia in 1990 and a Ph.D. from the University of Technology Malaysia in 1997. He has held various positions at UTM and is a consultant for several firms. He has supervised more than 120 master's and Ph.D. students and evaluated or examined more than 200 postgraduates. Prof. Dr. Dzulkifli has received numerous awards and published more than 200 research papers in international journals and conferences. His areas of interest are biometrics, pattern recognition and multimedia signal processing.
