
Computer Vision and Image Understanding 110 (2), 281-307, May 2008.

Image Understanding for Iris Biometrics: A Survey


Kevin W. Bowyer∗, Karen Hollingsworth, and Patrick J. Flynn
Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana 46556

Abstract

This survey covers the historical development and current state of the art in image understanding for iris biometrics. Most research publications can be categorized as making their primary contribution to one of the four major modules in iris biometrics: image acquisition, iris segmentation, texture analysis, and matching of texture representations. Other important research includes experimental evaluations, image databases, applications and systems, and medical conditions that may affect the iris. We also suggest a short list of recommended readings for someone new to the field to quickly grasp the big picture of iris biometrics.

Key words: biometrics, identity verification, iris recognition, texture analysis

1. Introduction

Whenever people log onto computers, access an ATM, pass through airport security, use credit cards, or enter high-security areas, they need to verify their identities. People typically use user names, passwords, and identification cards to prove that they are who they claim to be. However, passwords can be forgotten, and identification cards can be lost or stolen. Thus, there is tremendous interest in improved methods of reliable and secure identification of people. Biometric methods, which identify people based on physical or behavioral characteristics, are of interest because people cannot forget or lose their physical characteristics in the way that they can lose passwords or identity cards. Biometric methods based on the spatial pattern of the iris are believed to allow very high accuracy, and there has been an explosion of interest in iris biometrics in recent years. This paper is intended to provide a thorough review of the use of the iris as a biometric feature.

∗ Corresponding author. Telephone: (574) 631-9978. Fax: (574) 631-9260. Email addresses: kwb@cse.nd.edu (Kevin W. Bowyer), kholling@nd.edu (Karen Hollingsworth), flynn@nd.edu (Patrick J. Flynn). To appear in Computer Vision and Image Understanding. 5 October 2007.

This paper is organized as follows. Section 2 reviews basic background concepts in iris anatomy and biometric performance. Section 3 traces the early development of iris biometrics, providing appropriate context to evaluate more recent research. Sections 4 through 7 survey publications whose primary result relates to one of the four modules of an iris biometrics system: (1) image acquisition, (2) segmentation of the iris region, (3) analysis and representation of the iris texture, or (4) matching of iris representations. Section 8 discusses evaluations of iris biometrics technology and iris image databases. Section 9 gives an overview of various applications and systems. Section 10 briefly outlines some medical conditions that can potentially affect the iris texture pattern. Finally, Section 11 concludes with a short list of recommended readings for the new researcher.

2. Background Concepts

This section briefly reviews basic concepts of iris anatomy and biometric systems performance. Readers who are already familiar with these topics should be able to skip to the next section.

2.1. Iris Anatomy

The iris is "the colored ring of tissue around the pupil through which light ... enters the interior of the eye" [112]. Two muscles, the dilator and the sphincter muscles, control the size of the iris to adjust the amount of light entering the pupil. Figure 1 shows an example image acquired by a commercial iris biometrics system. The sclera, a white region of connective tissue and blood vessels, surrounds the iris. A clear covering called the cornea covers the iris and the pupil. The pupil region generally appears darker than the iris.
However, the pupil may have specular highlights, and cataracts can lighten the pupil. The iris typically has a rich pattern of furrows, ridges, and pigment spots. The surface of the iris is composed of two regions: the central pupillary zone and the outer ciliary zone. The collarette is the border between these two regions.

The minute details of the iris texture are believed to be determined randomly during the fetal development of the eye. They are also believed to be different between persons and between the left and right eye of the same person [36]. The color of the iris can change as the amount of pigment in the iris increases during childhood. Nevertheless, for most of a human's lifespan, the appearance of the iris is relatively constant.

2.2. Performance of Biometric Systems

Biometrics can be used in at least two different types of applications. In a verification scenario, a person claims a particular identity and the biometric system is used to verify or reject the claim. Verification is done by matching a biometric sample acquired at the time of the claim against the sample previously enrolled for the claimed identity. If the two samples match well enough, the identity claim is verified; if the two samples do not match well enough, the claim is rejected. Thus there are four possible outcomes. A true accept occurs when the system accepts, or verifies, an identity claim, and the claim is true. A false accept occurs when the system accepts an identity claim, but the claim is not true. A true reject occurs when the system rejects an identity claim and the claim is false. A false reject occurs when the system rejects an identity claim, but the claim is true. The two types of errors that can be made are a false accept and a false reject. Biometric performance in a verification scenario is often summarized in a receiver operating characteristic (ROC) curve. The ROC curve plots the verification rate on the Y axis and the false accept rate on the X axis, or, alternatively, the false reject rate on the Y axis and the false accept rate on the X axis. The equal-error rate (EER) is a single number often quoted from the ROC curve. The EER is the point at which the false accept rate equals the false reject rate. The terms verification and authentication are often used interchangeably in this context.
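The error trade-off described above can be made concrete with a small sketch. This is an illustration only: the match scores below are hypothetical dissimilarity values (lower means a closer match), and the simple threshold sweep is one common way to approximate the EER, not a prescribed procedure.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """At a given threshold on a dissimilarity score, compute the
    false accept rate (impostor scores at or below the threshold)
    and the false reject rate (genuine scores above it)."""
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    far = np.mean(impostor <= threshold)
    frr = np.mean(genuine > threshold)
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds and return the operating point where
    FAR and FRR are closest -- a discrete approximation of the EER."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    rates = [far_frr(genuine, impostor, t) for t in thresholds]
    far, frr = min(rates, key=lambda r: abs(r[0] - r[1]))
    return (far + frr) / 2.0

# Hypothetical normalized-Hamming-distance scores for illustration.
genuine = [0.08, 0.11, 0.15, 0.22, 0.31]
impostor = [0.38, 0.42, 0.45, 0.47, 0.50]
```

Sweeping the threshold and recording (FAR, verification rate) pairs at each step traces out the ROC curve itself; the EER is just one summary point on that curve.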
In an identification scenario, a biometric sample is acquired without any associated identity claim. The task is to identify the unknown sample as matching one of a set of previously enrolled known samples. The set of enrolled samples is often called a gallery, and the unknown sample is often called a probe. The probe is matched against all of the entries in the gallery, and the closest match, assuming it is close enough, is used to identify the unknown sample.

Fig. 1. Image 02463d1276 from the Iris Challenge Evaluation Dataset. Elements seen in a typical iris image are labeled here. The ICE dataset is described in the text.

Similar to the verification scenario, there are four possible outcomes. A true positive occurs when the system says that an unknown sample matches a particular person in the gallery and the match is correct. A false positive occurs when the system says that an unknown sample matches a particular person in the gallery and the match is not correct. A true negative occurs when the system says that the sample does not match any of the entries in the gallery, and the sample in fact does not. A false negative occurs when the system says that the sample does not match any of the entries in the gallery, but the sample in fact does belong to someone in the gallery. Performance in an identification scenario is often summarized in a cumulative match characteristic (CMC) curve. The CMC curve plots the percentage correctly recognized on the Y axis and the cumulative rank considered as a correct match on the X axis. For a cumulative rank of 2, if the correct match occurs for either the first-ranked or the second-ranked entry in the gallery, then it is considered a correct recognition, and so on. The rank-one recognition rate is a single number often quoted from the CMC curve. The terms identification and recognition are often used interchangeably in this context.

3. Early History of Iris Biometrics

The early history of iris biometrics can be considered as running approximately up through 2001. About a dozen iris biometrics publications, including patents, from this period are covered in this survey. Iris biometrics research has accelerated and broadened dramatically since 2001. For example, about forty of the iris biometrics publications covered in this paper were published in 2006.

3.1. Flom and Safir's Concept Patent

The idea of using the iris as a biometric is over 100 years old [8]. However, the idea of automating

iris recognition is more recent. In 1987, Flom and Safir obtained a patent for an unimplemented conceptual design of an automated iris biometrics system [49]. Their description suggested highly controlled conditions, including a headrest, a target image to direct the subject's gaze, and a manual operator. To account for the expansion and contraction of the pupil, they suggested changing the illumination to force the pupil to a predetermined size. While the imaging conditions that they describe may not be practical, some of their other suggestions have clearly influenced later research. They suggest using pattern recognition tools, including difference operators, edge detection algorithms, and the Hough transform, to extract iris descriptors. To detect the pupil, they suggest an algorithm that finds large connected regions of pixels with intensity values below a given threshold. They also suggest that a description of an individual's iris could be stored on a credit card or identification card to support a verification task.

Johnston [73] published a report in 1992 on an investigation of the feasibility of iris biometrics conducted at Los Alamos National Laboratory, after Flom and Safir's patent but prior to Daugman's work, described below. Iris images were acquired for 650 persons and followed up over a 15-month period. The pattern of an individual iris was observed to be unchanged over the 15 months. The complexity of an iris image, including specular highlights and reflections, was noted. It was concluded that iris biometrics held potential for both verification and identification scenarios, but no experimental results were presented.

3.2. Daugman's Approach

The most important work in the early history of iris biometrics is that of Daugman. Daugman's 1994 patent [30] and early publications (e.g., [29]) described an operational iris recognition system in some detail.
It is fair to say that iris biometrics as a field has developed with the concepts in Daugman's approach becoming a standard reference model. Also, because the Flom and Safir patent and the Daugman patent were held for some time by the same company, nearly all existing commercial iris biometric technology is based on Daugman's work. Daugman's patent states that the system acquires "through a video camera a digitized image of an eye of the human to be identified." A 2004 paper [33] said that image acquisition should use near-infrared illumination, so that the illumination could be controlled yet remain unintrusive to humans. Near-infrared illumination also helps reveal the detailed structure of heavily pigmented (dark) irises: melanin pigment absorbs much of visible light, but reflects more of the longer wavelengths of light.

Systems built on Daugman's concepts require subjects to position their eye within the camera's field of view. The system assesses the focus of the image in real time by looking at the power in the middle and upper frequency bands of the 2-D Fourier spectrum. The algorithm seeks to maximize this spectral power by adjusting the focus of the system, or by giving the subject audio feedback to adjust their position in front of the camera. More detail on the focusing procedure is given in the appendix of [33].

Given an image of the eye, the next step is to find the part of the image that corresponds to the iris. Researchers in the field of face recognition had previously proposed a method for searching for eyes in a face by using deformable templates. A deformable template was specified by a set of parameters and allowed knowledge about the expected shape of an eye to guide the detection process [179]. Daugman's early work approximated the pupillary and limbic boundaries of the eye as circles. Thus, a boundary could be described with three parameters: the radius r and the coordinates of the center of the circle, x0 and y0. He proposed an integro-differential operator for detecting the iris boundary by searching the parameter space. His operator is

    \max_{(r, x_0, y_0)} \left| G_\sigma(r) \ast \frac{\partial}{\partial r} \oint_{r, x_0, y_0} \frac{I(x, y)}{2\pi r} \, ds \right|    (1)
where Gσ(r) is a smoothing function and I(x, y) is the image of the eye. All early research in iris segmentation assumed that the iris had a circular boundary. However, the pupillary and limbic boundaries are often not perfectly circular. Recently, Daugman has studied alternative segmentation techniques to better model the iris boundaries [35]. Even when the inner and outer boundaries of the iris are found, some of the iris may still be occluded by eyelids or eyelashes.

Upon isolating the iris region, the next step is to describe the features of the iris in a way that facilitates comparison of irises.¹ The first difficulty lies
¹ Some authors have pointed out that the plural of "iris" is "irides." We consider that the use of "irises" is also commonly

in the fact that not all images of an iris are the same size. The distance from the camera affects the size of the iris in the image. Also, changes in illumination can cause the iris to dilate or contract. This problem was addressed by mapping the extracted iris region into a normalized coordinate system. To accomplish this normalization, every location on the iris image was defined by two coordinates: (i) an angle between 0 and 360 degrees, and (ii) a radial coordinate that ranges between 0 and 1 regardless of the overall size of the image. This normalization assumes that the iris stretches linearly when the pupil dilates and contracts. A paper by Wyatt [173] explains that this assumption is a good approximation, but it does not perfectly match the actual deformation of an iris. The normalized iris image can be displayed as a rectangular image, with the radial coordinate on the vertical axis and the angular coordinate on the horizontal axis. In such a representation, the pupillary boundary is at the bottom of the image, and the limbic boundary is at the top. The left side of the normalized image marks 0 degrees on the iris image, and the right side marks 360 degrees. The division between 0 and 360 degrees is somewhat arbitrary, because a simple tilt of the head can affect the angular coordinate. Daugman accounts for this rotation later, in the matching technique.

Directly comparing the pixel intensities of two different iris images could be prone to error because of differences in lighting between the two images. Daugman uses convolution with two-dimensional Gabor filters to extract the texture from the normalized iris image. In his system, the filters are "multiplied by the raw image pixel data and integrated over their domain of support to generate coefficients which describe, extract, and encode image texture information" [30]. After the texture in the image is analyzed and represented, it is matched against the stored representation of other irises.
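The normalization step described above can be sketched as follows. This is a simplified illustration, not Daugman's implementation: the pupillary and limbic boundaries are assumed to be concentric circles (real segmenters relax this), nearest-neighbor sampling is used, and the image, center, and radii in the usage example are hypothetical.

```python
import numpy as np

def normalize_iris(image, center, r_pupil, r_iris, n_radial=64, n_angular=256):
    """Unwrap an annular iris region into a rectangular image.
    Rows run from the pupillary boundary (radial coordinate 0) to the
    limbic boundary (radial coordinate 1); columns run from 0 to 360
    degrees. Assumes concentric circular boundaries."""
    cx, cy = center
    radial = np.linspace(0.0, 1.0, n_radial)            # 0 = pupil, 1 = limbus
    theta = np.linspace(0.0, 2 * np.pi, n_angular, endpoint=False)
    # Linear stretching between the two boundaries, as in the rubber-sheet
    # model; Wyatt [173] notes this is only an approximation.
    r = r_pupil + radial[:, None] * (r_iris - r_pupil)
    x = np.clip(np.round(cx + r * np.cos(theta)), 0, image.shape[1] - 1).astype(int)
    y = np.clip(np.round(cy + r * np.sin(theta)), 0, image.shape[0] - 1).astype(int)
    return image[y, x]

# Usage with a synthetic image and made-up boundary parameters:
img = np.arange(10000, dtype=float).reshape(100, 100)
unwrapped = normalize_iris(img, center=(50, 50), r_pupil=10, r_iris=40)
```

Because the radial coordinate always runs 0 to 1, two images of the same iris taken at different distances or dilation levels map to rectangles of the same size, which is what makes the later bit-by-bit comparison possible.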
If iris recognition were to be implemented on a large scale, the comparison between two images would have to be very fast. Thus, Daugman chose to quantize each filter's phase response into a pair of bits in the texture representation. Each complex coefficient was transformed into a two-bit code: the first bit was equal to 1 if the real part of the coefficient was positive, and the second bit was equal to 1 if the imaginary part of the coefficient was positive. Thus, after analyzing the texture
accepted as the plural of "iris," and we use the simpler word here.

of the image using the Gabor filters, the information from the iris image was summarized in a 256-byte (2048-bit) binary code. The resulting binary iris codes can be compared efficiently using bitwise operations.² Daugman uses a metric called the normalized Hamming distance, which measures the fraction of bits for which two iris codes disagree.³ A low normalized Hamming distance implies strong similarity of the iris codes. If parts of the irises are occluded, the normalized Hamming distance is the fraction of bits that disagree in the areas that are not occluded in either image. To account for rotation, comparison between a pair of images involves computing the normalized Hamming distance for several different orientations that correspond to circular permutations of the code in the angular coordinate. The minimum computed normalized Hamming distance is assumed to correspond to the correct alignment of the two images.

The modules of an iris biometrics system generally following Daugman's approach are depicted in Figure 2. The goal of image acquisition is to acquire an image that has sufficient quality to support reliable biometrics processing. The goal of segmentation is to isolate the region that represents the iris. The goal of texture analysis is to derive a representation of the iris texture that can be used to match two irises. The goal of matching is to evaluate the similarity of two iris representations. The distinctive essence of Daugman's approach lies in conceiving the representation of the iris texture as a binary code obtained by quantizing the phase response of a texture filter. This representation has several inherent advantages, among them the speed of matching through the normalized Hamming distance, easy handling of rotation of the iris, and an interpretation of the matching as the result of a statistical test of independence [29].
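The quantization and matching scheme can be illustrated with a toy sketch. The Gabor filtering itself is omitted: the complex coefficients below merely stand in for filter responses, the code is far shorter than the real 2048 bits, and the function names and shift range are our own illustrative choices.

```python
import numpy as np

def encode_phase(coeffs):
    """Quantize complex filter responses into a binary code:
    one bit for the sign of the real part, one for the imaginary part."""
    coeffs = np.asarray(coeffs)
    bits = np.empty(2 * coeffs.size, dtype=np.uint8)
    bits[0::2] = (coeffs.real > 0)
    bits[1::2] = (coeffs.imag > 0)
    return bits

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Normalized Hamming distance: fraction of disagreeing bits,
    counted only where both occlusion masks mark bits as valid."""
    valid = np.ones(code_a.shape, bool)
    if mask_a is not None:
        valid &= mask_a
    if mask_b is not None:
        valid &= mask_b
    return np.mean(code_a[valid] != code_b[valid])

def match(code_a, code_b, max_shift=4):
    """Compare under circular shifts (head tilt) and keep the minimum.
    Each angular position contributes 2 bits, so shift in steps of 2."""
    return min(hamming_distance(np.roll(code_a, 2 * s), code_b)
               for s in range(-max_shift, max_shift + 1))
```

The occlusion masks make the distance a fraction of only the usable bits, and taking the minimum over shifts is what absorbs rotation of the head between the two captures.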

² The term "iris code" was used by Daugman in his 1993 paper. We use this term to refer to any binary representation of iris texture that is similar to Daugman's representation.
³ The Hamming distance is the number of bits that disagree. The normalized Hamming distance is the fraction of bits that disagree. Since the normalized Hamming distance is used so frequently, many papers simply say Hamming distance when referring to the normalized Hamming distance. We also follow this practice in subsequent sections of this paper.

Fig. 2. Major Steps In Iris Biometrics Processing.

3.3. Wildes' Approach

Wildes [168] describes an iris biometrics system developed at Sarnoff Labs that uses a very different technical approach from that of Daugman. Whereas Daugman's system acquires the image using an LED-based point light source in conjunction with a standard video camera, the Wildes system uses a diffuse source and polarization in conjunction with a low-light-level camera. When localizing the iris boundary, Daugman's approach looks for a maximum in an integro-differential operator that responds to a circular boundary. By contrast, Wildes' approach involves computing a binary edge map followed by a Hough transform to detect circles. In matching two irises, Daugman's approach involves computation of the normalized Hamming distance between iris codes, whereas Wildes applies a Laplacian of Gaussian filter at multiple scales to produce a template and computes the normalized correlation as a similarity measure. Wildes briefly describes [168] the results of two experimental evaluations of the approach, involving images from several hundred irises. This paper demonstrates that multiple distinct technical approaches exist for each of the main modules of an iris biometrics system.

There are advantages and disadvantages to both

Daugman's and Wildes' designs. Daugman's acquisition system is simpler than Wildes' system, but Wildes' system has a less intrusive light source designed to eliminate specular reflections. For segmentation, Wildes' approach is expected to be more stable to noise perturbations; however, it makes less use of the available data, due to the binary edge abstraction, and therefore might be less sensitive to some details. Also, Wildes' approach encompassed eyelid detection and localization. For matching, the Wildes approach made use of more of the available data, by not binarizing the bandpass-filtered result, and hence might be capable of finer distinctions; however, it yields a less compact representation. Furthermore, the Wildes method used a data-driven approach to image registration to align the two instances to be compared, which might better capture the real geometric deformations between the instances, but at increased computational cost. In 1996 and 1998, Wildes et al. filed two patents [172] which described their automated segmentation method, the normalized spatial correlation for matching, and an acquisition system allowing a user to self-position his or her eye. A more recent book chapter by Wildes [169] largely follows the treatment in his earlier paper [168]. However, some of the technical details of the system are updated and there is discussion of some experimental evaluations of iris

biometrics done since the earlier paper. Earlier and less detailed descriptions of the system appear in [170,171].

4. Image Acquisition

This section covers publications that relate primarily to image acquisition. These generally fall into one of two categories, corresponding to the first two subsections. The first category is engineering image acquisition to make it less intrusive for the user; the Iris on the Move project is a major example of this [97]. The other category is developing metrics for iris image quality, in order to allow more accurate determination of good and bad images. All current commercial iris biometrics systems still have constrained image acquisition conditions. Near-infrared illumination, in the 700-900 nm range, is used to light the face, and the user is prompted with visual and/or auditory feedback to position the eye so that it can be in focus and of sufficient size in the image. An example of such a system is shown in Figure 3. In 2004, Daugman suggested that the iris should have a diameter of at least 140 pixels [33]. The International Standards Organization (ISO) Iris Image Standard released in 2005 is more demanding, specifying a diameter of 200 pixels [67].

4.1. Engineering Less Intrusive Image Acquisition

As early as 1996, Sensar Inc. and the David Sarnoff Research Center [54] developed a system that would actively find the eye of the nearest user standing between 1 and 3 feet from the cameras. Their system used two wide field-of-view cameras and a cross-correlation-based stereo algorithm to search for the coarse location of the head. They used a template-based method to search for the characteristic arrangement of features in the face. Next, a narrow field-of-view (NFOV) camera would confirm the presence of the eye and acquire the eye image. Two incandescent lights, one on each side of the camera, illuminated the face. The NFOV camera's eye-finding algorithm searched for the specular reflections of these lights to locate the eye.
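The specular-reflection cue used for eye finding can be sketched as a simple bright-spot detector. This is not Sensar's algorithm, merely one plausible realization of the idea: the threshold, the minimum spot size, and the tiny synthetic image in the test are all illustrative assumptions; a real system works on full video frames.

```python
import numpy as np

def find_bright_spots(image, threshold=240, min_size=2):
    """Locate candidate specular highlights: threshold the image, group
    bright pixels into 4-connected components via flood fill, and return
    the (row, column) centroid of each component of at least min_size."""
    bright = image >= threshold
    visited = np.zeros_like(bright)
    spots = []
    for y, x in zip(*np.nonzero(bright)):
        if visited[y, x]:
            continue
        # Flood fill over the 4-neighborhood to collect one component.
        stack, pixels = [(y, x)], []
        visited[y, x] = True
        while stack:
            cy, cx = stack.pop()
            pixels.append((cy, cx))
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                        and bright[ny, nx] and not visited[ny, nx]):
                    visited[ny, nx] = True
                    stack.append((ny, nx))
        if len(pixels) >= min_size:
            ys, xs = zip(*pixels)
            spots.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return spots
```

A pair of such spots at roughly the expected spacing of the two illuminators is strong evidence of an eye; the same thresholding idea, inverted to find large dark regions, is essentially the pupil detector Flom and Safir proposed.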
Sensar's system was also described by Vecchia et al. in 1998 [163]. Sensar piloted the system with automatic teller machine manufacturers in England and Japan [15]. Sensar's system showed high performance but required specialized lighting conditions to find the eye.

Sung et al. [148] suggested using a fixed template to look for the inner eye corner, because they claimed that the shape and orientation of the eye corner would be consistent across different people. Several papers have investigated how the working volume of an iris acquisition system can be expanded. Fancourt et al. [47] demonstrated that it is possible to acquire images at a distance of up to ten meters that are of sufficient quality to support iris biometrics. However, their system required very constrained conditions. Abiantun et al. [3] sought to increase the vertical range of an acquisition system by using face detection on a video stream, and a rack-and-pinion system that moves the camera up or down a trackbar depending on whether the largest detected face is in the top or bottom half of the image. By using a high-zoom NFOV camera, Sensar's 1996 system handled subjects standing anywhere within a two-foot-deep region; in contrast, Narayanswamy and Silveira [105,106] sought to increase the depth of field in which a camera at a fixed focus, without a zoom lens, could still capture an acceptable iris image. Their approach combined an aspherical optical element and wavefront-coded processing of the image. Smith et al. [140] show the results of a wavefront coding technology experiment on a dataset of 150 images from 50 people.

Other recent work has investigated speeding up and improving the focusing process. Park and Kim [115] propose an approach to fast acquisition of in-focus iris images. They exploit the specular reflection that can be expected to be found in the pupil region of iris images. To cope with the possible presence of eyeglasses, a dual illuminator scheme is used. A paper by He et al. [57] also discusses the acquisition of in-focus images. Their paper discusses the differences between fixed-focus and auto-focus imaging devices. The effects of illumination at different near-infrared wavelengths are illustrated.
They conclude that illumination outside the 700-900 nm range cannot reveal the rich texture of the iris. However, irises with only moderate levels of pigmentation image reasonably well in visible light. The least constrained system to date is described by Matey et al. [97]. They aim to acquire iris images as a person walks at normal speed through an access control point such as those common at airports. The image acquisition is based on high-resolution cameras, video-synchronized strobed illumination, and specularity-based image segmentation. The system aims to be able to capture useful images in a volume of space 20 cm wide and 10 cm deep, at a distance of

Fig. 3. Image acquisition using an LG2200 camera.

approximately 3 meters. The height of the capture volume is nominally 20 cm, but can be increased by using additional cameras. The envisioned scenario is that subjects are moderately cooperative; they look forward and do not engage in behavior intended to prevent iris image acquisition, such as squinting or looking away from the acquisition camera. Subjects may be required to remove sunglasses, depending on the optical density of those sunglasses. Most subjects should be able to wear normal eyeglasses or contact lenses. Experiments were performed with images from 119 Sarnoff employees. The overall recognition rate (total number of successful recognitions divided by the total number of attempts) for all subjects was 78%. The paper concludes that "the Iris on the Move system is the first, and at this time the only, system that can capture iris images of recognition quality from subjects walking at a normal pace through a minimally confining portal." An example of such a portal is shown in Figure 4.

4.2. Quality Metrics for Iris Images

Overall iris image quality is a function of focus, occlusion, lighting, number of pixels on the iris, and other factors. Several studies report that using an image quality metric can improve system performance [22,74], either by screening out poor-quality images or by using a quality metric in the matching. However, there is no generally accepted measure of overall iris image quality. Several groups have studied how to determine the focus of an image. In 1999, Zhang and Salganicoff [183] filed a patent discussing how to measure the focus of an image by analyzing the sharpness of the pupil/iris boundary. Daugman suggested that image focus could be measured by calculating the total high-frequency power in the 2-D Fourier spectrum of the image [31,33]. Daugman uses an 8x8 convolution kernel for focus assessment. Kang and Park [75] propose a 5x5 convolution kernel similar to Daugman's. They note that the 5x5 kernel is faster and captures more high-frequency bands than Daugman's. An image restoration step is proposed for any image with a focus score below a certain threshold. Wei et al. [167] also suggest using a 5x5 filter,

Fig. 4. This Iris on the Move portal acquires an iris image as subjects walk through a portal at normal walking speed. The portal itself contains infrared lights to illuminate the subject. Three high-zoom video cameras in the far cabinet take video streams of the subject.

with a shape similar to Daugman's, for detecting defocused images. Additionally, they detect motion-blurred images using a variation of the sum modulus difference (SMD) filter proposed by Jarvis [71]. Chen et al. [22] argue that iris texture is so localized that the quality varies from region to region. They use a wavelet-based transform because it can be applied to a local area of the image. They report on experiments using images from the CASIA 1 database [17] and a database collected at West Virginia University. They report that the proposed quality index can reliably predict the matching performance of an iris recognition system [22] and that incorporating the measure of image quality into the matching can improve the EER. The idea that image quality can vary over the iris region seems to be a valid and potentially important point. As an alternative to examining the high-frequency power of an image directly, neural networks can be

used to evaluate the quality of an image. Proenca and Alexandre [125] train a neural network to identify five types of noise (information other than iris): eyelids, eyelashes, pupil, strong reflections, and weak reflections. A strong reflection is one that corresponds to a light source pointed directly at the iris. Krichen et al. [79] train a Gaussian mixture model on 30 high-quality images and use it to reject irises with occlusion and blur. Like the method of Chen et al. [22], both of these methods look at local regions of the iris. Ye et al. [177] train a compound neural network system to classify images into the categories of good and bad. While many papers simply try to detect defocused images, Kalka et al. [74] consider the effects of defocus blur, motion blur, off-angle view, occlusion, specularities, lighting, and pixel counts on image quality. Estimates for the individual factors are combined into an overall quality metric using a Dempster-Shafer approach. Experiments are performed using both the CASIA 1 dataset and a West Virginia University (WVU) dataset. The measurements show that the CASIA dataset contains higher-quality data than the WVU dataset. It is shown that the quality metric can predict recognition performance reasonably well. It is also noted that computation of the quality metric requires an initial segmentation, and that failed localization/segmentation will result in inaccurate quality scores.

Nandakumar et al. [104] discuss issues related to fusing quality scores from different modalities in a multi-biometric system. For a given modality, rather than have separate quality scores for the gallery sample and the probe sample, they have a score for the (gallery sample, probe sample) combination. The particular multi-biometric example discussed in the paper is the combination of fingerprint and iris. The iris image quality metric used is essentially that of [22]. The development of better image quality metrics appears to be an active area of research. A better metric would be one that correlates better with biometric accuracy. Such a metric might be achieved by an improved method of combining metrics for different factors such as occlusion and defocus, by less dependence on an accurate iris segmentation, by improved handling of variations throughout the iris region, or by other means.
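The convolution-kernel focus measures discussed above share one idea: a sharp image has more high-frequency energy than a defocused one. A minimal sketch of that idea follows. Note the kernel here is a generic Laplacian-style high-pass filter chosen for illustration, not Daugman's actual 8x8 kernel (which is given in the appendix of [33]), and the direct double loop is written for clarity rather than speed.

```python
import numpy as np

# Generic 3x3 high-pass kernel (zero response on constant regions).
HIGH_PASS = np.array([[-1.0, -1.0, -1.0],
                      [-1.0,  8.0, -1.0],
                      [-1.0, -1.0, -1.0]])

def focus_score(image):
    """Total squared high-pass response over the image. Defocus
    attenuates high spatial frequencies, so blurrier images score lower."""
    img = np.asarray(image, dtype=float)
    kh, kw = HIGH_PASS.shape
    score = 0.0
    for dy in range(img.shape[0] - kh + 1):
        for dx in range(img.shape[1] - kw + 1):
            patch = img[dy:dy + kh, dx:dx + kw]
            score += float(np.sum(patch * HIGH_PASS)) ** 2
    return score
```

A system can compare this score against a threshold to discard frames, or, as in the focusing loop described earlier, keep adjusting focus in the direction that increases it.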

4.3. Iris Image Datasets Used In Research

Experimental research on segmentation, texture encoding, and matching requires an iris image dataset. Several datasets are discussed in detail in a later section, but one issue deserves a brief mention at this point. The first iris image dataset to be widely used by the research community was the CASIA version 1 dataset. Unfortunately, this dataset had the (originally undocumented) feature that the pupil area in each image had been replaced with a circular region of constant intensity to mask out the specular reflections from the NIR (near-infrared) illuminators [18]. This feature of the dataset naturally calls into question any results obtained using it, as the iris segmentation has been made artificially easy [119].

5. Segmentation of the Iris Region

As mentioned earlier, Daugman's original approach to iris segmentation uses an integro-differential operator, and Wildes [168] suggests a method involving edge detection and a Hough transform. Variations of the edge detection and Hough transform approach have since been used by a number of researchers. Figure 5 shows an image with detected edge points marked as white dots. The Hough transform considers a set of edge points and finds the circle that, in some sense, best fits the most edge points. Figure 6 shows examples of circles found for the pupillary and limbic boundaries.

Fig. 5. Iris Image with Edge Points Detected.

A number of papers in this area present various approaches to finding the pupillary and limbic boundaries. A smaller number of papers deal specifically with determining the parts of the iris region that are occluded by eyelids, eyelashes, or specularities. Occlusion due to eyelashes and specularities is sometimes loosely referred to as "noise." These two categories of papers are reviewed in the next subsections.

5.1. Finding Pupillary and Limbic Boundaries

Much of the research in segmentation has tried to improve upon Wildes et al.'s idea of using edge detection and a Hough transform. To reduce computational complexity, Huang et al. [65] suggest the modification of first finding the iris boundaries in a rescaled image, and then using that information to guide the search on the original image. They present a unique idea to make the matching step rotation-invariant. Using an image that has both eyes in it, they use the left eye for recognition and the direction to the right eye to establish a standard orientation. Liu et al. [88] use Canny edge detection and a Hough transform as well, but try to simplify the methods to improve the speed. The pupillary and limbic boundaries are modeled as two concentric circles. Sample images are shown for which this assumption seems plausible, but the idea is only applied to 5 different subjects. Sung et al. [149] use traditional methods for finding the iris boundaries, but additionally, they find the collarette boundary using histogram equalization and high-pass filtering.

Liu et al. [87] implement four improvements in their ND IRIS segmentation algorithm. Edge points around the specular highlights are removed by ignoring Canny edges at pixels with a high intensity value. Additionally, they use an improved Hough transform. They introduce a hypothesize-and-verify step to catch incorrect candidate iris locations, and they present a method for improved detection of occlusion by eyelids. Experiments compare their results with those of Masek [96] and with the location reported by the LG 2200 iris biometrics system. The Masek iris location resulted in 90.9% rank-one recognition, the LG 2200 location resulted in 96.6%, and the ND IRIS location resulted in 97.1%.

Some groups follow the general idea of Wildes et al., but additionally propose a method of finding a coarse location of the pupil to guide the subsequent search for the iris boundaries. Lili and Mei [85] find an initial coarse localization of the iris based on the assumption that there are three main peaks in the image histogram, corresponding to the pupil, iris and sclera regions. They also use edge point detection and then fit circles to the outer and inner boundaries of the iris. Iris image quality is evaluated in terms of sharpness, eyelid and eyelash occlusion, and pupil dilation. In the paper by He and Shi [56], the image is binarized to locate the pupil, and then edge detection and a Hough transform are used to find the limbic boundary. Feng et al. [48] use a

coarse-to-fine strategy for finding boundaries approximated as (portions of) circles. One of their suggested improvements is to use the lower contour of the pupil in estimating parameters of the pupillary boundary, because it is stable even when the iris image is seriously occluded.

Other authors have also suggested approaches that find a coarse localization of the pupil. Many of these approaches effectively proceed from the assumption that the pupil will be a uniform dark region, and report good results for locating the iris in the CASIA 1 dataset. Such approaches may run into problems when evaluated on real, unedited iris images. Tian et al. [157] search for pixels that have a gray value below a fixed threshold, search these pixels for the approximate pupil center, and then use edge detection and a Hough transform on a limited area based on the estimate of the pupil center. Xu et al. [174] divide the image into a rectangular grid, use the minimum mean intensity across the grid cells to generate a threshold for binarizing the image to obtain the pupil region, and then search out from this region to find the limbic boundary. Zaim et al. [180] find the pupil region by applying a split-and-merge algorithm to detect connected regions of uniform intensity. Sun et al. [142] assume that in

Fig. 6. Example segmented iris image without significant eyelid occlusion.

an iris image, the gray values inside the pupil are the lowest in the image, use this assumption to find the pupil, and then constrain a Canny edge detector and Hough transform for the limbic boundary.

In 2002, Camus and Wildes [16] presented a method that did not rely on edge detection and a Hough transform. This algorithm was more similar to Daugman's algorithm, in that it searched in N^3 space for three parameters (x, y, and r). First, a threshold is used to identify specularities, which are then filled in using bilinear interpolation. Then, local minima of image intensity are used as seed points in a coarse-to-fine algorithm. The parameters are tuned to maximize a goodness-of-fit criterion that is weighted to favor solutions where the pupil has darker average intensity than the iris and the pupil-to-iris radii ratio falls within an expected range. This algorithm finds the eye accurately for 99.5% of cases without glasses and 66.6% of cases with glasses, and it runs 3.5 times faster than Daugman's 2001 algorithm [31].

Several relatively unique approaches to iris segmentation have been proposed. Bonney et al. [12]

find the pupil by using the least significant bit-plane and erosion-and-dilation operations. Once the pupil area is found, they calculate the standard deviation in the horizontal and vertical directions to search for the limbic boundary. Both pupillary and limbic boundaries are modeled as ellipses. El-Bakry [45] proposed a modular neural network for iris segmentation, but no experimental results are presented to show whether the proposed approach might realize advantages over known approaches. More recently, He et al. [58] proposed a Viola-and-Jones-style cascade of classifiers [164] for detecting the presence of the pupil region; the boundaries of the region are then adjusted to an optimal setting.

Proenca et al. [124] evaluated four different clustering algorithms for preprocessing the images to enhance image contrast. Of the variations tried, the fuzzy k-means clustering algorithm used on the position and intensity feature vector performed the best. They compared their segmentation algorithm with implementations of algorithms by Daugman [29], Wildes [168], Camus and Wildes [16], Martin-Roche et al. [38], and Tuceryan [159]. They tested these

methods on the UBIRIS dataset, which contains one session of high-quality images and a second session of lower-quality images. Wildes' original methodology correctly segmented the images 98.68% of the time on the good-quality dataset, and 96.68% of the time on the poorer-quality dataset. The algorithm by Proenca et al. performed second-best with 98.02% accuracy on the good dataset, but had the smallest performance degradation, with 97.88% accuracy on the poorer-quality dataset.

A recent trend in segmentation aims at dealing with off-angle images. Dorairaj et al. [41] assume that an initial estimate of the angle of rotation is available, and then use Daugman's integro-differential operator as an objective function to refine the estimate. Once the angle is estimated, they apply a projective transformation to rotate the off-angle image into a frontal-view image. Li [84] fits an ellipse to the pupil boundary and then uses rotation and scaling to transform the off-angle image so that the boundary is circular. It is shown that the proposed calibration step can improve the separation between intra-class and inter-class differences that is achieved by a Daugman-like algorithm. In 2005, Abhyankar et al. [1] showed that iris segmentation driven by looking for circular boundaries performs worse when the iris image is off-angle. Like [41,84], they also consider projective transformations, but the approach is found to suffer from some serious drawbacks, such as blurring of the iris outer boundaries. They then present an approach involving bi-orthogonal wavelet networks. Later, in [2], they propose using active shape models for finding the elliptical iris boundaries of off-angle images.

5.2. Detecting Occlusion by Eyelids, Eyelashes and Specularities

Kong and Zhang [77] present an approach intended to deal with the presence of eyelashes and specular reflections. Eyelashes are dealt with as separable and mixed.
Separable eyelashes can be distinguished against the texture of the iris, whereas mixed eyelashes present a larger region of occlusion. A modification of Boles' method [9] is used in experiments with 238 images from 48 irises. Results indicate that this approach to accounting for eyelashes and specular reflections can reduce the EER on this dataset by as much as 3%. Huang et al. [64] also propose to consider occlusion by eyelids, eyelashes, and specular highlights.

They extract edge information based on phase congruency, and use this to find the probable boundary of noise or occlusion regions. Experiments show that adding the proposed refinements to a previous algorithm improves the ROC curve obtained in a recognition experiment using an internal CASIA dataset of 2,255 images from 306 irises. A later paper [63] presents similar conclusions. Huang et al. state that "[d]ue to the use of infrared light for illumination, images in the CASIA dataset do not contain specular reflections. Thus, the proposed method has not been tested to remove reflection noises." [64] However, it was recently disclosed that the lack of specularities in the CASIA 1 images is due to intentional editing of the images [18]; the use of infrared illumination of course does not prevent the occurrence of specularities.

Bachoo and Tapamo [7] approach the detection of eyelash occlusion using the gray-level co-occurrence matrix (GLCM) pattern analysis technique. The GLCM is computed for 21x21 windows of the image using the most significant 64 grey levels. A fuzzy C-means algorithm is used to cluster windows into from 2 to 5 types (skin, eyelash, sclera, pupil, and iris) based on features of the GLCM. There are no experimental results in the context of verification or recognition of identity. Possible challenges for this approach are choosing the correct window size and dealing with windows that have a mixture of types.

Based on the works surveyed in this section, there appear to be several active open topics in iris image segmentation. One is handling pupillary and limbic boundaries that are not well approximated as circles, as can be the case when the images are acquired off-angle. Another is dealing with occlusion of the iris region by eyelids, eyelashes, and specularities. A third topic is robust segmentation when subjects are wearing glasses or contact lenses.

6. Analysis and Representation of the Iris Texture

Examining different approaches to analyzing the texture of the iris has perhaps been the most popular area of research in iris biometrics. One body of work effectively looks at using something other than a Gabor filter to produce a binary representation similar to Daugman's iris code. Another body of work looks at using different types of filters to represent the iris texture with a real-valued feature vector. This group of approaches is, in this sense,

Table 1
Segmentation Performance

First Author, Year | Size of Database | Segmentation Results
Camus, 2002 | 640 images without glasses, 30 with glasses | 99.5% of cases without glasses, 66.6% of cases with glasses; average accuracy: 98%
Sung, 2004 | 3167 images | 100% segmentation of iris, 94.54% correct location of collarette
Bonney, 2004 | 108 CASIA 1 images and 104 USNA images | pupil correctly isolated in 99.1% of cases, limbic boundary correct in 66.5% of cases
X. Liu, 2005 | 4249 images | 97.08% rank-one recognition
Lili, 2005 | 2400 images from a CASIA dataset | 99.75% accurate
Proenca, 2006 | UBIRIS dataset: 1214 good quality images, 663 noisy images | 98.02% accurate on good dataset, 97.88% accurate on noisy dataset
Abhyankar, 2006 | 1300 images from CASIA 1 and WVU | 99.76% accurate
Z. He, 2006 | 1200 CASIA images | 99.6%
X. He, 2006 | 1200 CASIA images | 99.7%

more like that of Wildes than that of Daugman. A smaller body of work looks at combinations of these two general categories of approach. The papers reviewed in this section are organized into three subsections, corresponding to these different areas.
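As background for the binary-code approaches reviewed in the next subsection, the basic matching operation on binary iris codes is a masked, rotation-tolerant fractional Hamming distance. The following is a minimal illustrative sketch; the bit layout, mask convention, and shift range are simplifying assumptions of this example, not any particular system's implementation:

```python
# Fractional Hamming distance between two binary iris codes, counting only
# bit positions that are unmasked (valid) in both codes, and taking the
# minimum over a small range of circular shifts to tolerate eye rotation.
# Codes and masks are plain lists of 0/1 values here; real systems pack
# bits into machine words and shift by whole filter positions.

def hamming_distance(code_a, mask_a, code_b, mask_b):
    valid = [i for i in range(len(code_a)) if mask_a[i] and mask_b[i]]
    if not valid:
        return 1.0  # no usable bits: treat as a complete mismatch
    disagree = sum(code_a[i] != code_b[i] for i in valid)
    return disagree / len(valid)

def match_score(code_a, mask_a, code_b, mask_b, max_shift=8):
    """Minimum fractional Hamming distance over circular shifts of code_b."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shifted_code = code_b[s:] + code_b[:s]
        shifted_mask = mask_b[s:] + mask_b[:s]
        best = min(best, hamming_distance(code_a, mask_a,
                                          shifted_code, shifted_mask))
    return best
```

A score near 0 indicates two images of the same iris, while scores for different irises cluster near 0.5, since unrelated bits agree about half the time.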

6.1. Alternate Means to a Binary Iris Code

Many different filters have been suggested for use in feature extraction. Sun et al. [144] use a Gaussian filter. The gradient vector field of an iris image is convolved with a Gaussian filter, yielding a local orientation at each pixel in the unwrapped template. They quantize the angle into six bins. (In contrast, Daugman's method quantizes phase information into four bins corresponding to the four quadrants of the complex plane.) This method was tested using an internal CASIA dataset of 2,255 images and compared against the authors' implementations of three other methods. Another paper by the same group [146] presents similar ideas.

Ma et al. [92] use a dyadic wavelet transform of a sequence of 1-D intensity signals around the inner part of the iris to create a binary iris code. Experiments are performed using an internal CASIA dataset representing 2,255 images of 306 different eyes from 213 different persons. The proposed method is compared to the authors' own previous methods and to re-implementations of the methods of Daugman [33], Wildes [168], and Boles and Boashash [9], without implementation of eyelid and eyelash detection. The proposed method is reported

to achieve a 0.07% equal error rate overall, and 0.09% for comparison of images acquired with approximately one month time lapse. An earlier algorithm in this line of work is described in [93].

Both Chenhong and Zhaoyang [23] and Chou et al. [26] convolve the iris image with a Laplacian-of-Gaussian filter. Chenhong and Zhaoyang use this filter to find blobs in the image that are relatively darker than surrounding regions. An iris code is then constructed based on the presence or absence of detected blobs at points in the image. Chou et al. use both derivative-of-Gaussian and Laplacian-of-Gaussian filters to determine if a pixel is a step or ridge edge, respectively. One measure of the distance between two iris images is then represented by the ratio of the number of corresponding pixels at which the edge maps disagree divided by the number at which they agree. One motivation for these types of filters is that the number of free filter parameters is only three, and hence they can be easily determined. They suggest a genetic algorithm for designing the filter parameters.

Yao et al. [176] use modified Log-Gabor filters because "the Log-Gabor filters are strictly bandpass filters and the [Gabor filters] are not." They state that ordinary Gabor filters would under-represent the high-frequency components in natural images. It is stated that using the modified filter improves the EER from 0.36% to 0.28%.

Monro et al. [102] use the discrete cosine transform for feature extraction. They apply the DCT to overlapping rectangular image patches rotated 45 degrees from the radial axis. The differences between the DCT coefficients of adjacent patch vectors are then calculated, and a binary code is generated from their zero crossings. In order to increase the speed of the matching, the three most discriminating binarized DCT coefficients are kept, and the remaining coefficients discarded.

Fig. 7. Normalized Iris Image.

Three papers [131,129,80] recommend using wavelet packets rather than standard wavelet transforms. Rydgren et al. [131] state that the wavelet packet approach "can be a good alternative to the standard wavelet transform since it offers a more detailed division of the frequency plane." They consider several different types of wavelets: Haar, Daubechies, biorthogonal, coiflet, and symlet. It is reported that "the performance for the benchmark Gabor wavelet is so far superior but it is likely that the performance of the wavelet packets algorithm can be increased in the future." The experimental results are based on 82 images from a total of 33 different irises, obtained from Miles Research. The Miles Research images are not IR-illuminated, and the size of the database and the nature of the images may be factors in interpreting the applicability of the results. A later paper by the same group [129] uses the biorthogonal 1.3 wavelet in a 3-level wavelet packet decomposition. Krichen et al. [80] also consider using wavelet packets for visible-light images. They report that for their own visible-light dataset, the performance of the wavelet packets is an improvement over the classical wavelet approach, but that for the CASIA 1 infrared image dataset the two methods have more similar performance.

A detailed comparison of seven different filter types is given by Thornton et al. [156]. They consider the Haar wavelet; the Daubechies wavelet, order three; the Coiflet wavelet, order one; the Symlet wavelet, order two; the Biorthogonal wavelet, orders two and two; circular symmetric filters; and Gabor wavelets. They applied a single bandpass filter of each type and determined that the Gabor wavelet gave the best equal error rate.
They then tune the parameters of the Gabor filter to optimize performance. They report that although we conclude that Gabor wavelets are the most discriminative bandpass filters for iris patterns among the candidates we

considered, we note that the performance of the Gabor wavelet seems to be highly dependent upon the parameters that determine its specific form. The performance of an iris recognition system depends not only on the filter chosen, but also on the parameters of the filter and the scales at which the filter is applied. Huang and Hu [62] present an approach to finding the right scale for analysis of iris images. They perform a wavelet analysis at multiple scales to find zero-crossings and local extrema, and state that the appropriate scale for a wavelet transform is searched for between zero and six scales by minimizing the Hamming distance of two iris codes. Experimental results are reported for a small set of iris images, involving four images each of five people.

Chen et al. [21] do not delve into what type of wavelet to use, but instead focus on how the output of a wavelet transform is mapped to a binary code. They compare two methods: gradient direction coding with Gray code, and delta modulation coding. On the CASIA 1 dataset, they obtain an EER as low as 0.95% with gradient direction coding and an iris code of 585 bytes. However, this result is based on only those images that successfully pass the pre-processing module, and 132 of 756 CASIA 1 images did not pass the pre-processing module.

Some research has begun to look at ways to account for non-linear deformations of the iris that occur when the pupil dilates. Thornton et al. [155] find the maximum a posteriori probability estimate of the parameters of the relative deformation between a pair of images. They try two methods for extracting texture information from the image: wavelet-phase codes and correlation filters. Their algorithm is tested on the CASIA 1 database and the Carnegie Mellon University database. The results show that estimating the relative deformation between the two images improves performance, no matter which database is used, and no matter whether wavelet-phase codes or correlation filters are used. Wei et al.
[166] model nonlinear iris stretch as a sum of linear stretch and a Gaussian deviation term. Their model also yields an improvement over a simple linear rubber-sheet model. Other methods of creating a binary iris code are

also presented in the literature. Tisse et al. [158] do their texture analysis by computing the analytic image: the sum of the original image signal and the Hilbert transform of the original signal. A predefined template is used to omit the computations at the top and bottom of the iris region, where eyelid occlusion may occur. Thoonsaengngam et al. [152] perform feature extraction by the use of local histogram equalization and a quotient thresholding technique. The quotient thresholding technique binarizes an image so that the ratio between foreground and background of each image, called the decision ratio, is maintained. Matching of iris images is done by maximizing the number of aligned foreground pixels while rotating and translating the template within a range of +/- 10 degrees and +/- 10 pixels, respectively.

The performance results reported in many of the papers in this section and in later sections are very good. Many papers [21,26,27,53,78,101,152,154,176] report equal error rates of less than 1% on the CASIA 1 dataset. Others [26,27,53,116] report correct recognition rates above 99% on the same data. However, there are now much larger and more challenging datasets of unedited images available. Table 2 shows some reported performance results for other datasets. The reported performance levels on the 2255-image CASIA dataset are high; this trend suggests that there may be a difference in difficulty of this dataset as compared to other datasets.

6.2. Real-Valued Feature Vectors

Other researchers have also used various wavelets, but rather than using the output of the wavelet transform to create a binary feature vector, the output is kept as a real-valued feature vector and methods other than Hamming distance are used for comparison. An early example of this is the work by Boles and Boashash [9]. They consider concentric circular bands of the iris region as 1-D intensity signals.
A wavelet transform is performed on each 1-D signal, and a zero-crossing representation is extracted. Two dissimilarity functions are considered: one makes a global measurement of the difference in energy between two zero-crossing representations, and one compares two representations based on the dimensions of the rectangular pulses of the zero-crossing representations. Although the global measurement requires more computation, it is used because it does not require that

the number of zero-crossings be the same in the two representations. Experiments are performed using two different images of each of two different irises, and it is verified that images of the same iris yield a smaller dissimilarity value than images of different irises. This experimental evaluation is quite modest by current standards.

In [133], Sanchez-Avila and Sanchez-Reillo present an approach similar to that of Boles and Boashash [9]. They encode the iris texture by considering a set of 1-D signals from annular regions of the iris, taking a dyadic wavelet transform of each signal, and finding zero-crossings. The Euclidean distance on the original feature values, the Hamming distance on the binarization of the feature values, and a distance measure more directly related to the zero-crossing representation are compared. Their later paper [134] compares their approach with a Daugman-like iris code approach. They experiment with a database of images from 50 people, and thus 100 irises, with at least 30 images of each iris. The images were acquired over an 11-month period. They find that the Daugman-like approach, using Gabor filtering and iris codes, achieves better performance than the zero-crossings approach with two of their distance measures, but that the zero-crossings approach with the binary Hamming distance measure achieves even slightly higher performance. They also report that the zero-crossings-based approaches are faster than the Daugman-like approach.

Several other researchers have tried using wavelet transforms to create real-valued feature vectors. Alim and Sharkas [4] try four different methods: Gabor phase coefficients, a histogram of phase coefficients, a four- and six-level decomposition of the Daubechies wavelet, and a discrete cosine transform (DCT). The output of each feature extraction method is then used to train a neural network. The best performance, at 96% recognition, was found with the DCT and a neural network with 50 input neurons and 10 hidden neurons. Jang et al.
[70] use the Daubechies wavelet transform to decompose the image into subbands. The mean, variance, standard deviation, and energy from the gray-level histogram of the subbands are used as feature vectors. They tested two different matching algorithms and concluded that a support vector machine method worked better than simple Euclidean distance. Gan and Liang [51] use Daubechies-4 wavelet packet decomposition but use weighted Euclidean distance for matching. There are statistical methods that can be used either as an alternative or supplement to wavelets for

Table 2
Reported Recognition Results

First Author, Year | Size of Database | Results
Alim, 2004 [4] | not given | 96.17%
Jang, 2004 [70] | 1694 images including 160 w/ glasses and 11 w/ contact lenses | 99.1%
Krichen, 2004 [80] | 700 visible-light images | FAR/FRR: 0% / 0.57%
Liu, 2005 [87] | 4249 images | 97.08%
Ma, 2002 [94] | 1088 images | 99.85%, FAR/FRR: 0.1 / 0.83
Ma, 2003 [91] | 2255 images | 99.43%, FAR/FRR: 0.1 / 0.97
Ma, 2004 [93] | 2255 images | 99.60%, EER: 0.29%
Ma, 2004 [92] | 2255 images | 100%, EER: 0.07%
Monro, 2007 [102] | 2156 CASIA images and 2955 U. of Bath images | 100%
Proenca, 2007 [122] | 800 ICE images | EER: 1.03%
Rossant, 2005 [129] | 149 images | 100%
Rydgren, 2004 [131] | 82 images | 100%
Sanchez-Reillo, 2001 [136] | 200+ images | 98.3%, EER: 3.6%
Son, 2004 [141] | 1200 images (600 used for training) | 99.4%
Sun, 2004 [144,146] | 2255 images | 100%
Takano, 2004 [150] | images from 10 people | FAR/FRR: 0% / 26%
Thornton, 2006 [154] | CMU database, 2000+ images | EER: 0.23%
Thornton, 2007 [155] | CMU database, 2000+ images | EER: 0.39%
Tisse, 2002 [158] | 300+ images | FAR/FRR: 0% / 11%
Yu, 2006 [181] | 1016 images | 99.74%

feature extraction. Huang et al. [65] used independent component analysis (ICA) for feature extraction. Dorairaj et al. [40] experiment with both principal component analysis (PCA) and independent component analysis (ICA). However, unlike Huang et al., they apply PCA and ICA to the entire iris region rather than to small windows of the iris region. In addition to looking at a Daubechies discrete wavelet transform (DWT), Son et al. [141] try three different statistical methods: principal component analysis (PCA), linear discriminant analysis (LDA), and direct linear discriminant analysis (DLDA). They try five different combinations: DWT+PCA, LDA, DWT+LDA, DLDA, and DWT+DLDA. For the matching step, they try two different classification techniques: support vector machines and a nearest-neighbor approach. The combination that worked best was using the two-dimensional Daubechies wavelet transform to extract iris features, direct linear discriminant analysis to reduce the dimensionality of the feature vector, and support vector machines for matching.

Ma et al. [91] use a variant of the Gabor filter at two scales to analyze the iris texture. They use Fisher's linear discriminant to reduce the original 1,536 features from the Gabor filters to a feature vector of size 200. Their experimental results show that the proposed method performs nearly as well as their implementation of Daugman's algorithm, and is a statistically significant improvement over other algorithms they use for comparison. The experimental results are presented using ROC curves, with 95% confidence intervals shown on the graphs. Another group to try linear discriminant analysis is Chu et al. [27]. They use LPCC and LDA for extracting iris features. LPCC (Linear Prediction Cepstral Coefficients) is an algorithm that is commonly used for extracting features in speech signals. For matching, they use a probabilistic neural network with particle swarm optimization. Another paper by

the same authors [20] presents similar results but gives more detail about the neural network used.

Some of the papers in the literature report unique methods of feature extraction that do not follow the major trends. Takano et al. [150] avoid using any type of transform and instead essentially use the normalized iris image as the feature vector, inputting a normalized 120x25 pixel r-θ image to a rotation spreading neural network. Ives et al. [69] create a normalized histogram of pixel values for the segmented iris region. A close match between the probe histogram and the enrolled histogram allows verification of identity. One motivation for this approach is that the histogram matching avoids the need for rotating the iris code, and so may allow faster recognition. However, the reported EER from experiments on the CASIA 1 dataset is 14%. Gu et al. [53] use a steerable pyramid to decompose an iris image into a set of subbands. Then a fractal dimension estimator is applied in each resulting image subband, yielding a set of features that measure self-similarity on a band-by-band basis. Hosseini et al. [61] use a shape analysis technique. Sample shapes detected in the iris are represented using radius-vector functions and support functions. Miyazawa et al. [101] apply a band-limited phase-only correlation approach to iris matching. This method computes a function of the 2-D discrete Fourier transforms of two images. For a matching score, they use the maximum peak value of this function within an 11-by-11 window centered at the origin.

Yu et al. [181] divide an iris image into 16 sub-images, each of size 32x32. A set of 32 key points is found in each sub-image. These are the maximum values in each of the filtered versions of the sub-image, where 2-D Gabor filters are used. The center of mass of the key points within the sub-image is found.
The feature vector derived from the iris pattern is a set of relative distances from the key points to the center of mass, in this case 32x16 = 512 distance values. The Euclidean distance between two feature vectors is used as a measure of dissimilarity between two irises.

6.3. Combination of Feature Types

One group of work investigates combining information from two different types of feature vectors. For example, Sun et al. [145,147] propose a cascaded system in which the first stage is a traditional

Daugman-like classifier. If the similarity between irises is above a high threshold, then verification is accepted. Otherwise, if similarity is below a low threshold, then verification is rejected. If the similarity is between the thresholds, then the decision is passed to a second classifier that looks at global features: areas enclosed by zero-crossing boundaries. Sun et al. [143] later present a better global classifier. They investigate analyzing the iris features using local binary patterns (LBP) organized into a simple graph structure. The region of the normalized iris image nearer the pupil is divided into 32 blocks, 16 rows of 2, and an LBP histogram is computed for each block. Matching of two images is done by matching (the LBP histograms of) corresponding blocks, subject to a threshold, so that the matching score of two images is from 0 to 32. The fusion of results from this method in combination with the results from either Daugman's [29] or Ma's [92] methods gives an improvement in performance.

Zhang et al. [182] also describe a system that encodes both global and local texture information using a log-Gabor wavelet filter. The global features are intended to be invariant to iris rotation and small errors in localization. The local features are essentially the normal iris code. Unlike Sun et al. [147], Zhang et al. consider global features first, and then the local features.

Two other groups, Vatsa et al. [162] and Park and Lee [114], both present systems that use two types of feature vectors. Vatsa et al. [162] use a typical Daugman-style iris code as a textural feature. A topological feature is obtained by using the high-order four bits of an iris image to create binary templates for the image, finding connected components, and computing the Euler number of each template, which represents the difference between the number of connected components and the number of holes. The feature vector is then the Euler numbers from the four templates.
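The Euler number mentioned above (connected components minus holes) can be computed directly on a small binary template. The sketch below is illustrative only; the 4-connectivity convention and the toy inputs are assumptions of this example, not details of [162]:

```python
# Euler number of a binary image: number of connected foreground components
# minus number of holes (background regions not connected to the border).
# Uses 4-connectivity throughout, for simplicity; real implementations
# typically pair 4-connectivity for foreground with 8- for background.

def _flood(grid, seeds, value, visited):
    """Mark every cell 4-connected to the seeds that holds `value`."""
    rows, cols = len(grid), len(grid[0])
    stack = [s for s in seeds if grid[s[0]][s[1]] == value and s not in visited]
    visited.update(stack)
    while stack:
        r, c = stack.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols and
                    grid[nr][nc] == value and (nr, nc) not in visited):
                visited.add((nr, nc))
                stack.append((nr, nc))

def _count_components(grid, value, visited=None):
    visited = set() if visited is None else visited
    count = 0
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] == value and (r, c) not in visited:
                count += 1
                _flood(grid, [(r, c)], value, visited)
    return count

def euler_number(grid):
    components = _count_components(grid, 1)
    # Flood the background from the border; any background component
    # left unvisited afterwards is a hole.
    rows, cols = len(grid), len(grid[0])
    border = [(r, c) for r in range(rows) for c in range(cols)
              if r in (0, rows - 1) or c in (0, cols - 1)]
    visited = set()
    _flood(grid, border, 0, visited)
    holes = _count_components(grid, 0, visited)
    return components - holes
```

For example, a ring-shaped region (one component enclosing one hole) has Euler number 0, while two separate dots have Euler number 2.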
Park and Lee [114] use a directional filter bank to decompose the iris image. One feature vector is computed as the binarized directional sub-band outputs at various scales. A second feature vector is computed as the blockwise normalized directional energy values. Thus a person is enrolled into the system with two types of feature vectors. Recognition is then done by matching each independently and combining the results. Experiments show that the combination of the two is more powerful than either alone.

It is clear that researchers have considered a wide variety of possible filters for analyzing iris textures, including log-Gabor, Laplacian-of-Gaussian, Haar, Daubechies, discrete cosine transform, biorthogonal, and others. Considering the results reviewed here, there is no consensus on which types of filters give the best performance. Table 3 summarizes some of the varying conclusions reached in different studies. Variation in results may be due to the same general filter being used with different parameters in different studies, to the use of different image datasets, and/or to interactions with different segmentation and matching modules. We also note that even though a number of papers make experimental comparisons, very little effort is made to test the observed differences in performance for statistical significance.

7. Matching Iris Representations

Papers surveyed in this section are categorized into four subsections. First, there are a number of papers showing that performance can be improved by using multiple images to enroll an iris. Second, there are several papers suggesting that the part of the iris closer to the pupil may be more useful than the part closer to the sclera. Third, there are a few papers that look at an indexing step to select a subset of enrolled irises to match against for recognition, ignoring the others. Lastly, there are several authors who have contributed to developing a theory of decision-making in the context of binary iris codes.

7.1. Multi-Image Iris Enrollment

In biometrics in general, it has been found that using multiple samples for enrollment and comparing the probe to multiple gallery samples results in improved performance [13,19,120]. Several papers show that this is also true for iris recognition. Du [42] performs experiments using one, two, and three images to enroll a given iris. The resulting rank-one recognition rates are 98.5%, 99.5%, and 99.8%, respectively. Liu and Xie [86] present an algorithm that uses direct linear discriminant analysis (DLDA).
In testing their algorithm on 1200 images from the CASIA 2 dataset, they show that recognition performance for their algorithm increases dramatically in going from two training samples per iris to four training samples, and then incrementally from 4 to 8, and 8 to 10. They also present an experiment comparing four wavelet bases; they find little difference between them, with the Haar wavelet performing at least as well as the others.

Algorithms that use multiple training samples to enroll an image must decide how to combine the scores from multiple comparisons. In 2003, Ma et al. [91] suggested analyzing multiple images and keeping the best-quality image. In their 2004 paper [92], they state that when matching the input feature vector with the three templates of a class, the average of the three scores is taken as the final matching distance. Krichen et al. [78] represent each subject in the gallery with three images, so that for each client and for each test image, we keep the minimum value of its similarity measure to the three references [images] of the client. The use of the min operation to fuse a set of similarity scores, as opposed to the use of the average in [92] just above, is generally more appropriate when there may be large outlier-type errors in the scores. Schmid et al. [138] also assume that multiple scans of an iris are available. Their baseline form of multi-sample matching is to use the average Hamming distance. This is compared to using a log-likelihood ratio, and it is found that, in many cases, the log-likelihood ratio outperforms the average Hamming distance.

Some groups use multiple enrollment images not merely to improve performance, but because their ideas or chosen techniques require multiple images. Hollingsworth et al. [60] acquire multiple iris codes from the same eye and evaluate which bits are the most consistent bits in the iris code. They suggest masking the inconsistent bits in the iris code to improve performance. Many data mining techniques require multiple images for training a classifier. Roy and Bhattacharya [130] use six images of each iris to train a support vector machine. Thornton et al. [153,154] use a set of training images for designing a correlation filter.
They compare their method to a Gabor wavelet encoding method, PCA, and normalized correlation, and conclude that correlation filters outperform the other methods. Abhyankar et al. [1] use multi-image enrollment specifically to tackle the problem of off-angle images. They work with a dataset of 202 irises. From an original straight-on image, twenty synthetic off-angle images are generated, representing between 0 and 60 degrees off-angle. Seven of the 20 images are randomly selected and used for training a bi-orthogonal wavelet network, and the other 13 images are used for testing. It is reported that for an angle up to 42 degrees offset, all the templates were recognized correctly.
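The two fusion rules discussed above, the average of Ma et al. and the minimum of Krichen et al., are easy to state concretely. The sketch below is illustrative only; the distance values and the 0.32 accept threshold are invented numbers, not figures from these papers.

```python
def fuse_mean(distances):
    """Ma et al.-style fusion: average distance to the enrolled templates."""
    return sum(distances) / len(distances)

def fuse_min(distances):
    """Krichen et al.-style fusion: distance to the closest enrolled template."""
    return min(distances)

# Hypothetical Hamming distances from one probe to three enrolled templates
# of the same iris, where two gallery images were badly segmented (outliers).
distances = [0.11, 0.45, 0.47]
threshold = 0.32  # illustrative accept threshold

print(fuse_mean(distances) < threshold)  # False: the outliers drag the mean up
print(fuse_min(distances) < threshold)   # True: the min ignores the outliers
```

This is why the min is generally preferred when large outlier-type errors are possible, while the average can be the better choice when all enrolled samples are of comparable quality.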

Table 3
Results of Selected Filter Comparisons

First Author, Year | Operation found to perform the best | Compared to
Alim, 2004         | Discrete cosine transform           | Gabor phase coefficients
Du, 2006           | 2D log-Gabor                        | 1D log-Gabor
Krichen, 2004      | Wavelet packets                     | Gabor wavelets
Liu, 2006          | Haar, Biorthogonal-1.1              | Daubechies, Rbio3.1
Rydgren, 2004      | Wavelet packets                     | Haar, Daubechies, Coiflet, Symlet, Biorthogonal, and others
Sun, 2004          | Robust direction estimation         | Circular symmetric complex Gabor filters used by Daugman
Thornton, 2005     | Correlation filters                 | Gabor filter, quadratic spline wavelet, and discrete Haar wavelet
Thornton, 2007     | Correlation filters                 | Gabor wavelets
Yao, 2006          | Modified log-Gabor                  | 1D log-Gabor and 2D Gabor

7.2. Matching Sub-Regions of the Iris

Several authors have chosen to omit part of the iris region near the limbic boundary from their analysis [158,93,101]. The motivation may be to avoid possible occlusion by the eyelids and eyelashes, or the idea that the structures near the pupillary boundary are inherently more discriminatory. Sanchez-Reillo and Sanchez-Avila [136] detect iris boundaries using an integro-differential type operator and then divide the iris into four portions (top, bottom, left and right); the top and bottom portions are discarded due to possible occlusion. Ma et al. [94] chose a different part of the iris. They use the three-quarters of the iris region closest to the pupil. They then look at feature representation using a circular symmetric filter (CSF) which is developed on the basis of Gabor filters [94]. Du et al. [43] study the accuracy of iris recognition when only part of the image is available. With respect to the partial iris image analysis, they conclude that these experimental results support the conjecture that a more distinguishable and individually more unique signal is found in the inner rings of the iris. As one traverses to the limbic boundary of the iris, the pattern becomes less defined, and ultimately less useful in determining identity [43]. A similar paper by Du et al. [44] concludes that a partial iris image can be used for human identification using rank 5 or rank 10 systems.

Pereira et al. [116] look at using all possible combinations of five out of ten concentric bands of the iris region. They find that using the combination of bands 2, 3, 4, 5, and 7 gives the largest decidability value. The bands are numbered from the pupillary boundary out to the limbic boundary, and so the region that they find to perform well is the part close to the pupil. This analysis is done using a simple segmentation of the iris region as two circles that are not necessarily concentric. Therefore, it is possible that band 1, the innermost band, was affected by inaccuracies in the pupillary boundary, and that bands 8, 9, and 10 were affected by segmentation problems with eyelashes and eyelids. As a follow-up to this initial idea, they [117] look at dividing the iris into a greater number of concentric bands and using a genetic algorithm to determine which bands to use in the iris matching.

Proenca et al. [122] designed a recognition algorithm based on the assumption that noise (e.g. specularities, occlusion) is localized in one particular region of the iris image. Like Sanchez-Reillo and Sanchez-Avila [136], Proenca et al. divide the iris into four regions: top, bottom, left, and right. They also look at the inner half of the iris and the outer half of the iris. However, rather than simply omitting parts of the iris, they compare all six sections of the iris in a probe to the corresponding section of the iris image from the gallery to get six similarity scores. They experimentally determine six different thresholds. If one of the similarity scores is less than the smallest threshold, or if two scores are less than the second smallest threshold, etc., then the comparison is judged to be a correct match.
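This cascaded threshold rule lends itself to a compact statement: sort the six regional scores, and declare a match if at least k of them beat the k-th smallest threshold, for some k. The sketch below assumes dissimilarity scores (lower is better) and uses invented threshold values; neither detail is taken from [122].

```python
def regional_match(scores, thresholds):
    """Declare a match if, for some k, the k-th smallest of the regional
    dissimilarity scores is below the k-th smallest threshold."""
    s, t = sorted(scores), sorted(thresholds)
    return any(si < ti for si, ti in zip(s, t))

# Hypothetical thresholds for the six iris sections (top, bottom, left,
# right, inner half, outer half), from strictest to most lenient.
thresholds = [0.26, 0.28, 0.30, 0.32, 0.34, 0.36]

# A genuine comparison where heavy eyelid occlusion ruins four sections,
# but the left and right sections still match well.
scores = [0.22, 0.25, 0.46, 0.47, 0.48, 0.49]
print(regional_match(scores, thresholds))  # True: 0.22 beats the strictest threshold
```

A localized noise source can thus spoil several of the regional scores without forcing a false reject, which is precisely the assumption behind the design.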

7.3. Indexing In Recognition Matching

Several researchers have looked at possible ways of quickly screening out some iris images from passing on to a more computationally expensive matching step. Qiu et al. [127] divide irises into categories based on discriminative visual features which they call iris-textons. They use a K-means algorithm to determine which category an iris falls into, and achieve a correct classification rate of 95% into their five categories. Yu et al. [178] compute the fractal dimension of an upper region and a lower region of the iris image, and use two thresholds to classify the iris into one of four categories. Using a small number of classification rules, they are able to achieve a 98% correct classification of 872 images from 436 irises into the four categories. Ganeshan et al. [52] propose a very simple test to screen images using correlation of a Laplacian-of-Gaussian filter at four scales. They state that an intermediate step in iris identification is determination of the ratio of limbus diameter to pupil diameter for both irises. If the two irises match, the next step is determination of the correlation... Experimental results are shown for images from just two persons, and this test will likely encounter problems whenever conditions change so as to affect pupil dilation between image acquisitions. Fu et al. [50] argue for what is termed artificial color filtering. Here, artificial color is something attributed to the object using data obtained through measurements employing multiple overlapping spectral sensitivity curves. Observations at different points in the iris image are converted to a binary match/non-match of artificial color, and the number of matches is used as a measure of gross similarity. It is suggested that this approach may be useful, especially when used in conjunction with the much-better-developed spatial pattern recognition of irises. However, this approach may not be compatible with the current generation of iris imaging devices.

7.4. Statistical Analysis of Iris-Code Matching

A key concept of Daugman's approach to iris biometrics is the linking of the Hamming distance to a confidence limit for a match decision. The texture computations going into the iris code are not all statistically independent of each other. But given the Hamming distance distributions for a large number

of true matches and a large number of true non-matches, the distributions can be fit with a binomial curve to find the effective number of degrees of freedom. The effective number of degrees of freedom then allows the calculation of a confidence limit for a match of two iris codes. Daugman and Downing [36] describe an experiment to determine the statistical variability of iris patterns. Their experiment evaluates 2.3 million comparisons between different iris pairs. The mean Hamming distance between two different irises is 0.499, with a standard deviation of 0.032. This distribution closely follows a binomial distribution with 244 degrees of freedom. The distribution of Hamming distances for the comparisons between the left and right irises of the same person is found to be not statistically significantly different from the distribution of comparisons between different persons. Daugman's 2003 paper [32] presents similar results to [36], but with a larger dataset of 9.1 million iris code matches. This number of matches could derive from matching each of a set of just over 3,000 iris images against all others. The match data are shown to be fit reasonably well by a binomial distribution with p = 0.5 and 249 degrees of freedom. Figures 9 and 10 of [32] compare the performance of iris recognition under less favourable conditions (images acquired by different camera platforms) and under ideal (indeed, artificial) conditions. The important point in this comparison is that variation in camera, lighting, and camera-to-subject distance can degrade recognition performance. This supports the idea that one major research theme in iris biometrics is, or should be, the performance under less-than-ideal imaging conditions. Bolle et al. [11] approach the problem of analytic modeling of the individuality of the iris texture as a biometric.
Following on concepts developed by Daugman, they consider the probability of bit values in an iris code and the Hamming distance between iris codes to develop an analytical model of the false reject rate and false accept rate as a function of the probability p of a bit in the iris code being flipped due to noise. The model predicts that the iris FAR performance is relatively stable and is not affected by p and that the theoretical FRR accuracy performance degrades rapidly when the bit flip rate p increases. They also indicate that the FAR performance predicted by the foregoing analytical model is in excellent agreement with the empirical numbers reported by Daugman.
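The effective-degrees-of-freedom fit described above follows from equating the observed variance of fractional Hamming distances with the binomial variance p(1-p)/N. Plugging in Daugman and Downing's impostor statistics recovers their reported figure:

```python
def effective_dof(mean, std):
    """Effective number of binomial degrees of freedom N implied by the
    mean p and standard deviation of fractional Hamming distances,
    from std**2 = p * (1 - p) / N."""
    return mean * (1.0 - mean) / std ** 2

# Impostor distribution from Daugman and Downing [36]: mean 0.499, std 0.032
print(round(effective_dof(0.499, 0.032)))  # 244
```

With N in hand, a confidence limit for a match follows from the binomial tail probability of observing a Hamming distance that small between unrelated iris codes.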

Kong et al. [76] present an analysis to show that the iris code is a clustering algorithm, in the sense of using a cosine measure to assign an image patch to one of the prototypes. They propose using a finer-grain coding of the texture, and give a brief discussion of the basis for the imposter distribution being represented as binomial. There are no experimental results of image segmentation or iris matching.

8. Iris Biometrics Evaluations and Databases

There have been few publicly-accessible, large-scale evaluations of iris biometrics technology. There are, as already described, a number of papers that compare a proposed algorithm to Daugman's algorithm. However, this generally means a comparison to a particular re-implementation of Daugman's algorithm as described in his earliest publications. Thus the version of Daugman's algorithm used for comparison purposes in two different papers may not be exactly the same algorithm and may not give the same performance on the same dataset. There are also, as mentioned earlier, many research papers that compare different texture filters in a relatively controlled manner. However, the datasets used in such experiments have generally been small relative to what is needed to draw conclusions about statistical significance of observed differences, and often the experimental structure confounds issues of image segmentation and texture analysis.

As one example of a research-level comparison of algorithms, Vatsa et al. [161] implemented and compared four algorithms. They looked at Daugman's method [31]; Ma's algorithm, which uses circular symmetry filters to capture local texture information and create a feature vector [94]; Sanchez-Avila's algorithm based on zero-crossings [37]; and Tisse's algorithm, which uses emergent frequency and instantaneous phase [158].
A comparison of the four algorithms, using the CASIA 1 database, showed that Daugman's algorithm performed the best with 99.90% accuracy, then Ma's algorithm with 98.00%, Avila's with 97.89%, and Tisse's algorithm with 89.37%.

A widely-publicized evaluation of biometric technology done by the International Biometric Group in 2004 and 2005 [66] had a specific and limited focus: the scenario test evaluated enrollment and matching software from Iridian and acquisition devices from LG, OKI, and Panasonic [66]. Iris

samples were acquired from 1,224 individuals, 458 of whom participated in data acquisition again at a second session several weeks after the first. The report gives failure to enroll (FTE) rates for the three systems evaluated, where FTE was defined as the proportion of enrollment transactions in which zero [irises] were enrolled. Enrollment of one or both [irises] was considered to be a successful enrollment. The report also gives false match rates (FMR) and false non-match rates (FNMR) for enrollment with one system and recognition with the same or another system. One conclusion is that cross-device equal error rates, while higher than intra-device error rates, were robust. With respect to the errors encountered in the evaluation, it is reported that errors were not distributed evenly across test subjects. Certain test subjects were more prone than others to FTA, FTE, and genuine matching errors such as FNMR. It is also reported that one test subject was unable to enroll any [irises] whatsoever. Some of these high-level patterns in the overall results may be representative of what would happen in general application of iris biometrics.

Authenti-Corp released a report in 2007 [6] that evaluates three commercial iris recognition systems in the context of three main questions: (1) What are the realistic error rates and transaction times for various commercial iris recognition products? (2) Are ISO-standard iris images interchangeable (interoperable) between products? (3) What is the influence of off-axis user presentation on the ability of iris recognition products to acquire and recognize iris images? The experimental dataset for this report included about 29,000 images from over 250 persons.
The report includes a small, controlled off-axis experiment in addition to the main, large scenario evaluation, and notes that the current generation of iris recognition products is designed for operational scenarios where the eyes are placed in an optimal position relative to the product's camera to obtain ideal on-axis eye alignment. The data collection for the experiment includes a time lapse of up to six weeks, and the report finds that this level of time lapse does not have a measurable influence on performance. The report also notes that, across the products tested, there is a tradeoff between speed and accuracy, with higher accuracy requiring longer transaction times.

A different sort of iris technology program, the Iris Challenge Evaluation (ICE), was conducted under the auspices of the National Institute of Standards

and Technology (NIST) [110]: The ICE 2005 is a technology development project for iris recognition. The ICE 2006 is the first large-scale, open, independent technology evaluation for iris recognition. The primary goals of the ICE projects are to promote the development and advancement of iris recognition technology and assess its state-of-the-art capability. The ICE projects are open to academia, industry, and research institutes. The initial report from the ICE 2006 evaluation is now available [118], as well as results from ICE 2005 [110]. One way in which the ICE differs from other programs is that it makes source code and data sets for iris biometrics available to the research community. As part of ICE, source code for a baseline Daugman-like iris biometrics system and a dataset of approximately 3,000 iris images had been distributed to over 40 research groups by early 2006.

The ICE 2005 results that were presented in early 2006 compared self-reported results from nine different research groups [110]. Participants included groups from industry and from academia, and from several different countries. The groups that participated in ICE 2005 did not all submit descriptions of their algorithms, but presentations by three of the groups, Cambridge University, Tohoku University, and Iritech, Inc., are online at http://iris.nist.gov/ice/presentations.htm.

Iris images for the ICE program were acquired using an LG 2200 system, with the ability to save raw images that would not ordinarily pass the built-in quality checks. Thus this evaluation seeks to investigate performance using images of less-than-ideal quality. The ICE 2006 evaluation was based on 59,558 images from 240 subjects, with a time lapse of one year for some data. A large difference in execution time was observed for the iris biometrics systems participating in ICE 2006, with a factor of 50 difference in speed between the three systems whose performance is included in the report.
The ICE 2006 report is combined with the Face Recognition Vendor Test (FRVT) 2006 report, and includes face and iris results for the same set of people [118]. In [108], Newton and Phillips compare the findings of the evaluations by NIST, Authenti-Corp, and the International Biometrics Group [66,6,110]. They note that all three tests produced consistent results and demonstrate repeatability. The evaluations may have produced similar results because most of the algorithms were based on Daugman's work, and Daugman-based algorithms dominate the market. The best performers in all three evaluations

achieved a false non-match rate of about 0.01 at a false match rate of 0.001.

A new competition, the Noisy Iris Challenge Evaluation (NICE) [121], scheduled for 2008, focuses exclusively on the segmentation and noise detection stages of iris recognition. This competition will use data from a second version of the UBIRIS database. This data contains noisy images which are intended to simulate less constrained imaging environments.

8.1. Iris Image Databases

The CASIA version 1 database [17] contains 756 iris images from 108 Chinese subjects. As mentioned earlier, the images were edited to make the pupil a circular region of constant intensity. CASIA's website says, In order to protect our intellectual property rights in the design of our iris camera (especially the NIR illumination scheme), the pupil regions of all iris images in CASIA V1.0 were automatically detected and replaced with a circular region of constant intensity to mask out the specular reflections from the NIR illuminators. Some of the original unmodified images are now available, as a subset of the 22,051-image CASIA version 3 dataset. Figure 9 shows an example image pair from CASIA 1 and CASIA 3. The CASIA 3 dataset is apparently distinct from a 2,255-image dataset used in various publications by the CASIA research group [64,94,91-93,146,144].

The iris image datasets used in the Iris Challenge Evaluations (ICE) in 2005 and 2006 [110] were acquired at the University of Notre Dame, and contain iris images of a wide range of quality, including some off-axis images. The ICE 2005 database is currently available, and the larger ICE 2006 database should soon be released. One unusual aspect of these images is that the intensity values are automatically contrast-stretched by the LG 2200 to use 171 gray levels between 0 and 255. A histogram of the gray values in the image used in Figure 1 is given in Figure 10.

One common challenge that iris recognition needs to handle is that of a subject wearing contacts.
Soft contact lenses often cover the entire iris, but hard contacts are generally smaller, so the edge of the contact may obscure the texture of the iris as in Figure 11. There are some contacts that have the brand of the lens or other information printed on them. For example, Figure 12 shows a contact with a small AV (for the AccuVue brand) printed on

Fig. 8. Results of the Iris Challenge Evaluation 2005.

Fig. 9. Picture 001_1_1.bmp (left) is one of the edited images from CASIA 1. Picture S1143R01.jpg from CASIA 3 (right) is the unedited image.

it. People wearing glasses also present a challenge. Difficulties include severe specular reflections, dirt on the lenses, and optical distortion of the iris. Also, segmentation algorithms can confuse the rims of the glasses with the boundaries of the iris.

8.1.1. Synthetic Iris Images

Large iris image datasets are essential for evaluating the performance of iris biometrics systems. This issue has motivated research in generating iris images synthetically. However, the recent introduction

of datasets with thousands to tens of thousands of real iris images (e.g., [17,110,160]) may decrease the level of interest in creating and using synthetic iris image datasets. Lefohn et al. [83] consider the methods used by ocularists to make realistic-looking artificial eyes, and mimic these methods through computer graphics. Cui et al. [28] generate synthetic iris images by using principal component analysis (PCA) on a set of real iris images to generate an iris space. Wecker et al. [165] generate synthetic images through combinations of real images. As they point out, very little work has been done on verifying synthetic biometrics. Makthal and Ross [95] present an approach based on using Markov random fields and samples of multiple real iris images. Yanushkevich et al. [175] discuss synthetic iris image generation in terms of assembly approaches that involve the use of a library of primitives (e.g., collarette designs) and transformation approaches that involve deformation or rearrangement of texture information from an input iris picture. Zuo and Schmid [184] present a relatively complex model for generating synthetic iris images. It involves 3D fibers generated based on 13 parameters, projection onto a 2D image space, addition of a collarette effect, blurring, Gaussian noise, eyelids, pupil, and eyelash effects. They note that: since synthetic images are known to introduce a bias that is impossible to predict, the data have to be used with caution.

Fig. 10. Histogram of an image acquired by LG 2200, using 171 of 256 intensity levels.

Table 4
Iris Databases

Database        | Number of Irises | Number of Images | Camera Used                      | How to Obtain
CASIA 1 [17]    | 108              | 756              | CASIA camera                     | Download application: www.cbsr.ia.ac.cn/Databases.htm
CASIA 3 [17]    | 1500             | 22051            | CASIA camera & OKI irispass-h    | Download application: www.cbsr.ia.ac.cn/Databases.htm
ICE2005 [109]   | 244              | 2953             | LG2200                           | E-mail ice@nist.gov
ICE2006 [109]   | 480              | 60000            | LG2200                           | E-mail ice@nist.gov
MMU1 [103]      | 90               | 450              | LG IrisAccess                    | Download from pesona.mmu.edu.my/~ccteo/
MMU2 [103]      | 199              | 995              | Panasonic BM-ET100US Authenticam | E-mail ccteo@mmu.edu.my
UBIRIS [123]    | 241              | 1877             | Nikon E5700                      | Download from iris.di.ubi.pt
U of Bath [160] | 800              | 16000            | ISG LightWise LW-1.3-S-1394      | Fax application. See www.bath.ac.uk/eleceng/research/sipg/irisweb/
UPOL [39]       | 128              | 384              | SONY DXC-950P 3CCD               | Download from www.inf.upol.cz/iris
WVU             | 488              | 3099             | OKI irispass-h                   | E-mail arun.ross@mail.wvu.edu

Subjective examination of example synthetic images created in these various works suggests that they often seem to have a more regular iris texture than is the case with real iris images, and to lack the coarser-scale structure that appears in some real iris

Fig. 11. Image of an eye with a hard contact lens.

Fig. 12. An iris overlaid by a contact lens with printed text AV.


images. Often there are no specular highlights in the pupil region, no shadows, no highlights from inter-reflections, and effectively uniform focus across the image. Real iris images tend to exhibit all of these effects to some degree.

9. Applications and Systems

This section is divided into two subsections. The first subsection deals with various issues that arise in using iris biometrics as part of a larger system or application. Such issues include implementing iris biometrics in hardware or on a smartcard, detecting attempts to spoof an identity, and dealing with identity theft through cancelable biometrics. The second subsection covers descriptions of some commercial systems and products that use iris biometrics.

9.1. Application Implementation Issues

In any setting where security is important, there is the possibility that an imposter may try to gain unauthorized access. Consider the following type of identity theft scenario. An imposter acquires an image of Jane Doe's iris and uses the image, or the iris code extracted from the image, to be authenticated as Jane Doe. With a plain biometric system, it is impossible for Jane Doe to stop the imposter from masquerading as her. However, if a system used a cancelable iris biometric, it would be possible to revoke a previous enrollment and re-enroll a person [10]. Chong et al. [25] propose a method of creating a cancelable iris biometric. Their particular scheme works by multiplying training images with the user-specific random kernel in the frequency domain before the biometric filter is created. The random kernel could be envisioned to be provided by a smartcard or USB token that is issued at time of enrollment. An imposter could still compromise the system by obtaining an iris image and the smartcard, but the smartcard could be canceled and reissued in order to stop the identity theft. Chin et al. [24] also study cancelable biometrics, proposing a method that combines an iris code and an assigned pseudo-random number.
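One simple flavor of a cancelable template, in the spirit of, but not identical to, the schemes of Chong et al. [25] and Chin et al. [24], is to XOR the iris code with a revocable pseudo-random token held on the smartcard. Because XOR with a shared token preserves Hamming distance, matching can run entirely on protected templates. The names, sizes, and the all-zero stand-in code below are our own illustrative choices.

```python
import secrets

def protect(iris_code: bytes, token: bytes) -> bytes:
    """Bind an iris code to a revocable user token (biometric salting)."""
    return bytes(a ^ b for a, b in zip(iris_code, token))

def hamming(a: bytes, b: bytes) -> int:
    """Bit-level Hamming distance between two equal-length codes."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

token = secrets.token_bytes(256)        # issued on a smartcard at enrollment
enrolled = bytes(256)                   # stand-in enrollment code (all zero bits)
probe = bytes([0x01] * 4) + bytes(252)  # the same code with 4 flipped bits

# XOR with the same token leaves the distance unchanged, so the raw codes
# never need to be stored or transmitted during matching.
assert hamming(protect(enrolled, token), protect(probe, token)) == hamming(enrolled, probe)
print(hamming(enrolled, probe))  # 4
```

An attacker who steals only the protected template learns the code XOR token; revoking and re-issuing the token invalidates that template without requiring a new biometric.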
Another security measure that would make it more difficult for an imposter to steal an identity would be to incorporate some method of liveness detection in the system, to detect whether or not the iris being imaged is that of a real, live eye. Lee et al. [81] use collimated infrared illuminators and

take additional images to check for the specular highlights that appear in live irises. The specular highlights are those from Purkinje images that result from specular reflections from the outer surface of the cornea, the inner surface of the cornea, the outer surface of the lens, and the inner surface of the lens. Experiments are performed with thirty persons: ten without glasses or contact lenses, ten with contact lenses, and ten with glasses.

Different implementations of iris biometrics systems may require different levels of security and different hardware support. Sanchez-Reillo et al. [137] discuss the problem of adjusting the size of the iris code to correspond to a particular level of security. It is assumed that the false acceptance rate for iris biometrics is essentially zero, and that varying the length of the iris code will lead to different false rejection rates. Sanchez-Reillo [135] also discusses the development of a smart card that can perform the verification of a biometric template. Speaker recognition, hand geometry, and iris biometrics are compared in this context, and the limitations of hosting these biometrics onto an open operating system smartcard are discussed. Liu-Jimenez et al. [89,90] implement the algorithms for feature extraction and for matching two iris codes on a Field Programmable Gate Array (FPGA). It is stated that this implementation reduces the computational time by more than 200%, as the matching processes a word of the feature vector at the same time the feature extraction block is processing the following word [90].

Ives et al. [68] explore the effects of compression on iris images used for biometrics. They refer to storage of the iris image; the iris code used for recognition only requires 256 bytes of space in Daugman's algorithm. It appears that some iris images, after being compressed, may result in frequent false rejects.
The overall conclusion was that iris database storage could be reduced in size, possibly by a factor of 20 or even higher.

Qiu et al. [126] consider the problem of predicting the ethnicity of a person from their iris image. The ethnic classification considered is (Asian, non-Asian). An Adaboost ensemble, apparently using a decision tree base classifier, is able to achieve 86.5% correct classification on the test set, having selected 6 out of the 480 possible features. Thomas et al. [151] demonstrated the ability to predict a person's gender by looking at their iris. Using an ensemble of decision trees, they developed a classification model that achieved close to 80% accuracy. These works point out a possible privacy issue arising with iris biometrics, in that information might be obtained about a person other than simply whether their identity claim is true or false.

Bringer et al. [14] demonstrate a technique for using iris biometrics in a cryptography setting. Typically, cryptographic applications demand highly accurate and consistent keys, but two samples from the same biometric are rarely identical. Bringer et al. use a technique called iterative min-sum decoding for error correction on a biometric signal. They test their error-tolerant authentication method on irises from the ICE database and show that their method approaches the theoretical lower limits for the false accept and false reject rates. Hao, Anderson, and Daugman [55] also effectively used an iris biometric signal to generate a cryptographic key. Their method uses Hadamard and Reed-Solomon error correcting codes. Lee et al. [82] discuss another cryptography application of iris biometrics: fuzzy vaults. A fuzzy vault system combines cryptographic keys and biometric templates before storing the keys. In this way, an attacker cannot easily recover either the key or the biometric template. An authorized user can retrieve the key by presenting his biometric data. Several previous works used fingerprint data in fuzzy vault systems. Lee et al. proposed a method of using iris data in such a system.

9.2. Application Systems

Negin et al. [107] describe the Sensar iris biometrics products, a public-use system and a personal-use system. Both systems seem meant primarily for authentication applications. The public-use system allows the user to stand one to three feet from the system and uses several cameras and face template matching to find the user's eyes. There is also an LED used as a gaze director to focus the subject's gaze in the appropriate direction. The personal-use system uses a single camera which is to be manually positioned three to four inches from the eye. Pacut et al. [113] describe an iris biometrics system developed in Poland.
This system uses infrared illumination in image acquisition, Zak-Gabor wavelets in texture encoding, some optimization of texture features, and a Hamming distance comparison. Experiments are reported on their own iris image dataset representing 180 individuals. The particular biometric application envisioned is remote network access.

Schonberg and Kirovski [139] describe the EyeCert system, which would issue identity cards to authorized users. The barcode on the cards would store biometric information about the person's iris as well as other information such as a name, expiration date, and birth date. The system is designed to allow identity verification to be done offline, thus avoiding potential problems that would come with systems that require constant access to a centralized database. The approach uses multiple images per iris and a representative set of irises from the population to train the method, and larger training sets are likely to improve the quality of feature set extraction and compression.

Jeong et al. [72] aim to develop an iris recognition system for a mobile phone using only a built-in megapixel camera and software, without additional hardware components. This implies limited memory and the lack of a floating-point processor. The particular mobile phone used in this work is a Samsung SPH-S3200 with a 3.2 megapixel CCD sensor. Processing time for iris recognition on the mobile phone is estimated at just under 2 seconds.

In [34], Daugman describes how iris recognition is currently being used to check visitors to the United Arab Emirates against a watch-list of persons who are denied entry to the country. The UAE database contains 632,500 different iris images. In an all-against-all comparison, no false matches were found at Hamming distances below about 0.26. Daugman reports that, to date, some 47,000 persons have been caught by this iris recognition system while trying to enter the UAE under false travel documents. The Abu Dhabi Directorate of Police report that so far there have been no matches made that were not eventually confirmed by other data.

10. Medical Conditions Potentially Affecting Iris Biometrics

Many envisioned applications for iris biometrics involve serving the needs of the general public.
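The watch-list screening described above relies on the fractional Hamming distance between binary iris codes, computed only over bits that both occlusion masks mark as valid. A minimal sketch of that comparison (the 20-bit codes here are toy data for illustration; a Daugman-style iris code is 2048 bits):

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance over mutually unmasked bits."""
    valid = mask_a & mask_b                  # bits usable in both codes
    n = int(valid.sum())
    if n == 0:
        raise ValueError("no overlapping valid bits")
    disagreeing = (code_a ^ code_b) & valid  # XOR marks disagreeing bits
    return disagreeing.sum() / n

# Toy 20-bit "iris codes": b is a with 4 of 20 bits flipped.
rng = np.random.default_rng(0)
a = rng.random(20) < 0.5
b = a.copy()
b[:4] ^= True
mask = np.ones(20, dtype=bool)
print(hamming_distance(a, b, mask, mask))   # 4/20 = 0.2
```

Codes from different eyes produce distances near 0.5 (each bit agrees by chance), so a decision threshold such as the 0.26 quoted above separates genuine from impostor comparisons.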
If biometrics are used to access government benefits, enhance airline security by verifying traveler identity, or ensure against fraudulent voting in elections, it is important that the technology does not disadvantage any subset of the public. There are several ways in which disadvantage might be created. One is that there may be some subset of the public that cannot easily be enrolled into the biometric system because the biometric is not, for this subset of people, sufficiently unique. Another is that there may be some subset of the public that cannot easily use the system on an ongoing basis because the biometric is not, for this subset of people, sufficiently stable. There are various medical conditions that may result in such problems.

A cataract is a clouding of the lens, the part of the eye responsible for focusing light and producing clear, sharp images. Cataracts are a natural result of aging: about 50% of people aged 65-74 and about 70% of those 75 and older have visually significant cataracts [99]. Eye injuries, certain medications, and diseases such as diabetes and alcoholism have also been known to cause cataracts. Cataracts can be removed through surgery. Roizenblatt et al. [128] study how cataract surgery affects the image of the iris. They captured iris images from 55 patients. Each patient had his or her eye photographed three times before cataract surgery and three times after the surgery. The surgery was performed by second-year residents in their first semester of phacoemulsification training. Phacoemulsification is a common method of cataract removal. At a threshold Hamming distance of 0.4, which is higher than that used in most systems, six of the 55 patients were no longer recognized after the surgery. They conclude that patients who have cataract surgery may be advised to re-enroll in iris biometric systems.

Glaucoma refers to a group of diseases that reduce vision. The main types of glaucoma are marked by an increase of pressure inside the eye. Pressure in the eye can cause optic nerve damage and vision loss. Glaucoma generally occurs with increased incidence as people age. It is also more common among people of African descent, and in conjunction with other conditions such as diabetes [100]. A 2005 European Commission report [46] states that "it has been shown that glaucoma can cause iris recognition to fail as it creates spots on the person's iris."
Thus a person with glaucoma might be enrolled into an iris biometrics system, use it successfully for some time, have their glaucoma condition advance, and then find that the system no longer recognizes them.

Two conditions that relate to eye movement are nystagmus and strabismus. Strabismus, more commonly known as being cross-eyed or wall-eyed, is a vision condition in which a person cannot align both eyes simultaneously under normal conditions [111]. Nystagmus involves an involuntary rhythmic oscillation of one or both eyes, which may be accompanied by tilting of the head. One article suggests that an identification system could accommodate people with nystagmus if the system had an effective method of correcting for tilted and off-axis images [59]. The system would probably need to work well on partially blurred images as well.

Albinism is a genetic condition that results in the partial or full absence of pigment (color) from the skin, hair, and eyes [98]. Iris patterns imaged using infrared illumination reflect physical iris structures such as collagen fibers and vasculature, rather than the pigmentation, so a lack of pigment alone should not cause a problem for iris recognition. However, the conditions of nystagmus and strabismus, mentioned above, are associated with albinism. Approximately 1 in 17,000 people are affected by albinism.

Another relevant medical condition is aniridia, which is caused by a deletion on chromosome 11 [5]. In this condition, the person is effectively born without an iris, or with only a partial iris. The pupil and the sclera are present and visible, but there is no substantial iris region. Persons with this condition would likely find that they could not enroll in an iris biometrics system. Aniridia is estimated to have an incidence of between 1 in 50,000 and 1 in 100,000. This may seem rare, but if the population of the United States is 300 million persons, then there would be on the order of 4,000 citizens with aniridia.

As the examples above illustrate, there are substantial segments of the general public who may potentially be disadvantaged in the deployment of iris biometrics on a national scale. This is a problem that has to date received little attention in the biometrics research community. It could partially be addressed by using multiple biometric modes [13].

11. Conclusions

The literature relevant to iris biometrics is large, growing rapidly, and spread across a wide variety of sources. This survey suggests a structure for the iris biometrics literature and summarizes the current state of the art.
There are still a number of active research topics within iris biometrics. Many of these are related to the desire to make iris recognition practical in less-controlled conditions. More research should be done to see how recognition could be improved for people wearing glasses or contact lenses. Another area that has not yet received much attention is how to combine multiple images or use multiple biometrics (e.g., face and iris recognition) to improve performance. The efficiency of matching algorithms will also become more important as iris biometrics is deployed in recognition applications for large populations.

Fig. 13. Aniridia is the condition of not having an iris [132].

11.1. A Short Recommended Reading List

Because the iris biometrics literature is so large, we suggest a short list of papers that a person who is new to the field might read in order to gain a more detailed understanding of some major issues. We do not mean to identify these papers as necessarily the most important contributions in a specific technical sense, but they are readable papers that illustrate major issues or directions.

The place to start is with a description of Daugman's original approach to iris recognition. Because his work was the first to describe a specific implementation of iris recognition, and also because of the elegance of his view of comparing iris codes as a test of statistical independence, the Daugman approach is the standard reference point. If the reader is most interested in how Daugman initially presented his ideas, we suggest his 1993 paper [29]. For

a more recent, readable overview of Daugman's approach, we recommend [33]. In the past decade, Daugman has modified and improved his recognition algorithms. A recent paper [35] presents alternative methods of segmentation based on active contours, a way to transform an off-angle iris image into a more frontal view, and a new score normalization scheme for computing Hamming distance that accounts for the total amount of unmasked data available in the comparison.

Because Daugman's approach has been so central, it is perhaps important to understand that it is, at least in principle, just one specific technical approach among a variety of possibilities. The iris biometrics approach developed at Sarnoff Labs was intentionally designed to be technically distinct from Daugman's approach. To understand this system as one that makes a different technical choice at each step, we recommend the paper by Wildes [168].

One of the major current practical limitations of iris biometrics is the degree of cooperation required on the part of the person whose image is to be acquired. As described in earlier sections, there have been a variety of research efforts aimed at acquiring images in a more flexible manner and/or being able to use images of more widely varying quality. As an example of a very ambitious attempt at acquiring usable iris images in a more practical application scenario, we recommend a paper describing the Iris on the Move project at Sarnoff Labs [97]. Reported performance rates vary dramatically depending on the quality of the available images. Although there is no generally accepted measure of image quality, Chen et al. [22] present one possible metric, and suggest the potentially important point that image quality may vary from region to region in the image.

It is natural to ask about the current state of the art in iris biometrics performance. A reader interested in an experimental comparison of different segmentation methods could read the paper by Proenca et al. [124]. Vatsa et al. [161] present a comparison of different texture extraction methods. The most ambitious effort in evaluating the current state of the art is certainly the Iris Challenge Evaluation (ICE) program sponsored by various U.S. government agencies. For an overview of the current state of the art in iris biometrics, we recommend reading the report available on the ICE web site [118].

The public should naturally be concerned about systems that plan to collect and store personally identifying information. For an introduction to some of the privacy and security issues inherent in biometric systems, we recommend a paper by Bolle et al. [10]. This paper considers replay attacks and brute-force attacks, and introduces the idea of a cancelable biometric. It is worth reading in order to understand the motivation and basic conceptual design of cancelable biometrics.

It is important to appreciate the potential limitations of iris biometrics in terms of subgroups of people that could potentially be disadvantaged. There has been little serious study of groups of people whose use of iris biometrics might be complicated by a medical condition or disability. As an example of a simple study demonstrating that iris texture can potentially change as a result of cataract surgery, we recommend [128]. This paper is, obviously, not about the technology of iris biometrics, but it illustrates how complications may arise in applying iris biometrics to entire populations of people.
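The score normalization introduced in [35] rescales a raw Hamming distance toward 0.5 when few unmasked bits were compared, so that comparisons based on little data are trusted less. A sketch of the idea; the form HD_norm = 0.5 - (0.5 - HD_raw)*sqrt(n/911) and the constant 911 (a typical number of bits compared) follow our reading of Daugman's description and should be treated as illustrative:

```python
import math

def normalized_hd(raw_hd, n_bits, typical_bits=911):
    """Pull a raw Hamming distance toward 0.5 when it was computed
    from fewer than the typical number of unmasked bits.
    typical_bits=911 is an assumed default, not a universal constant."""
    return 0.5 - (0.5 - raw_hd) * math.sqrt(n_bits / typical_bits)

# A raw distance of 0.30 from all 911 typical bits is unchanged...
print(round(normalized_hd(0.30, 911), 3))   # 0.3
# ...but the same raw distance from only 300 bits moves toward 0.5,
# reflecting the lower confidence of a heavily masked comparison.
print(round(normalized_hd(0.30, 300), 3))   # 0.385
```

The effect is that a low score earned from a small overlap of valid bits no longer looks as convincing as the same score earned from a full comparison.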

12. Acknowledgements

This effort is sponsored in whole or in part by the Central Intelligence Agency, the National Science Foundation under grant CNS 01-30839, and the National Geospatial-Intelligence Agency. The opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the sponsors. Figure 9 in this paper includes images collected by the Chinese Academy of Sciences Institute of Automation (CASIA). We would like to thank Jim Wayman of San Jose State University, Eliza Du of Indiana University Purdue University at Indianapolis, Fred Wheeler of GE Research, Dale Rickman of Raytheon Corporation, and the anonymous reviewers for their comments, suggestions, and additions to this paper.

References
[1] Aditya Abhyankar, Lawrence Hornak, and Stephanie Schuckers. Off-angle iris recognition using bi-orthogonal wavelet network system. In Fourth IEEE Workshop on Automatic Identification Technologies, pages 239-244, October 2005.
[2] Aditya Abhyankar and Stephanie Schuckers. Active shape models for effective iris segmentation. In SPIE 6202: Biometric Technology for Human Identification III, pages 6202:H1-H10, 2006.
[3] Ramzi Abiantun, Marios Savvides, and Pradeep Khosla. Automatic eye-level height system for face and iris recognition systems. In Fourth IEEE Workshop on Automatic Identification Technologies, pages 155-159, October 2005.
[4] Ons Abdel Alim and Maha Sharkas. Iris recognition using discrete wavelet transform and artificial neural net. In Int. Midwest Symp. on Circuits and Systems, pages I:337-340, December 2003.
[5] Aniridia Foundation International. What is aniridia? http://www.aniridia.net/what is.htm, accessed October 2006.
[6] Authenti-Corp. Draft final report: Iris recognition study 2006. Technical report, 31 March 2007. Version 0.40.
[7] Asheer Kasar Bachoo and Jules-Raymond Tapamo. Texture detection for segmentation of iris images. In Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists, pages 236-243, 2005.
[8] A. Bertillon. La couleur de l'iris. Revue scientifique, 36(3):65-73, 1885.
[9] Wageeh Boles and Boualem Boashash. A human identification technique using images of the iris and wavelet transform. IEEE Transactions on Signal Processing, 46(4):1185-1188, April 1998.

[10] Ruud M. Bolle, Jonathon H. Connell, and Nalini K. Ratha. Biometrics perils and patches. Pattern Recognition, 35:2727-2738, 2002.
[11] Ruud M. Bolle, Sharath Pankanti, Jonathon H. Connell, and Nalini Ratha. Iris individuality: A partial iris model. In Int. Conf. on Pattern Recognition, pages II:927-930, 2004.
[12] Bradford Bonney, Robert Ives, Delores Etter, and Yingzi Du. Iris pattern extraction using bit planes and standard deviations. In Thirty-Eighth Asilomar Conference on Signals, Systems, and Computers, volume 1, pages 582-586, November 2004.
[13] Kevin W. Bowyer, Kyong I. Chang, Ping Yan, Patrick J. Flynn, Earnie Hansley, and Sudeep Sarkar. Multi-modal biometrics: an overview. In Second Workshop on Multi-Modal User Authentication, May 2006.
[14] J. Bringer, H. Chabanne, G. Cohen, B. Kindarji, and G. Zémor. Optimal iris fuzzy sketches. In Biometrics: Theory, Applications, and Systems, Sept 2007.
[15] Ted Camus, Ulf Cahn von Seelen, Guanghua Zhang, Peter Venetianer, and Marcos Salganicoff. Sensar...Secure™ iris identification system. In IEEE Workshop on Applications of Computer Vision, pages 1-2, 1998.
[16] Theodore A. Camus and Richard P. Wildes. Reliable and fast eye finding in close-up images. In Int. Conf. on Pattern Recognition, pages 389-394, 2002.
[17] CASIA iris image database. http://www.cbsr.ia.ac.cn/Databases.htm.
[18] Note on CASIA-IrisV3. http://www.cbsr.ia.ac.cn/IrisDatabase.htm.
[19] Kyong I. Chang, Kevin W. Bowyer, and Patrick J. Flynn. An evaluation of multi-modal 2D+3D face biometrics. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 619-624, April 2005.
[20] Ching-Han Chen and Chia Te Chu. Low complexity iris recognition based on wavelet probabilistic neural networks. In Int. Joint Conference on Neural Networks, pages 1930-1935, 2005.
[21] Wen-Shiung Chen, Kun-Huei Chih, Sheng-Wen Shih, and Chih-Ming Hsieh. Personal identification technique based on human iris recognition with wavelet transform. In Int. Conf. on Acoustics, Speech, and Signal Processing, pages II:949-952, 2005.
[22] Yi Chen, Sarat C. Dass, and Anil K. Jain. Localized iris image quality using 2-D wavelets. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 373-381, 2006.
[23] Lu Chenhong and Lu Zhaoyang. Efficient iris recognition by computing discriminable textons. In Int. Conf. on Neural Networks and Brain, volume 2, pages 1164-1167, October 2005.
[24] Chong Siew Chin, Andrew Teoh Beng Jin, and David Ngo Chek Ling. Personal security iris verification system based on random secret integration. Computer Vision and Image Understanding, 102:169-177, 2006.
[25] Siew Chin Chong, Andrew Beng Jin Teoh, and David Chek Ling Ngo. Iris authentication using privatized advanced correlation filter. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 382-388, January 2006.
[26] Chia-Te Chou, Sheng-Wen Shih, Wen-Shiung Chen, and Victor W. Cheng. Iris recognition with multi-scale edge-type matching. In Int. Conf. on Pattern Recognition, pages 545-548, August 2006.
[27] Chia Te Chu and Ching-Han Chen. High performance iris recognition based on LDA and LPCC. In 17th Int. Conference on Tools with Artificial Intelligence, pages 417-421, 2005.
[28] Jiali Cui, Yunhong Wang, Junzhou Huang, Tieniu Tan, and Zhenan Sun. An iris image synthesis method based on PCA and super-resolution. In Int. Conf. on Pattern Recognition, pages IV:471-474, 2004.
[29] John Daugman. High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(11):1148-1161, November 1993.
[30] John Daugman. Biometric personal identification system based on iris analysis. U.S. Patent No. 5,291,560, March 1994.
[31] John Daugman. Statistical richness of visual phase information: Update on recognizing persons by iris patterns. Int. Journal of Computer Vision, 45(1):25-38, 2001.
[32] John Daugman. The importance of being random: statistical principles of iris recognition. Pattern Recognition, 36:279-291, 2003.
[33] John Daugman. How iris recognition works. IEEE Transactions on Circuits and Systems for Video Technology, 14(1):21-30, 2004.
[34] John Daugman. Probing the uniqueness and randomness of iris codes: Results from 200 billion iris pair comparisons. Proceedings of the IEEE, 94(11):1927-1935, 2006.
[35] John Daugman. New methods in iris recognition. IEEE Transactions on Systems, Man and Cybernetics - B, to appear.
[36] John Daugman and Cathryn Downing. Epigenetic randomness, complexity and singularity of human iris patterns. Proceedings of the Royal Society of London - B, 268:1737-1740, 2001.
[37] D. de Martin-Roche, Carmen Sanchez-Avila, and Raul Sanchez-Reillo. Iris recognition for biometric identification using dyadic wavelet transform zero-crossing. In IEEE Int. Carnahan Conf. on Security Technology, pages 272-277, 2001.
[38] D. de Martin-Roche, Carmen Sanchez-Avila, and Raul Sanchez-Reillo. Iris-based biometric recognition using dyadic wavelet transform. IEEE Aerospace and Electronic Systems Magazine, 17(10):3-6, 2002.
[39] Michal Dobes and Libor Machala. Iris database. http://www.inf.upol.cz/iris/.
[40] Vivekanand Dorairaj, Natalia A. Schmid, and Gamal Fahmy. Performance evaluation of iris based recognition system implementing PCA and ICA encoding techniques. In SPIE 5779: Biometric Technology for Human Identification II, volume 5779, pages 51-58, 2005.
[41] Vivekanand Dorairaj, Natalia A. Schmid, and Gamal Fahmy. Performance evaluation of non-ideal iris based recognition system implementing global ICA encoding. In Int. Conf. on Image Processing, pages II:285-288, 2005.
[42] Yingzi Du. Using 2D log-Gabor spatial filters for iris recognition. In SPIE 6202: Biometric Technology for Human Identification III, pages 62020:F1-F8, 2006.

[43] Yingzi Du, Bradford Bonney, Robert Ives, Delores Etter, and Robert Schultz. Analysis of partial iris recognition using a 1-D approach. In Int. Conf. on Acoustics, Speech, and Signal Processing, volume 2, pages ii:961-964, 2005.
[44] Yingzi Du, Robert Ives, Bradford Bonney, and Delores Etter. Analysis of partial iris recognition. In SPIE 5779: Biometric Technology for Human Identification II, volume 5779, pages 31-40, 2005.
[45] Hazem M. El-Bakry. Fast iris detection for personal identification using modular neural networks. In IEEE Int. Symp. on Circuits and Systems, pages 52-55, 2001.
[46] European Commission. Biometrics at the frontiers: Assessing the impact on society. Institute for Prospective Technological Studies, Technical Report EUR 21585 EN (European Commission Director-General Joint Research Centre), February 2005.
[47] Craig Fancourt, Luca Bogoni, Keith Hanna, Yanlin Guo, Richard Wildes, Naomi Takahashi, and Uday Jain. Iris recognition at a distance. In Int. Conf. on Audio- and Video-Based Biometric Person Authentication, pages 1-13, 2005.
[48] Xinhua Feng, Chi Fang, Xiaoqing Ding, and Youshou Wu. Iris localization with dual coarse-to-fine strategy. In Int. Conf. on Pattern Recognition, pages 553-556, August 2006.
[49] Leonard Flom and Aran Safir. Iris recognition system. U.S. Patent 4,641,349, 1987.
[50] Jian Fu, H. John Caulfield, Seong-Moo Yoo, and Venkata Atluri. Use of artificial color filtering to improve iris recognition and searching. Pattern Recognition Letters, 26(2):2244-2251, 2005.
[51] Junying Gan and Yu Liang. Applications of wavelet packets decomposition in iris recognition. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 443-449, January 2006.
[52] Balaji Ganeshan, Dhananjay Theckedath, Rupert Young, and Chris Chatwin. Biometric iris recognition system using a fast and robust iris localization and alignment procedure. Optics and Lasers in Engineering, 44:1-24, 2006.
[53] Hong-ying Gu, Yue-ting Zhuang, and Yun-he Pan. An iris recognition method based on multi-orientation features and non-symmetrical SVM. Journal of Zhejiang University Science, 6A(4):428-432, 2005.
[54] K. Hanna, R. Mandelbaum, D. Mishra, V. Paragano, and L. Wixson. A system for non-intrusive human iris acquisition and identification. In IAPR Workshop on Machine Vision Applications, pages 200-203, November 1996.
[55] Feng Hao, Ross Anderson, and John Daugman. Combining crypto with biometrics effectively. IEEE Transactions on Computers, 55(9):1081-1088, September 2006.
[56] XiaoFu He and PengFei Shi. A novel iris segmentation method for hand-held capture device. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 479-485, January 2006.
[57] Yuqing He, Jiali Cui, Tieniu Tan, and Yangsheng Wang. Key techniques and methods for imaging iris in focus. In Int. Conf. on Pattern Recognition, pages 557-561, August 2006.
[58] Zhaofeng He, Tieniu Tan, and Zhenan Sun. Iris localization via pulling and pushing. In Int. Conf. on Pattern Recognition, pages 366-369, August 2006.
[59] Austin Hicklin and Rajiv Khanna. The role of data quality in biometric systems. Mitretek Systems, 2006. http://www.noblis.org/BusinessAreas/Role of Data Quality Final.pdf.
[60] Karen Hollingsworth, Kevin Bowyer, and Patrick Flynn. All iris code bits are not created equal. In Biometrics: Theory, Applications, and Systems, Sept 2007.
[61] S. Mahdi Hosseini, Babak N. Araabi, and Hamid Soltanian-Zadeh. Shape analysis of stroma for iris recognition. In Springer LNCS 4642: Int. Conf. on Biometrics, pages 790-799, Aug 2007.
[62] Huifang Huang and Guangshu Hu. Iris recognition based on adjustable scale wavelet transform. In Int. Conf. on Engineering in Medicine and Biology, pages 7533-7536, 2005.
[63] Junzhou Huang, Yunhong Wang, Jiali Cui, and Tieniu Tan. Noise removal and impainting model for iris image. In Int. Conf. on Image Processing, pages 2:869-872, October 2004.
[64] Junzhou Huang, Yunhong Wang, Tieniu Tan, and Jiali Cui. A new iris segmentation method for recognition. In Int. Conf. on Pattern Recognition, pages III:554-557, 2004.
[65] Ya-ping Huang, Si-wei Luo, and En-yi Chen. An efficient iris recognition system. In Int. Conf. of Machine Learning and Cybernetics, volume 1, pages 450-454, November 2002.
[66] International Biometric Group. Independent testing of iris recognition technology: Final report. May 2005. http://www.biometricgroup.com/reports/public/reports/ITIRT report.htm.
[67] ISO/IEC Standard 19794-6. Information technology - biometric data interchange formats, part 6: Iris image data. Technical report, International Standards Organization, 2005.
[68] Robert W. Ives, Bradford L. Bonney, and Delores M. Etter. Effect of image compression on iris recognition. In Instrument and Measurement Technology Conference, pages 2054-2058, 2005.
[69] Robert W. Ives, Anthony J. Guidry, and Delores M. Etter. Iris recognition using histogram analysis. In Thirty-Eighth Asilomar Conference on Signals, Systems, and Computers, pages I:562-566, November 2004.
[70] Jain Jang, Kang Ryoung Park, Jinho Son, and Yillbyung Lee. A study on multi-unit iris recognition. In Int. Conf. on Control, Automation, Robotics and Vision, volume 2, pages 1244-1249, Dec 2004.
[71] Ray A. Jarvis. Focus optimization criteria for computer image processing. Microscope, 24(2):163-180, 1976.
[72] Dae Sik Jeong, Hyun-Ae Park, Kang Ryoung Park, and Jaihie Kim. Iris recognition in mobile phone based on adaptive Gabor filter. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 457-463, January 2006.
[73] R. Johnston. Can iris patterns be used to identify people? Los Alamos National Laboratory, Chemical and Laser Sciences Division Annual Report LA-12331-PR, pages 81-86, June 1992.

[74] Nathan D. Kalka, Jinyu Zuo, Natalia A. Schmid, and Bojan Cukic. Image quality assessment for iris biometric. In SPIE 6202: Biometric Technology for Human Identification III, pages 6202:D1-D11, 2006.
[75] Byung Jun Kang and Kang Ryoung Park. A study on iris image restoration. In Int. Conf. on Audio- and Video-Based Biometric Person Authentication, pages 31-40, 2005.
[76] Adams Kong, David Zhang, and Mohamed Kamel. An anatomy of iriscode for precise phase representation. In Int. Conf. on Pattern Recognition, pages 429-432, August 2006.
[77] Wai-Kin Kong and David Zhang. Detecting eyelash and reflection for accurate iris segmentation. Int. Journal of Pattern Recognition and Artificial Intelligence, 17(6):1025-1034, 2003.
[78] Emine Krichen, Lorène Allano, Sonia Garcia-Salicetti, and Bernadette Dorizzi. Specific texture analysis for iris recognition. In LNCS 3546: Int. Conf. on Audio- and Video-Based Biometric Person Authentication, pages 23-30, 2005.
[79] Emine Krichen, Sonia Garcia-Salicetti, and Bernadette Dorizzi. A new probabilistic iris quality measure for comprehensive noise detection. In Biometrics: Theory, Applications, and Systems, Sept 2007.
[80] Emine Krichen, M. Anouar Mellakh, Sonia Garcia-Salicetti, and Bernadette Dorizzi. Iris identification using wavelet packets. In Int. Conf. on Pattern Recognition, pages IV:335-338, 2004.
[81] Eui Chul Lee, Kang Ryoung Park, and Jaihie Kim. Fake iris detection by using Purkinje image. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 397-403, January 2006.
[82] Youn Joo Lee, Kwanghyuk Bae, Sung Joo Lee, Kang Ryoung Park, and Jaihie Kim. Biometric key binding: Fuzzy vault based on iris images. In Springer LNCS 4642: Int. Conf. on Biometrics, pages 800-808, Aug 2007.
[83] Aaron Lefohn, Brian Budge, Peter Shirley, Richard Caruso, and Erik Reinhard. An ocularist's approach to human iris synthesis. IEEE Computer Graphics and Applications, pages 70-75, November/December 2003.
[84] Xin Li. Modeling intra-class variation for non-ideal iris recognition. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 419-427, January 2006.
[85] Pan Lili and Xie Mei. The algorithm of iris image processing. In Fourth IEEE Workshop on Automatic Identification Technologies, pages 134-138, October 2005.
[86] Chengqiang Liu and Mei Xie. Iris recognition based on DLDA. In Int. Conf. on Pattern Recognition, pages 489-492, August 2006.
[87] Xiaomei Liu, Kevin W. Bowyer, and Patrick J. Flynn. Experiments with an improved iris segmentation algorithm. In Fourth IEEE Workshop on Automatic Identification Technologies, pages 118-123, October 2005.
[88] Yuanning Liu, Senmiao Yuan, Xiaodong Zhu, and Qingliang Cui. A practical iris acquisition system and a fast edges locating algorithm in iris recognition. In IEEE Instrumentation and Measurement Technology Conference, pages 166-168, 2003.
[89] Judith Liu-Jimenez, Raul Sanchez-Reillo, and Carmen Sanchez-Avila. Biometric co-processor for an authentication system using iris biometrics. In IEEE Int. Carnahan Conf. on Security Technology, pages 131-135, 2004.
[90] Judith Liu-Jimenez, Raul Sanchez-Reillo, and Carmen Sanchez-Avila. Full hardware solution for processing iris biometrics. In IEEE Int. Carnahan Conf. on Security Technology, pages 157-163, October 2005.
[91] Li Ma, Tieniu Tan, Yunhong Wang, and Dexin Zhang. Personal identification based on iris texture analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12):1519-1533, 2003.
[92] Li Ma, Tieniu Tan, Yunhong Wang, and Dexin Zhang. Efficient iris recognition by characterizing key local variations. IEEE Transactions on Image Processing, 13(6):739-750, June 2004.
[93] Li Ma, Tieniu Tan, Yunhong Wang, and Dexin Zhang. Local intensity variation analysis for iris recognition. Pattern Recognition, 37(6):1287-1298, Feb 2004.
[94] Li Ma, Yunhong Wang, and Tieniu Tan. Iris recognition using circular symmetric filters. In Int. Conf. on Pattern Recognition, pages II:414-417, 2002.
[95] Sarvesh Makthal and Arun Ross. Synthesis of iris images using Markov random fields. In 13th European Signal Processing Conference, September 2005.
[96] Libor Masek. Recognition of human iris patterns for biometric identification. Master's thesis, University of Western Australia, 2003. http://www.csse.uwa.edu.au/~pk/studentprojects/libor/.
[97] J. R. Matey, O. Naroditsky, K. Hanna, R. Kolczynski, D. LoIacono, S. Mangru, M. Tinker, T. Zappia, and W. Y. Zhao. Iris on the Move™: Acquisition of images for iris recognition in less constrained environments. Proceedings of the IEEE, 94(11):1936-1946, 2006.
[98] US NLM/NIH Medline Plus. Albinism. http://www.nlm.nih.gov/medlineplus/ency/article/001479.htm, accessed January 2007.
[99] US NLM/NIH Medline Plus. Cataract. http://www.nlm.nih.gov/medlineplus/ency/article/001001.htm, accessed October 2006.
[100] US NLM/NIH Medline Plus. Glaucoma. http://www.nlm.nih.gov/medlineplus/ency/article/001620.htm, accessed October 2006.
[101] Kazuyuki Miyazawa, Koichi Ito, Takafumi Aoki, Koji Kobayashi, and Hiroshi Nakajima. An efficient iris recognition algorithm using phase-based image matching. In Int. Conf. on Image Processing, pages II:49-52, 2005.
[102] Donald Monro, Soumyadip Rakshit, and Dexin Zhang. DCT-based iris recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4):586-595, April 2007.
[103] Multimedia University iris database. http://pesona.mmu.edu.my/~ccteo/.
[104] Karthik Nandakumar, Yi Chen, Anil K. Jain, and Sarat Dass. Quality-based score level fusion in multibiometric systems. In Int. Conf. on Pattern Recognition, pages 473-476, August 2006.
[105] Ramkumar Narayanswamy and Paulo E. X. Silveira. Iris recognition at a distance with expanded imaging
volume. In SPIE 5779: Biometric Technology for Human Identification II, volume 5779, pages 41-50, 2005.
[106] Ramkumar Narayanswamy and Paulo E. X. Silveira. Iris recognition at a distance with expanded imaging volume. In SPIE 6202: Biometric Technology for Human Identification III, pages 62020:G1-G11, 2006.
[107] Michael Negin, Thomas A. Chmielewski Jr., Marcos Salganicoff, Theodore A. Camus, Ulf M. Cahn von Seelen, Peter L. Venetianer, and Guanghua G. Zhang. An iris biometric system for public and personal use. Computer, 33(2):70-75, February 2000.
[108] Elaine Newton and P. Jonathon Phillips. Meta-analysis of third party evaluations of iris recognition. In Biometrics: Theory, Applications, and Systems, Sept 2007.
[109] National Institute of Standards and Technology. Iris challenge evaluation data. http://iris.nist.gov/ice/.
[110] National Institute of Standards and Technology. Iris challenge evaluation 2005 workshop presentations. http://iris.nist.gov/ice/presentations.htm.
[111] Optometrists Network. Strabismus. http://www.strabismus.org/, accessed November 2006.
[112] Clyde Oyster. The Human Eye: Structure and Function. Sinauer Associates, 1999.
[113] Andrzej Pacut, Adam Czajka, and Przemek Strzelczyk. Iris biometrics for secure remote access. In Cyberspace Security and Defense: Research Issues, NATO Science Series II: Mathematics, Physics and Chemistry, volume 196. Springer, 2004.
[114] Chul-Hyun Park and Joon-Jae Lee. Extracting and combining multimodal directional iris features. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 389-396, January 2006.
[115] Kang Ryoung Park and Jaihie Kim. A real-time focusing algorithm for iris recognition camera. IEEE Transactions on Systems, Man, and Cybernetics, 35(3):441-444, August 2005.
[116] Milena Bueno Pereira and Antonio Claudio Veiga. A method for improving the reliability of an iris recognition system. In IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, pages 665-668, 2005.
[117] Milena Bueno Pereira and Antonio Claudio Paschoarelli Veiga. Application of genetic algorithms to improve the reliability of an iris recognition system. In IEEE Workshop on Machine Learning for Signal Processing, pages 159-164, September 2005.
[118] P. J. Phillips, W. T. Scruggs, A. J. O'Toole, P. J. Flynn, K. W. Bowyer, C. L. Schott, and M. Sharpe. FRVT 2006 and ICE 2006 large-scale results. Technical report, National Institute of Standards and Technology, NISTIR 7408, March 2007. http://iris.nist.gov/ice.
[119] P. Jonathon Phillips, Kevin W. Bowyer, and Patrick J. Flynn. Comments on the CASIA version 1.0 iris dataset. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(10), Oct 2007.
[120] P. Jonathon Phillips, Patrick J. Flynn, Todd Scruggs, Kevin W. Bowyer, and William Worek. Preliminary Face Recognition Grand Challenge results. In Int. Conf. on Automatic Face and Gesture Recognition (FG 2006), April 2006.
[121] Hugo Proença and Luís Alexandre. The NICE.I: Noisy iris challenge evaluation - part I. In Biometrics: Theory, Applications, and Systems, Sept 2007.
[122] Hugo Proença and Luís Alexandre. Toward noncooperative iris recognition: A classification approach using multiple signatures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4):607-612, April 2007.
[123] Hugo Proença and Luís A. Alexandre. UBIRIS: A noisy iris image database. http://iris.di.ubi.pt/.
[124] Hugo Proença and Luís A. Alexandre. Iris segmentation methodology for non-cooperative recognition. In IEE Proceedings on Vision, Image and Signal Processing, volume 153, pages 199-205, April 2006.
[125] Hugo Proença and Luís A. Alexandre. A method for the identification of noisy regions in normalized iris images. In Int. Conf. on Pattern Recognition, pages 405-408, August 2006.
[126] Xianchao Qiu, Zhenan Sun, and Tieniu Tan. Global texture analysis of iris images for ethnic classification. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 411-418, January 2006.
[127] Xianchao Qiu, Zhenan Sun, and Tieniu Tan. Coarse iris classification by learned visual dictionary. In Springer LNCS 4642: Int. Conf. on Biometrics, pages 770-779, Aug 2007.
[128] Roberto Roizenblatt, Paulo Schor, Fabio Dante, Jaime Roizenblatt, and Rubens Belfort Jr. Iris recognition as a biometric method after cataract surgery. Biomedical Engineering Online, 3(1):2, January 2004.
[129] Florence Rossant, Manuel Torres Eslave, Thomas Ea, Frederic Amiel, and Amara Amara. Iris identification and robustness evaluation of a wavelet packets based algorithm. In Int. Conf. on Image Processing, pages 2589-2594, September 2005.
[130] Kaushik Roy and Prabir Bhattacharya. Iris recognition with support vector machines. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 486-492, January 2006.
[131] Erik Rydgren, Thomas Ea, Frederic Amiel, Florence Rossant, Amara Amara, and Carmen Sanchez-Reillo.
Iris features extraction using wavelet packets. In Int. Conf. on Image Processing, pages II:861864, 2004. Quratulain Saeed. Aniridia. http://www.biology.iupui.edu/biocourses/ Biol540H/AniridiaSQ.html, 2003. Accessed October 2006. Carmen Sanchez-Avila and Raul Sanchez-Reillo. Multiscale analysis for iris biometrics. In IEEE Int. Carnahan Conf. on Security Technology, pages 3538, 2002. Carmen Sanchez-Avila and Raul Sanchez-Reillo. Two dierent approaches for iris recognition using gabor lters and multiscale zero-crossing representation. Pattern Recognition, 38(2):231240, February 2005. Raul Sanchez-Reillo. Smart card information and operations using biometrics. In IEEE AESS Systems Magazine, pages 36, April 2001. Raul Sanchez-Reillo and Carmen Sanchez-Avila. Iris recognition with low template size. In Int.

35

[137]

[138]

[139]

[140]

[141]

[142]

[143]

[144]

[145]

[146]

[147]

[148]

[149]

[150]

[151]

Conf. on Audio- and Video-Based Biometric Person Authentication, pages 324329, 2001. Raul Sanchez-Reillo, Carmen Sanchez-Avila, and Jose A. Martin-Pereda. Minimal template size for iris recognition. In First Joint BMES/EMBS Conference: Serving Humanity, Advancing Technology, page 972, 1999. Natalia A. Schmid, Manasi V. Ketkar, Harshinder Singh, and Bojan Cukic. Performance analysis of iris-based identication system at the matching score level. IEEE Transactions on Information Forensics and Security, 1(2):154168, June 2006. Daniel Schonberg and Darko Kirovski. Eyecerts. IEEE Transactions on Information Forensics and Security, 1(2):144153, June 2006. Kelly Smith, V. Paul Pauca, Arun Ross, Todd Torgersen, and Michael King. Extended evaluation of simulated wavefront coding technology in iris recognition. In Biometrics: Theory, Applications, and Systems, Sept 2007. Byungjun Son, Hyunsuk Won, Gyundo Kee, and Yillbyung Lee. Discriminant iris feature and support vector machines for iris recognition. In Int. Conf. on Image Processing, volume 2, pages 865868, 2004. Caitung Sun, Chunguang Zhou, Yanchun Liang, and Xiangdong Liu. Study and improvement of iris location algorithm. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 436442, January 2006. Zhenan Sun, Tieniu Tan, and Xianchao Qiu. Graph matching iris image blocks with local binary pattern. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 366372, January 2006. Zhenan Sun, Tieniu Tan, and Yunhong Wang. Robust encoding of local ordinal measures: A general framework of iris recognition. In Proc. BioAW Workshop, pages 270282, 2004. Zhenan Sun, Yunhong Wang, Tieniu Tan, and Jiali Cui. Cascading statistical and structural classiers for iris recognition. In Int. Conf. on Image Processing, pages 12611262, 2004. Zhenan Sun, Yunhong Wang, Tieniu Tan, and Jiali Cui. Robust direction estimation of gradient vector eld for iris recognition. In Int. Conf. on Pattern Recognition, pages II: 783786, 2004. 
Zhenan Sun, Yunhong Wang, Tieniu Tan, and Jiali Cui. Improving iris recognition accuracy via cascaded classiers. IEEE Transactions on Systems, Man, and Cybernetics, 35(3):435441, August 2005. Eric Sung, Xilin Chen, Jie Zhu, and Jie Yang. Towards non-cooperative iris recognition systems. In Int. Conf. on Control, Automation, Robotics and Vision, volume 2, pages 990995, Dec 2002. Hanho Sung, Jaekyung Lim, Ji-hyun Park, and Yillbyung Lee. Iris recognition using collarette boundary localization. In Int. Conf. on Pattern Recognition, pages IV: 857860, 2004. Hironobu Takano, Maki Murakami, and Kiyomi Nakamura. Iris recognition by a rotation spreading neural network. In IEEE Int. Joint Conf. on Neural Networks, pages 4:25892594, July 2004. Vince Thomas, Nitesh Chawla, Kevin Bowyer, and Patrick Flynn. Learning to predict gender from iris

[152]

[153]

[154]

[155]

[156]

[157]

[158]

[159]

[160]

[161]

[162]

[163]

[164]

[165]

images. In Biometrics: Theory, Applications, and Systems, Sept 2007. Peeranat Thoonsaengngam, Kittipol Horapong, Somying Thainimit, and Vutipong Areekul. Ecient iris recognition using adaptive quotient thresholding. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 472478, January 2006. Jason Thornton, Marios Savvides, and B.V.K. Vijaya Kumar. Robust iris recognition using advanced correlation techniques. In Second Int. Conf. on Image Analysis and Recognition, Springer Lecture Notes in Computer Science 3656, pages 10981105, 2005. Jason Thornton, Marios Savvides, and B.V.K. Vijaya Kumar. Enhanced iris matching using estimation of in-plane nonlinear deformations. In SPIE 6202: Biometric Technology for Human Identication III, pages 62020:E1E11, 2006. Jason Thornton, Marios Savvides, and B.V.K. Vijaya Kumar. A Baysian approach to deformed pattern matching of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4):596606, April 2007. Jason Thornton, Marios Savvides, and B.V.K. Vijaya Kumar. An evaluation of iris pattern representations. In Biometrics: Theory, Applications, and Systems, Sept 2007. Qi-Chuan Tian, Quan Pan, Yong-Mei Cheng, and Quan-Xue Gao. Fast algorithm and application of hough transform in iris segmentation. In Int. Conf. on Machine Learning and Cybernetics, volume 7, pages 39773980, August 2004. Christel-loic Tisse, Lionel Martin, Lionel Torres, and Michel Robert. Person identication technique using human iris recognition. In Vision Interface, pages 294 299, 2002. Mihran Tuceryan. Moment based texture segmentation. In Pattern Recognition Letters, volume 15, pages 659668, 1994. University of Bath iris image database. http://www.bath.ac.uk/elec-eng/ research/sipg/irisweb/database.htm. Mayank Vatsa, Richa Singh, and P. Gupta. Comparison of iris recognition algorithms. In Int. Conf. on Intelligent Sensing and Information Processing, pages 354358, 2004. Mayank Vatsa, Richa Singh, and Afzel Noore. 
Reducing the false rejection rated of iris recognition using textural and topological features. Int. Journal of Signal Processing, 2(2):6672, 2005. Michael Della Vecchia, Thomas Chmielewski, Ted Camus, Marcos Salganico, and Michael Negin. Methodology and apparatus for using the human iris as a robust biometric. In SPIE Proceedings on Opththalmic Technologies VIII, volume 3246, pages 65 74, San Jose, CA, Jan 1998. Paul Viola and Michael Jones. Robust real-time object detection. In Int. Workshop on Statistical and Computational Theories of Vision - Modeling, Learning, Computing, and Sampling, 2001. Lakin Wecker, Faramarz Samavati, and Marina Gavrilova. Iris synthesis: a reverse subdivision application. In Third Int. Conf. on Computer Graphics

36

[166]

[167]

[168]

[169]

[170]

[171]

[172]

[173] [174]

[175]

[176]

[177]

[178]

[179]

[180]

[181]

[182]

and Interactive Techniques in Australasia and South East Asia, pages 121125, 2005. Zhuoshi Wei, Tieniu Tan, and Zhenan Sun. Nonlinear iris deformation correction based on Gaussian model. In Springer LNCS 4642: Int. Conf. on Biometrics, pages 780789, Aug 2007. Zhuoshi Wei, Tieniu Tan, Zhenan Sun, and Jiali Cui. Robust and fast assessment of iris image quality. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 464471, January 2006. Richard P. Wildes. Iris recognition: An emerging biometric technology. Proceedings of the IEEE, 85(9):13481363, September 1997. Richard P. Wildes. Iris recognition. In Biometric Systems: Technology, Design and Performance Evaluation, pages 6395. Spring-Verlag, 2005. Richard P. Wildes, Jane C. Asmuth, Gilbert L. Green, Steven C. Hsu, Raymond J. Kolczynski, James C. Matey, and Sterling E. McBride. A system for automated iris recognition. Second IEEE Workshop on Applications of Computer Vision, pages 121128, 1994. Richard P. Wildes, Jane C. Asmuth, Gilbert L. Green, Steven C. Hsu, Raymond J. Kolczynski, James C. Matey, and Sterling E. McBride. A machine vision system for iris recognition. Machine Vision and Applications, 9:18, 1996. Wildes et al. Automated, non-invasive iris recognition system and method. U.S. Patent No. 5,572,596 and 5,751,836, 1996 and 1998. Harry Wyatt. A minimum wear-and-tear meshwork for the iris. Vision Research, 40:21672176, 2000. GuangZhu Xu, ZaiFeng Zhang, and YiDe Ma. Automatic iris segmentation based on local areas. In Int. Conf. on Pattern Recognition, pages 505508, August 2006. Svetlana Yanushkevich, Adrian Stoica, Vlap Shmerko, and Denis Popel. Biometric Inverse Problems. CRC Press, 2005. Peng Yao, Jun Li, Xueyi Ye, Zhenquan Zhuang, and Bin Li. Iris recognition algorithm using modied loggabor lters. In Int. Conf. on Pattern Recognition, pages 461464, August 2006. Xueyi Ye, Peng Yao, Fei Long, and Zhenquan Zhuang. Iris image real-time pre-estimation using compound neural network. In Springer LNCS 3832: Int. 
Conf. on Biometrics, pages 450456, January 2006. Li Yu, Kuanquan Wang, and David Zhang. A novel method for coarse iris classication. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 404410, January 2006. Alan Yuille, Peter Hallinan, and David Cohn. Feature extraction from faces using deformable templates. Int. Journal of Computer Vision, 8(2):99111, 1992. A. Zaim, M. Quweider, J. Scargle, J. Iglesias, and R. Tang. A robust and accurate segmentation of iris images using optimal partitioning. In Int. Conf. on Pattern Recognition, pages 578581, August 2006. David Zhang, Li Yu, and Kuanquan Wang. The relative distance of key point based iris recognition. Pattern Recognition, 40(2):423430, 2007. Peng-Fei Zhang, De-Sheng Li, and Qi Wang. A novel iris recognition method based on feature fusion. In

Int. Conf. on Machine Learning and Cybernetics, pages 36613665, 2004. [183] Zhang et al. Method of measuring the focus of close-up image of eyes. U.S. Patent No. 5,953,440, 1999. [184] Jinyu Zuo and Natalia A. Schmid. A model based, anatomy based method for synthesizing iris images. In Springer LNCS 3832: Int. Conf. on Biometrics, pages 486492, January 2006.

37
