
IRIS RECOGNITION SYSTEM USING MATLAB

A PROJECT REPORT
Submitted by

KAMAL MITRA Roll No. 12601012024

In fulfillment of the requirements for the award of the degree of


MASTER OF COMPUTER APPLICATION

HERITAGE INSTITUTE OF TECHNOLOGY

WEST BENGAL UNIVERSITY OF TECHNOLOGY


KOLKATA
DEC 2014

BONAFIDE CERTIFICATE

Certified that this project report IRIS RECOGNITION SYSTEM USING MATLAB
is the bonafide work of KAMAL MITRA (Roll No: 12601012024) who carried out
the project work under my supervision.

Signature of HOD Signature of Mentor


____________________________ _____________________________
Dr. Siuli Roy Prof. Subhajit Rakshit
Head, Computer Application Centre Asst. Prof, Computer Application Centre,
Heritage Institute of Technology, Heritage Institute of Technology,
Kolkata. Kolkata.

__________________________
EXAMINER

ACKNOWLEDGEMENT

The project Iris Recognition System Using Matlab would not have been
possible without the constant guidance of our guide, Prof. Subhajit Rakshit,
Computer Application Centre. His knowledge and deep insight helped us improve
our understanding considerably, and we are immensely thankful to him for his
valuable ideas on improving the project.
Thanks to Prof. Souvik Basu, our Departmental Coordinator, for all the help he
provided. His valuable insight helped in the betterment of the project.
Finally, thanks to our Head of the Department, Dr. Siuli Roy, for her valuable
thoughts and the many useful concepts that helped us reach our goal. Without her
help it would have been very tough to achieve success.
We would also like to acknowledge the use of the MATLAB toolkit developed by
MathWorks. The tool is well documented, and it was a great benefit to be able to
use it. Text from its documentation has also been used in this report.

_________________

KAMAL MITRA
Roll No: 12601012024

MCA, Computer Application Center


Heritage Institute of Technology

CONTENTS

Chapter 1  Introduction
    1.1 Introduction of the Iris
    1.2 Importance of Eye Recognition
    1.3 Iris Recognition Process

Chapter 2  Iris Recognition: Literature Overview
    2.1 Image Acquisition
    2.2 Iris Localization or Segmentation
        2.2.1 Feature Detection
            Classical Operators
            Interesting Points Feature Detectors: The Moravec Operator
            Corner Feature Detectors: The Plessey Operator
            Neural-Fuzzy Feature Detectors
        2.2.2 Boundary Detection
            Integro-Differential Function
            Hough Transform
        2.2.3 Existing Approaches
            Daugman Approach
            Wildes Approach
            Ya-Ping et al. Approach
    2.3 Iris Normalization and Unwrapping
        Daugman's Rubber Sheet Model
        Virtual Circles
        Image Registration
    2.4 Feature Encoding
        Gabor Filter
        Log-Gabor Filter
        Zero Crossings of the 1-D Wavelet
        Haar Wavelet
    2.5 Matching Algorithms
        Hamming Distance
    2.6 Previous Work

Matlab Code
Future Scope
References
Chapter 1

INTRODUCTION
1.1 Introduction
The iris is a protected internal organ of the eye, located behind the cornea
and the aqueous humor but in front of the lens. A visible property of the
iris is the random morphogenesis of its minutiae. The phenotypic
expression of even two irises with the same genotype (as in identical twins,
or the pair possessed by one individual) has uncorrelated minutiae; the iris
texture has no genetic penetrance in its expression and is chaotic. In these
respects, the uniqueness of every iris parallels the uniqueness of every
handwritten signature, but the iris enjoys further practical advantages over
the handwritten signature.

1.2. Importance of Eye Recognition


The main reason for studying the human iris is the identification of a person.
Alphonse Bertillon, a French criminologist who identified criminals by their
irises, was the first to address this problem, around 1880.
In 1987, the problem was studied by Leonard Flom and Aran Safir. According
to their patent [4], the iris is stable throughout human life; all of its features
are quite stable in number and position, except for rare anomalies. It has been
found that every iris is unique and that no two people have identical irises.

In 1994, John Daugman patented his "Biometric personal identification
system based on iris analysis" [41]. The image analysis algorithm finds the
iris in a live video image of a person's face. It isolates the eye, defines a
circular pupillary boundary between the iris and pupil portions of the image,
defines another circular boundary between the iris and sclera portions of the
image, and then defines a plurality of annular bands within the iris image.
The system then computes the iris code and compares it with stored iris codes
using the Hamming distance.
In 1998, Richard Wildes patented the "Automated non-invasive recognition
system and method" [11]. This system uses two cameras: the first, with low
resolution, directs the second, high-resolution one. The image obtained from
the second camera is reduced to the iris using the pupillary boundary, the
limbic boundary and the eyelid boundary. In the next step, the reduced image
is compared with the stored images.
In 1999, Mitsuji Matsushita patented an "Iris identification system and iris
identification method" [42]. This iris identification system is used to identify
customers: the camera first locates the head of a customer, finds the position
of the eyes, zooms in and photographs the irises. The computer then extracts
only the portion of the iris data that is significant for identifying the customer.
Using the human IRIS for identification has some advantages, but also
some disadvantages.

Advantages:
o IRIS is a highly protected, internal organ of the eye
o IRIS is visible from a distance
o IRIS patterns possess a high degree of randomness
o changing the size of the pupil confirms natural physiology
o limited genetic penetrance
o IRIS is stable throughout life.

Disadvantages:
o small target (1cm) to acquire from a distance (1m)
o moving target
o IRIS must be located behind a curved, wet and reflecting surface
o IRIS is obstructed by eyelashes, eyelids and reflections
o Its deformations are non-elastic as the pupil changes size.

1.3 Iris Recognition Process

Figure 1. Steps of Iris Recognition System

The above figure summarizes the steps to be followed when doing iris
recognition.
Step 1: Image acquisition, the first phase, is one of the major challenges of
automated iris recognition since we need to capture a high-quality image of
the iris while remaining noninvasive to the human operator.
Step 2: Iris localization takes place to detect the edge of the iris as well as
that of the pupil; thus extracting the iris region.

Step 3: Normalization transforms the iris region so that it has fixed
dimensions, thereby removing the dimensional inconsistencies between eye
images caused by the stretching of the iris as the pupil dilates under varying
levels of illumination.
Step 4: The normalized iris region is unwrapped into a rectangular region.
Step 5: Finally, it is time to extract the most discriminating feature in the iris
pattern so that a comparison between templates can be done. Therefore, the
obtained iris region is encoded using wavelets to construct the iris code.
As a result, a decision can be made in the matching step.

Chapter 2

Iris Recognition: Literature Overview


2.1 Image Acquisition
2.2 Iris Localization or Segmentation
2.3 Iris Normalization and Unwrapping
2.4 Feature Encoding
2.5 Matching Algorithm
2.6 Previous Work

2.1 Image Acquisition

Iris recognition has been an active research area in recent years, due to its
high accuracy and the encouragement of both government and private
entities to replace traditional security systems, which suffer from a noticeable
margin of error. However, early research was hindered by the lack of iris
images. Now several free databases exist on the internet for testing purposes.
A well-known database is the CASIA Iris Image Database (version 1.0) [5],
provided by the Chinese Academy of Sciences. The CASIA Iris Image
Database includes 756 iris images from 108 eyes collected over two sessions.
The images, taken in almost perfect imaging conditions, are noise-free. More
realistic imaging conditions were taken into consideration in the UBIRIS
Database [6], which includes 1877 images from 241 persons collected over
two sessions. The images collected in the first photography session were
low-noise images, whereas images collected in the second session were
captured under natural lighting, thus allowing reflections, different contrast
levels, and luminosity and focus problems. Such images might be a good
model for realistic situations with minimal collaboration from the subjects.

Other databases exist, such as the LEI [7] and the UPOL [8]. UPOL
Database includes images from the internal part of the eye, having the
localization work almost done. As for the LEI Database, it includes images
with noise, however, it is a small database of 120 grayscale images.

2.2 Iris Localization or Segmentation
Iris recognition is based on the fact that the human iris contains unique
features that completely distinguish a person; the actual information I am
looking for is found within the iris patterns. Thus, it is logical that the first
step in implementing such a biometric system is isolating the iris region
from the other parts of the image, which are of no relevance. The iris region
is approximated by a ring, defined by the iris/sclera boundary and the
iris/pupil boundary. Thus, in this step, I should be able to detect these
boundaries and isolate the part of the image within.
Another important issue to take care of, in this step, is removing any
corruption to the iris region; eyelids and eyelashes, sometimes, occlude
parts of the iris region, thus hiding important information and at the same
time resulting in errors (it will be falsely represented as iris pattern data).
Another distorting factor is specular reflections which occur within the iris
region. In iris localization, a technique is required to locate and isolate the
iris region and exclude the corruptors as well.
In 1993, John Daugman [9] proposed one of the most significant
approaches in iris recognition, which became the basis of many functioning
systems today. Daugman applies an integro-differential operator to isolate
the boundaries. Since then, several approaches were proposed with small
differences [10]. In 1997, however, Richard Wildes [11] proposed another
approach which grabbed high interest in the field and became one of the
most widely used approaches. Wildes approach is divided into two steps;
he converts the image into binary edge map using a gradient-based edge
detector then applies the Hough transform to detect the boundaries. Many

papers [12], [13], [14], [15], [16], [17] have been proposed which apply slight
changes over Wildes approach. All of these approaches are based on first
finding a binary edge map using different operators, and then applying
some function to detect the boundaries. Since this is the case, in what
follows I am going to present the different feature detectors used to get a
binary edge map, then I shall present the different boundary detection
approaches. After that, I am going to present an overview of different
approaches [3], [11], and [18], which are, in some sense, combinations of
the Daugman and Wildes approaches.

2.2.1 Feature Detection


Feature detection is the first step in image segmentation; the correctness of
localization is highly dependent on true edge detection of the boundaries
of the iris. The underlying problem, however, is that: given two adjacent
regions, determine whether or not they are "different enough" from each
other to declare the presence of an edge. Many researchers have addressed
this problem and several operators have been proposed. The operators are
mainly classified into Classical, Corner, Interesting Point, and Neural-
Fuzzy detectors.

2.2.1.1 Classical Operators


Classical operators work very well on high-contrast images; however, they
fail to detect edges with a small gray-scale jump [19]. They are classified
into three main categories: Gradient, Zero-Crossing, and Canny, in
addition to improved Classical operators. In what follows, I shall introduce

the Gradient-based Roberts, Sobel and Prewitt edge detection operators,
the Laplacian-based LOG edge detection operator (special case of the Zero-
Crossing operators), and the Canny edge detection operator, in addition to
the Compass operator.

1. Gradient-Based Operators
I have stated earlier that an edge is characterized by noticeable change in
intensity. The fact that the gradient of an image points in the direction of
most rapid change in intensity motivated the research on Gradient-based
operators. Gradient-based operators detect edges in an image by finding the
gradient's magnitude at each pixel and comparing it to some threshold to
determine whether it is an edge pixel or not [20]. An edge pixel is
characterized by two variables: the edge strength, which is the magnitude
of the gradient at the given pixel, and the edge direction, which is the angle
of the gradient at the given pixel [20].
The gradient of an image f is defined as:

∇f = [ ∂f/∂x , ∂f/∂y ]

The edge strength is defined as the gradient magnitude:

|∇f| = sqrt( (∂f/∂x)² + (∂f/∂y)² )

And the edge direction is defined as:

θ = arctan( (∂f/∂y) / (∂f/∂x) )
Figure 3.1 shows the gradient at a given pixel in an image; note that its
direction is that of maximum change in intensity.

Figure 3.1 Gradient of an Image

The above definitions of the gradient apply to continuous functions.


However, the image under consideration is digital; that is, I need to find the
gradient of a discrete image, for which the continuous definition does not
directly apply. Thus, discrete approximations were proposed using operators
such as Roberts, Sobel, and Prewitt. In what follows I shall introduce these
three operators briefly.

Roberts Operator
The Roberts operator provides a simple, quick approximation of the
gradient's magnitude; the operator performs a 2-D spatial gradient
measurement on the image [21] and approximates the gradient for a pixel
P(i, j) as [22]:
G[f(i,j)] = |f(i,j) - f(i+1,j+1)| + |f(i+1,j) - f(i,j+1)|

A more accurate representation consists of a pair of 2x2 convolution
kernels or masks Gx and Gy [21], [22]; Gx and Gy are given as follows:

Gx = [ 1  0 ;  0 -1 ]        Gy = [ 0  1 ; -1  0 ]

Applying the masks to the image and doing the required calculations, the
gradient can be computed as follows [22]:
G[f(i,j)] = |Gx|+|Gy|

The incentive behind the wide use of the Roberts operator is its fast
running time [21]. As for its main disadvantages [21], this operator is very
sensitive to noise, and fails to detect edges characterized by small intensity-
jumps.
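As a concrete illustration, the Roberts approximation can be applied in MATLAB with two small convolutions. This is only a sketch: the input file name and the threshold are assumptions made for illustration.

I = im2double(imread('eye.bmp'));   % assumed grayscale input image
Gx = [1 0; 0 -1];                   % Roberts cross kernels
Gy = [0 1; -1 0];
gx = conv2(I, Gx, 'same');
gy = conv2(I, Gy, 'same');
G  = abs(gx) + abs(gy);             % G[f(i,j)] = |Gx| + |Gy|
edges = G > 0.2;                    % illustrative threshold
imshow(edges);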

Sobel Operator
The Sobel operator is similar in its functionality to the Roberts operator.
However, it uses larger convolution masks. The operator consists of two
3x3 masks Gx and Gy, which are given by [22]:

Gx = [ -1  0  1 ; -2  0  2 ; -1  0  1 ]        Gy = [ 1  2  1 ; 0  0  0 ; -1 -2 -1 ]

The gradient is still approximated by applying the masks to the image and
computing its magnitude as [21]:

G[f(i,j)] = |Gx| + |Gy|

The following prototype will make things more clear. Suppose that we
want to compute the gradient for the pixel Pi(i, j) given the input image.
The following is a part of the input image:

Gx and Gy can be found as follows:

The Sobel operator does a better job of detecting edges than the Roberts
operator and is less sensitive to noise, due to its larger convolution masks
[21]. However, for the same reason, the Sobel operator is slower than the
Roberts operator.

Prewitt Operator
The Prewitt operator is very similar to that of Sobel. It consists of 3x3
convolution masks Gx and Gy given as:

Gx = [ -1  0  1 ; -1  0  1 ; -1  0  1 ]        Gy = [ 1  1  1 ; 0  0  0 ; -1 -1 -1 ]
The gradient is calculated in the same way as before. The only difference is
that the Prewitt operator does not give any extra emphasis to the pixels that
are close to the centers of the masks ([i,j]), whereas the Sobel operator gives
them a higher weight (a factor of 2) [22].
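For comparison, the Image Processing Toolbox also exposes these gradient detectors directly through the edge function; the file name below is an assumption.

I = imread('eye.bmp');              % assumed grayscale input image
bwRoberts = edge(I, 'roberts');
bwSobel   = edge(I, 'sobel');
bwPrewitt = edge(I, 'prewitt');
figure; imshowpair(bwSobel, bwPrewitt, 'montage');
title('Sobel (left) and Prewitt (right) edge maps');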

2. Zero-Crossing Based Operators


Zero-Crossing based operators detect the existence of an edge by
determining whether or not the Laplacian or the estimated second
directional derivative has a zero-crossing within the pixel [13]. Edge
detection for such operators consists of three steps [22]: filtering,
enhancement, and detection. The different Zero-Crossing based operators
differ in the way the first two steps are implemented. In what follows, I am
going to present an overview of the LOG operator.

LOG Operator
The LOG operator, or Laplacian of Gaussian, is a special kind of Zero-
Crossing based operators, where filtering or smoothing is done using a
Gaussian filter and enhancement is done using its second derivative [22];
filtering and enhancement can be done directly by convolving the image
with a linear filter which is the Laplacian of the Gaussian filter. After that,
edge detection is performed by looking for zero crossings; Figure 3.2
clarifies this.

Figure 3.2 First and Second Derivative of an Edge Illustrating the Zero-
Crossing

Notice that a change in the intensity of the image, which defines an edge,
will result in a zero crossing if taking the second derivative of the function.

3. Canny Operator
The Canny edge detector performs the following algorithm [22]:
First, smooth the image I(i,j) by convolving it with a Gaussian smoothing
filter G(i,j,σ), where σ, the spread of the Gaussian, controls the degree of
smoothing. The result is the smoothed data S[i,j].

Next, compute the gradient magnitude and orientation using finite-
difference approximations for the partial derivatives; the gradient is
computed using two 2x2 convolution masks (similar to the Roberts
approximation).
Then, apply non-maxima suppression to the gradient magnitude; in this
approach, an edge point is a pixel whose strength is locally maximum in
the direction of the gradient. The result of this step is data which is almost
zero everywhere except at local maxima points.
Finally, use the double thresholding algorithm to detect and link edges; the
data computed above contains many false edge fragments caused by noise
and fine texture, and these must be removed. Thresholding might be a good
solution for that. However, an incorrect choice of threshold can easily cause
the detection of false edges or the exclusion of real ones. To overcome this
problem, two threshold values are applied to the data (one double the
other). With these threshold values, two thresholded edge images are
produced. One of the images has gaps in the contours but contains fewer
false edges. The algorithm works by bridging these gaps, checking the
second image at the locations of the 8-neighbours for edges that can be
linked to the contour.
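In MATLAB, the whole Canny pipeline above is available through the edge function; the thresholds and sigma below are illustrative values, not tuned settings from this project.

I  = imread('eye.bmp');                  % assumed grayscale input image
bw = edge(I, 'canny', [0.05 0.15], 2);   % [low high] thresholds, sigma = 2
imshow(bw); title('Canny edge map');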

4. Improved Edge Detectors: Compass Operator

The Compass operator [24] is a generalization of the Canny operator [23].
In the Compass operator, smoothing, which might exclude some edges and
corners, is generalized to vector quantization, which results in a more
detailed description of the image.
The Compass operator divides an image window into two sides and
compares them to see if they are different. This operator, as opposed to the
Classical operators, allows multiple values, which are quantization of the
different colors, to exist on each side. Quantization of the colors is done by
assigning them to a variable number of values, each with a weight
depending on how many pixels have that color in the image. The Compass
operator uses the EMD (Earth Mover's Distance) to compute the distance
between the two sides. The maximum EMD distance, oriented in some
angle, is defined as the strength of the edge. The strength combined with
its orientation form a quantity similar to a gradient. Thus standard
techniques such as non-maximal suppression and thresholding can be
performed to extract edges. The Compass operator seemed to outperform
the Canny operator; results showed that the Compass operator succeeded
in detecting edges and corners where the Canny operator failed to do so;
Figure 3.3 shows two images subjected to the two operators.

Figure 3.3 Feature Detection Using Canny and Compass Operators
[24]

2.2.1.2 Interesting Points Feature Detectors: The Moravec Operator

The Moravec operator [25] was designed to detect Points of Interest in an


image. Points of Interest are defined as points where there is a high
intensity variation in each of the eight directions (up and down, left and
right, and the four diagonals); according to this definition, the Moravec
detector was considered an edge detector, though that is not what Moravec
had in mind. The Moravec algorithm for detecting interesting points is as follows:

The operator takes as an input a gray scale image I(x,y), a window


size, and the threshold T, and outputs a map indicating the
location of detected interesting points.

Calculate the intensity variation V(u,v)(x, y) from a shift (u, v) of each
pixel, in each of the eight directions, as:

V(u,v)(x, y) = sum over the window of ( I(x+u, y+v) - I(x, y) )²

where the shifts (u, v) considered are:

(1,0), (1,1), (0,1), (-1,1), (-1,0), (-1,-1), (0,-1), (1,-1)

Construct the map of interesting points by calculating the cornerness
measure C(x, y) for each pixel (x, y):

C(x, y) = min over (u,v) of V(u,v)(x, y)

Threshold the interest map by setting all C(x, y) below a threshold


T to zero.

Perform non-maximal suppression to find local maxima.

All non-zero points remaining in the map are Interesting Points.

Moravec's detector, though computationally efficient, suffers from several
deficiencies that have made it obsolete. Its response is anisotropic, because
the intensity variation is evaluated only for a discrete set of shifts, so the
operator is not rotationally invariant. It is also sensitive to noise, which
causes the detection of false interesting points along edges and at isolated
pixels.
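A minimal MATLAB sketch of the Moravec operator is given below; the input file name, window size and threshold are assumptions made for illustration.

I = im2double(imread('eye.bmp'));        % assumed grayscale input image
T = 0.05;                                % interest threshold (illustrative)
shifts = [1 0; 1 1; 0 1; -1 1; -1 0; -1 -1; 0 -1; 1 -1];
C = inf(size(I));
for s = 1:size(shifts,1)
    u = shifts(s,1); v = shifts(s,2);
    % squared intensity change for this shift, summed over a 3x3 window
    E = (circshift(I, [v u]) - I).^2;
    V = conv2(E, ones(3), 'same');
    C = min(C, V);                       % cornerness = minimum over the shifts
end
C(C < T) = 0;                            % threshold the interest map
% non-maximal suppression: keep only 3x3 local maxima
points = C .* (C == ordfilt2(C, 9, ones(3)));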

2.2.1.3 Corner Feature Detectors: The Plessey Operator


The Plessey corner detector [26] is the most widely used operator in corner
detection because it addresses many of the deficiencies of the Moravec
operator [25]; the operator uses a Gaussian window to achieve a more
accurate estimate of the local intensity variation and gives higher emphasis
to pixels close to the one being considered.
In addition, the operator uses a function that calculates the intensity
variation in any direction, as opposed to the Moravec operator, which is
limited to eight directions.

The Plessey algorithm for detecting corners is as follows:


The operator takes as an input a grayscale image I(x,y), Gaussian
window, k value, and a threshold T, and outputs a map indicating
the location of each detected corner.
Calculate the autocorrelation matrix M:

M = [ A  C ; C  B ]

where A = (∂I/∂x)² ⊗ w,  B = (∂I/∂y)² ⊗ w,  C = (∂I/∂x)(∂I/∂y) ⊗ w,

⊗ is the convolution operator, and w is the Gaussian window.

Construct the cornerness map by calculating the cornerness measure
C(x, y) for each pixel (x, y):

C(x,y) = det(M) - k (trace(M))²
det(M) = λ1 λ2 = AB - C²
trace(M) = λ1 + λ2 = A + B
k = constant

Threshold the interest map by setting all C(x, y) below a threshold


T to zero.
Perform non-maximal suppression to find local maxima.
All non-zero points remaining in the cornerness map are corners.

The Plessey operator, though it outperforms the Moravec operator, has one
drawback relative to it: the Plessey operator is computationally demanding.
However, this is not a serious problem given the increase in available
computational power.
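A minimal MATLAB sketch of the Plessey (Harris) measure is shown below; the window size, the constant k and the threshold are illustrative choices.

I  = im2double(imread('eye.bmp'));       % assumed grayscale input image
k  = 0.04;                               % Harris/Plessey constant
w  = fspecial('gaussian', 5, 1.0);       % Gaussian window
[Ix, Iy] = gradient(I);                  % image derivatives
A = conv2(Ix.^2,  w, 'same');            % smoothed Ix^2
B = conv2(Iy.^2,  w, 'same');            % smoothed Iy^2
C = conv2(Ix.*Iy, w, 'same');            % smoothed Ix*Iy
R = (A.*B - C.^2) - k*(A + B).^2;        % det(M) - k*trace(M)^2
R(R < 0.01*max(R(:))) = 0;               % threshold the cornerness map
corners = (R == ordfilt2(R, 9, ones(3))) & (R > 0);   % non-maximal suppression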

2.2.1.4 Neural-Fuzzy Feature Detectors


The main approach behind such feature detectors is to train a neural net
(NN) edge classifier by training the NN on a population of binary image
prototypes scored to fuzzy values by a classical operator.
Feature operators assign to the pixels in an image a label or value which
specifies the presence or absence of a feature characteristic. This label, or
fuzzy value, belongs to a membership function defining the extent to
which the pixel might be an edge or a corner. While dealing with the

Classical operators, the pixel is labeled crisply by thresholding the fuzzy
value obtained.
The simplest neural network edge detector was that proposed by Weller
[27] who used a training set of 20 examples of edge-situations in a 3x3
window to train a feedforward/back-propagation (FF/BP) neural network.
Another interesting approach was proposed by Bezdek et al. [28], [29].
The Bezdek approach combines the training of an FF/BP neural network, using a
set of 256 examples of edge situations in a 3x3 window, with a labeling
scheme (Edged-ness) based on fuzzy membership values scored by a
Sobel operator. The Bezdek operator seems to perform much better than
the classical Sobel operator, especially in detecting edges characterized by a
small intensity jump. Figure 3.4 presents two feature-detected images, one
using the classical Sobel operator and one using the NN-trained operator;
the difference in results is clearly visible.

Figure 3.4: 316x500x24-bit color Kosh: Feature Detection Using Classical
Sobel Operator and the NN trained Sobel operator (respectively). [19]

On the other hand, the Bezdek operator seems to suffer from some deficiencies
[19]; the Bezdek approach cannot be extended to larger window sizes, because
the training time increases sharply and might go up to tens of days for 5x5
windows or larger.
Cohen et al. [19] were able to find a new approach which solves this
problem and still gives excellent results, even outperforming the Bezdek
approach. Their strategy is based on training on binary prototypes (crisp
values); the fuzzy values are defuzzified using a threshold of 0.5. This
technique reduced the training of a 3x3 Sobel operator to about 1/15 of the
time required by the Bezdek approach. It also made the extension to larger
window sizes possible; the authors trained a Plessey operator (5x5
windows) and obtained excellent results.
After introducing the different feature detectors, I am going to introduce
the different approaches for boundary detection.

2.2.2 Boundary Detection


Different approaches have been used in detecting the outer and inner
contours of the iris boundary. Daugman [3] uses an integro-differential
operator on the raw image (he does not apply feature detectors) to isolate
the iris, whereas Wildes [11] uses the circular Hough transform on the binary
edge map.
edge map. Other approaches have been proposed, such as Active Contour
Models [30], and simple Circular Summation [31].

2.2.2.1 Integro-Differential Function
The integro-differential function finds, for an image I(x,y), the maximum of
the absolute value of the convolution of a smoothing function Gσ(r) with the
partial derivative, with respect to r, of the normalized contour integral of
the image along an arc ds of a circle of centre (x0, y0) and radius r [3]:

max(r, x0, y0) | Gσ(r) * ∂/∂r ∮(r, x0, y0) I(x,y) / (2πr) ds |
This integro-differential operator serves to find both boundaries of the
iris. The operator searches for the circular path along which there is maximum
change in pixel intensity, by varying the radius and the centre x and y position
of the circular contour. The operator is applied iteratively, with the amount
of smoothing progressively reduced, in order to attain precise localization.
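The following MATLAB fragment sketches a coarse grid search of this kind. It is not Daugman's implementation: the image name, search ranges and Gaussian width are assumptions, and only a discretized version of the contour integral is computed.

I = im2double(imread('eye.bmp'));             % assumed grayscale input image
theta = linspace(0, 2*pi, 64);                % samples along each candidate circle
radii = 20:50;                                % candidate radii in pixels (assumed)
best = -inf;
for y0 = 60:4:size(I,1)-60                    % coarse grid of candidate centres
  for x0 = 60:4:size(I,2)-60
    m = zeros(size(radii));
    for k = 1:numel(radii)                    % mean intensity on each circle
      xs = round(x0 + radii(k)*cos(theta));
      ys = round(y0 + radii(k)*sin(theta));
      m(k) = mean(I(sub2ind(size(I), ys, xs)));
    end
    d = conv(diff(m), fspecial('gaussian', [1 5], 1), 'same');  % smoothed dI/dr
    [val, idx] = max(abs(d));
    if val > best
      best = val; cx = x0; cy = y0; cr = radii(idx);
    end
  end
end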

2.2.2.2 Hough Transform


From the edge map, votes are cast in Hough space for the centre
coordinates (xc, yc) and the radius r of circles passing through each edge
point. The Hough transform for a circular boundary and a set of recovered
edge points (xj, yj), j = 1..n, is defined as [11]:

H(xc, yc, r) = sum over j = 1..n of h(xj, yj, xc, yc, r)

where

h(xj, yj, xc, yc, r) = 1 if g(xj, yj, xc, yc, r) = 0, and 0 otherwise

and

g(xj, yj, xc, yc, r) = (xj - xc)² + (yj - yc)² - r²
The Hough transform has a few deficiencies; it fails to detect some circles
while performing edge detection because it depends on a threshold value;
this value might not be critically specified, which results in edge points being
neglected. Another point worth noting is that the Hough transform is
computationally exhaustive, leading to low speed efficiency; thus, it might not
be suitable for real-time applications.
It is worth noting that the integro-differential operator does not suffer from
the thresholding problem, since it works on raw derivative information.
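In practice, MATLAB's Image Processing Toolbox provides a circular Hough transform through imfindcircles, which can be used to sketch the boundary search; the radius ranges below are assumptions, and real iris images may need tuning.

I = imread('eye.bmp');                                  % assumed eye image
[pupilC, pupilR] = imfindcircles(I, [20 60],  'ObjectPolarity', 'dark');
[irisC,  irisR ] = imfindcircles(I, [60 140], 'ObjectPolarity', 'dark');
imshow(I); hold on;
viscircles(pupilC(1,:), pupilR(1), 'Color', 'r');       % strongest pupil candidate
viscircles(irisC(1,:),  irisR(1),  'Color', 'g');       % strongest iris candidate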

2.2.3 Existing Approaches

2.2.3.1 Daugman Approach


Daugman's [3] approach to iris localization is based on the integro-differential
operator previously introduced. This operator serves to find both the pupillary
boundary and the limbus boundary of the iris. After finding these boundaries,
Daugman localizes the eyelid boundaries using the same integro-differential
approach with arcuate contours (arcs) and optimally fitted spline parameters.
These localizations result in isolating the iris. However, in cases where there
is noise in the eye image, such as from reflections, this integro-differential
operator fails; this is due to the fact that it works only on a local scale.
Another deficiency of this operator is that it is computationally expensive.

2.2.3.2 Wildes Approach


The Wildes [11] system performs its iris localization in two steps
(histogram-based approach):
1- Binary edge mapping
Map the image intensity information into a binary edge-
map by thresholding the magnitude of the image
intensity gradient.
2- Voting procedure via Hough transform
The edge points vote to instantiate particular contour
parameter values via Hough transforms on
parametric definitions of the iris boundary contours.

In performing edge detection, Wildes biases the derivatives in the vertical
direction for detecting the outer circular boundary of the iris, and in the
horizontal direction for detecting the eyelids. This makes circle localization
more accurate, and it also makes it more efficient.

2.2.3.3 Ya-Ping et al. Approach


This approach [18] is close to the Daugman approach in the sense that it
depends on the same integro-differential operator. However, it makes use of
the Canny operator to find the approximate boundaries first.

Their strategy is as follows:


Rescale the image to reduce computational complexity.
Filter the image using a vertical median filter.
Use the Canny operator to extract the image edges and form a binary image.
Choose the maximum circle (xs, ys, rs), based on a histogram, as the outer
(sclera) boundary.
Search for the inner (pupillary) boundary (xp, yp, rp), where (xp, yp) lies
within the rectangular interval (xp ± 5, yp ± 5), noting that this boundary
lies within the outer boundary.
Now, based on the boundaries found earlier, accurately localize the new
boundaries using the integro-differential operator.
Up to this step, the eyelids and eyelashes have not been extracted from
the image.

Using the Hough transform, search for the two curves satisfying x(t) =
at² + bt + c, where t ∈ [0, 1]. These will determine the edges of the
upper and lower eyelids.
Determine the eyelashes according to one of the following two situations:
o If one line exists in the area below the upper eyelash, it is considered
a separate eyelash.
o If the variance of some given small window in the iris image is less
than a threshold, it is regarded as multiple eyelashes.
Now the eyelashes and eyelids are marked, and thus can be excluded when
the iris is encoded.

2.3 Iris Normalization and Unwrapping


By this point, I should have been able to successfully extract the iris part from
the eye image. Now, in order to allow comparisons between different irises, I
should transform the extracted iris region so that it has fixed dimensions,
thereby removing the dimensional inconsistencies between eye images caused
by the stretching of the iris as the pupil dilates under varying levels of
illumination [30].
Therefore, this normalization process will produce irises with the same fixed
dimensions, so that two photographs of the same iris taken under different
lighting conditions will have the same characteristic features.

However, an important point to take care of when normalizing the
doughnut-shaped iris region to have a constant radius is that, as clearly
shown in Figure 3.5, the centers of the iris and the pupil are not
concentric [3].

Figure 3.5 The Centers of the Pupil and the Iris are not Concentric

2.3.1 Daugmans Rubber Sheet Model


In fact, the homogeneous rubber sheet model devised by Daugman remaps
each point within the iris region to a pair of polar coordinates (r, θ), where the
radius r is on the interval [0, 1] and the angle θ is on the interval [0, 2π]. Then
the normalized iris region is unwrapped into a rectangular region [32].
Figure 3.6 illustrates the mechanism of this model.

Figure 3.6 Daugman's Rubber Sheet Model
The remapping (normalization) of the iris region from Cartesian coordinates
(x, y) to the normalized, non-concentric polar representation is modeled as:

I(x(r,θ), y(r,θ)) → I(r, θ)

where

x(r,θ) = (1 - r) xp(θ) + r xI(θ)
y(r,θ) = (1 - r) yp(θ) + r yI(θ)

I(x,y) is the iris region image, (x,y) are the original Cartesian coordinates,
(r,θ) are the corresponding normalized polar coordinates, and (xp, yp) and
(xI, yI) are the coordinates of the pupil and iris boundaries along the θ
direction [3].

One important issue to note is that although this model accounts for pupil
dilation, imaging distance, and non-concentric pupil displacement, it does not
compensate for rotational inconsistencies. This problem is taken care of in the
matching process, where the iris templates are shifted in the θ direction until
they become aligned.
Using Daugman's rubber sheet model for normalizing the iris region, the
pupil center is considered as the reference point, and radial vectors pass
through the iris region. The radial resolution is the number of data points
selected along each radial line, and the angular resolution is the number of
radial lines going around the iris region.

Since the pupil is not concentric with the iris, a remapping/rescaling formula
is needed to rescale points depending on the angle around the circle. This is
given by

with

where ox, oy represent the displacement of the centre of the pupil relative to
the centre of the iris, r' represents the distance between the edge of the pupil
and the edge of the iris at an angle θ around the region, and rI is the radius
of the iris (refer to Figure 3.7). This remapping formula represents the radius
of the iris region as a function of the angle θ [32].
After getting this normalized polar representation of the iris region, the region
is unwrapped by choosing a constant number of points along each radial line,
irrespective of how narrow or wide the radius is at a particular angle, thus
producing a 2-D array with vertical dimensions of radial resolution and
horizontal dimensions of angular resolution (refer to Figure 3.7). Now, in order
to prevent non-iris region data from corrupting the normalized representation,
another 2-D array is created for marking reflections, eyelashes, and eyelids
detected in the segmentation stage, and data points which occur along the
pupil border or the iris border are discarded.

Figure 3.7 Getting the Radial and the Angular Resolutions
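A minimal MATLAB sketch of this unwrapping is given below; for simplicity it assumes concentric pupil and iris circles with centre (cx, cy) and radii rp and ri (illustrative values), ignoring the non-concentric correction described above.

I  = im2double(imread('eye.bmp'));     % assumed grayscale eye image
cx = 160; cy = 140; rp = 40; ri = 110; % assumed centre and radii in pixels
radialRes  = 20;                       % points along each radial line
angularRes = 240;                      % radial lines around the iris
theta = linspace(0, 2*pi, angularRes);
r     = linspace(0, 1, radialRes)';
R = rp + r*(ri - rp);                  % radii sampled between the two boundaries
X = cx + R*cos(theta);                 % outer products: radialRes x angularRes grids
Y = cy + R*sin(theta);
polarIris = interp2(I, X, Y);          % unwrapped rectangular iris strip
imshow(polarIris, []);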

2.3.2 Virtual Circles


This approach, devised by Boles, differs from other techniques in that
normalization is not done until one attempts to match two iris regions.
Thus, a normalization resolution needs to be chosen, and the same number
of data points must be extracted from each iris and stored along virtual
concentric circles with origin at the center of the pupil [33].

2.3.3 Image Registration


The image registration technique is employed by the Wildes et al. system
[11]. This technique warps a newly acquired image Ia(x,y) into alignment
with a selected database image Id(x,y). The mapping function
(u(x,y), v(x,y)) used to transform the original coordinates should be chosen
so as to make the image intensity values of the new image as close as
possible to those of corresponding points in the reference image, thus
minimizing the following integral:

∫∫ ( Id(x,y) - Ia(x - u, y - v) )² dx dy

Then the following formula is used to get the new coordinates:

where s is a scaling factor and R(φ) is a matrix representing rotation by φ.
In implementation, given a pair of iris images Ia and Id, the warping
parameters s and φ are recovered via an iterative minimization procedure
[11].

2.4 Feature Encoding


Constructing the iris code is the final process. After being able to localize
the iris, it is time to extract the most discriminating feature in its pattern so
that a comparison between templates can be done. The iris pattern
provides two types of information: The amplitude information and the
phase information. As shown by Oppenheim and Lim, and because of the
dependence of the amplitude information on many extraneous factors, only
phase information is used to generate the iris code [34].
Wavelets can be used to decompose the data in the iris region into
components that appear at different resolutions, allowing therefore
features that occur at the same position and resolution to be matched up.

2.4.1 Gabor Filter


Let Ψ(x, y) be any chosen generic 2-D wavelet, which can be called a mother
wavelet, from which I can generate a complete self-similar family of
parameterized daughter wavelets,

where the daughter wavelets incorporate dilations of the mother wavelet in
size by 2^m, translations in position (p, q), and rotations through an angle θ
[35], [39].
An interesting and useful choice for Ψ(x, y) is the complex-valued Gabor
wavelet, which is defined as:

Ψ(x, y) = exp( -π[ (x - x0)²/α² + (y - y0)²/β² ] ) exp( -2πi[ u0(x - x0) + v0(y - y0) ] )

where (x0, y0) specify the wavelet position, (α, β) specify its effective width
and length, and (u0, v0) specify a modulation wave vector which has spatial
frequency sqrt(u0² + v0²). Because these wavelets are complex-valued, it is
possible to use the real and imaginary parts of their convolution with an
image I(x,y) (in this case the iris pattern) to extract a description of the
image in terms of amplitude and phase.
The amplitude modulation function is defined as follows:

And the phase modulation function is defined as:

As said before, the phase angle can be quantized to construct the iris code.
This quantization is illustrated in Figure 3.8.

Figure 3.8 Pattern Encoding by Phase Modulation

By demodulating the iris pattern using the 2-D Gabor wavelet, the pattern is
encoded into a 256-byte (2048-bit) iris code. Each resulting phasor angle is
quantized to one of four quadrants, thereby setting two bits of information.
This operation is repeated for each local element across the iris, with many
wavelet sizes, frequencies, and orientations, to extract the 2048 bits.
The Daugman system makes use of polar coordinates for normalization;
therefore the Gabor filter is given in polar form as:

where α and β are the same as before, and ω specifies the center frequency
of the filter [36].
As a result, the iris code can be constructed by demodulating the iris pattern
using complex-valued 2-D Gabor wavelets to extract the structure of the iris
as a sequence of phasors whose phase angles are mapped, or quantized, into
the bits that make up the iris code.
The angle quantization is further described by the following conditional
integrals, where the iris image pixel data is given in the dimensionless
pseudo-polar coordinate system (r, θ) [35].
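A minimal sketch of this phase quantization in MATLAB is given below, using the Image Processing Toolbox gabor/imgaborfilt functions on a normalized iris strip polarIris (for example the output of the unwrapping sketch above); the wavelength and orientation are illustrative, and it is assumed that the returned phase lies in (-pi, pi].

g = gabor(8, 0);                                  % wavelength 8 px, orientation 0 deg
[mag, ph] = imgaborfilt(polarIris, g);            % Gabor response as magnitude/phase
% quantize each phase angle to one of four quadrants -> two bits per location
bitIm = ph >= 0;                                  % sign of the imaginary part
bitRe = (ph >= -pi/2) & (ph < pi/2);              % sign of the real part
irisCode = [bitIm(:)'; bitRe(:)'];
irisCode = logical(irisCode(:)');                 % interleaved binary iris code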

2.4.2 Log-Gabor Filter


One disadvantage of Gabor filters is that they will have a non zero DC
component whenever the bandwidth is greater than one octave. One way
to overcome this issue and get a zero DC component for any bandwidth is
to use a Gabor filter which is Gaussian on a logarithmic scale. This type of
filter is called a log-Gabor filter, which has the following frequency
response:

G(f) = exp( -(log(f/f0))² / (2 (log(σ/f0))²) )

where f0 represents the centre frequency and σ gives the bandwidth of the
filter [37].
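As a small illustration, the log-Gabor frequency response above can be evaluated numerically in MATLAB; the centre frequency and the sigma/f0 ratio below are assumed values.

N = 256;                               % number of frequency samples
f = (0:N-1) / N;                       % normalized frequencies
f0 = 1/12;                             % centre frequency (assumed)
sigmaOnF = 0.5;                        % sigma/f0 ratio, sets the bandwidth (assumed)
G = exp( -(log(f/f0)).^2 ./ (2*(log(sigmaOnF))^2) );
G(1) = 0;                              % zero DC component (response undefined at f = 0)
plot(f, G); xlabel('normalized frequency'); ylabel('response');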

2.4.3 Zero Crossings of the 1 D Wavelet


Another approach for encoding iris pattern data is to make use of the 1-D
wavelets suggested by Boles and Boashash. The mother wavelet is defined
as the second derivative of a smoothing function.

The zero crossings of dyadic scales of these filters are then used to encode
features [33].

2.4.4 Haar Wavelet


Both the Gabor wavelet and the Haar wavelet are considered mother
wavelets; however, the encoding or quantization procedure is different when
using the Haar wavelet. First, from multi-dimensional filtering, a feature
vector with 87 dimensions is computed, where each dimension has a real
value ranging from -1 to +1.
It is important to note that the feature vector needs to be coded into a binary
vector, since comparing two binary codes is easier than comparing two
real-valued codes. Therefore the feature vector is sign-quantized: any positive
value is represented as 1 and any negative value as 0. As a result, an iris
code, or template, of only 87 bits is obtained [38].

Lim et al. show, when comparing the Gabor wavelet and the Haar wavelet,
that the recognition rate of the Haar wavelet transform is slightly better than
that of the Gabor transform, by 0.9%.

2.5 Matching Algorithms


After generating the iris code of the image, I now need to compare this
template with stored templates and see whether a match occurs. I must note,
however, that slight errors arise while processing the image, from image
acquisition through localization to the generation of the binary code;
therefore a threshold is needed.
2.5.1 Hamming Distance
The Hamming distance approach is a matching metric employed by
Daugman. When comparing two bit patterns, Hamming distance
represents the number of bits that are different in the two patterns. In other
words, Hamming distance is the number of items that do not identically
agree when comparing two ordered list of items.
Therefore, using the Hamming distance, one can decide whether two
patterns were generated from the same iris or from different ones. The
Hamming distance is defined as follows:

HD = (1/N) * sum over j = 1..N of (Xj XOR Yj)

where X and Y are the two bit patterns being compared and N = 2048 [35].
So basically, whenever a bit in pattern X differs from the corresponding bit in
pattern Y, the exclusive-or gives a result of 1, which is accumulated over all the
44
bits in the two patterns. Finally the result is divided by N which is the total
number of bits constituting the iris code.
Ideally, the Hamming distance between two iris codes generated from the
same iris pattern should be zero; however, this will not happen in practice,
because normalization is not perfect and some noise always goes undetected.
In conclusion, the larger the Hamming distance (the closer to 1), the more
different the two patterns are; the closer this distance is to 0, the more
probable it is that the two patterns are identical. Note that bit patterns
produced by different people are independent, because a person's iris region
contains features with a high degree of freedom, whereas two iris codes
produced by the same iris will be highly correlated. So by properly choosing
the threshold upon which the matching decision is made, one can obtain good
iris recognition results with very low error probability.
So my main interest here reduces to properly choosing the threshold for the
Hamming distance matching metric. The Hamming distance follows a
binomial distribution. After 2.3 million comparisons, it has been shown that
the Hamming distance for iris codes corresponding to different iris patterns
has a mean value of 0.459 and a standard deviation of 0.0197. On the other
hand, the mean Hamming distance for two iris codes corresponding to the
same iris pattern, and therefore the same person's eye, is 0.11, and the
standard deviation is found to be 0.065 (refer to Figure 3.9). Note that the
above results are worst-case results, since they were obtained for a decision
environment under unfavorable conditions, using images acquired at
different distances and with different optical platforms [3].

Figure 3.9 The Decision Environment for Iris Recognition Under Relatively
Unfavorable conditions

Better results were obtained for a decision environment under very
favorable conditions, always using the same camera, distance, and lighting
[3].

These results can be seen in Figure 3.10.

Figure 3.10 The Decision Environment for Iris Recognition Under Very
Favorable Conditions

As emphasized in the normalization/unwrapping part, Daugman's rubber
sheet model does not take into account rotational inconsistencies. In order to
overcome this problem, when calculating the Hamming distance of two
templates, one template is shifted left and right bit-wise (in separate trials)
and a number of Hamming distance values are calculated from successive
shifts. This bit-wise shifting in the horizontal direction corresponds to a
rotation of the original iris region by an angle that can be determined from
the angular resolution used; if an angular resolution of 90 is used, each shift
corresponds to a rotation of 4 degrees in the iris region. This method was
suggested by Daugman [32] and corrects for misalignments in the normalized
iris pattern caused by rotational differences during imaging. From the
calculated Hamming distance values, only the lowest is taken, since this
corresponds to the best match between two templates.
Figure 3.11: Daugman's Model Taking Into Account the Rotational
Inconsistencies
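The matching step described above can be sketched in a few lines of MATLAB. The snippet assumes two logical iris-code matrices codeA and codeB of equal size (rows along the radius, columns along the angle); the maximum shift and the decision threshold are illustrative values, and the noise masks used in a full implementation are omitted.

maxShift = 8;                           % assumed maximum angular shift (columns)
bestHD = 1;
for s = -maxShift:maxShift
    shifted = circshift(codeB, [0 s]);  % rotate one template by s angular positions
    hd = sum(xor(codeA(:), shifted(:))) / numel(codeA);   % Hamming distance
    bestHD = min(bestHD, hd);           % keep the best (lowest) distance
end
isMatch = bestHD < 0.32;                % illustrative decision threshold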

2.6 Previous Work

In 1991, Johnson reported actually realizing a personal identification system
based on iris recognition. Subsequently, a prototype iris recognition system
was documented by Daugman in 1993 [3].

Wildes described a system for personal verification based on automatic iris


recognition in 1996 [11].

In 1998, Boles proposed an algorithm for iris feature extraction using zero-
crossing representation of 1-D wavelet transform [33].

All these algorithms are based on gray image, and color information was
not used in them. The main reason is that a gray iris image can provide
enough information to identify different individuals.

It seems that the French criminologist Alphonse Bertillon was the first to
propose the use of the iris pattern (its color) as a basis for personal
identification.

Leonard Flom and Aran Safir also suggested using the iris as the basis for a
biometric in 1981.

In 1991, after collaborating with Flom and Safir for four years, John Daugman
[3] developed and introduced the application of the iris as a biometric
characteristic for individual identification. He used 2-D Gabor filters and
phase coding to obtain a 2048-bit binary feature code and tested his algorithm
successfully on many images. After his work, various structures for iris
recognition were suggested by others.

Wildes used Laplacian pyramids and 4-level resolutions. His algorithm relies
on image registration and matching, which requires many computations.

Boles' prototype [33] operates by building a one-dimensional representation
of the gray-level profiles of the iris. He used zero-crossings of the 1-D wavelet
transform of the resulting representation.

Using a family of Gabor Filters was studied by Ma, Wang and Tang [43] in
some papers.

Tisse et al. constructed the analytic image (a combination of the original


image and its Hilbert transform) to demodulate the iris texture.

Woo Nam et al. [14] exploited scale-space filtering to extract, from an iris
image, unique features that use the direction of concavity of the image.

Lim et al. [38] used the 2-D Haar wavelet and quantized the 4th-level
high-frequency information to form an 87-bit-long feature vector, and applied
an LVQ neural network for classification. A modified Haralick's co-occurrence
method with a multilayer perceptron has also been introduced for extraction
and classification of irises.

MATLAB IMPLEMENTATION OF IRIS RECOGNITION

%PUPILFINDER.M

function [cx,cy,rx,ry]=pupilfinder(F)
% USE: [cx,cy,rx,ry]=pupilfinder(imagename)

% Arguments: imagename: is the input image of an human iris
% Purpose:
% perform image segmentation and finds the center and two
% (vertical and horizontal) radius of the iris pupil
% Example: [cx,cy,rx,ry]=pupilfinder('image.bmp')
% cx and cy is the position of the center of the pupil
% rx and ry is the horizontal radius and vertical radius of the pupil
% accept either a file name or an image matrix as input
if ischar(F)
    G=imread(F);
else
    G=F;
end
bw_70=(G>70);
bw_labeled=bwlabel(~bw_70,8);
mr=max(bw_labeled);
regions=max(mr);
for i=1:regions
[r,c]=find(bw_labeled==i);
if size(r,1) < 2500
region_size=size(r,1);
for j=1:size(c,1)
bw_labeled(r(j),c(j))=0;
end;
end;
end;
bw_pupil=bwlabel(bw_labeled,8);
%get centroid of the pupil
stats=regionprops(bw_pupil,'centroid');
ctx=stats.Centroid(1);
cty=stats.Centroid(2);
hor_center = bw_pupil(round(cty),:);
ver_center = bw_pupil(:,round(ctx));
%from the horizontal center line, get only the left half
left=hor_center(1:round(ctx));
%then flip horizontally
left=fliplr(left);
%get the position of the first pixel with value 0 (out of pupil bounds)
left_out=min(find(left==0));
%finally calculate the left pupil edge position
left_x = round(ctx-left_out);
%from the horizontal center line, get only the right half
right=hor_center(round(ctx):size(G,2));
%get the position of the first pixel with value 0 (out of pupil bounds)
right_out=min(find(right==0));
%finally calculate the left pupil edge position
right_x = round(ctx+right_out);
%adjust horizontal center and radius
rx = round((right_x-left_x)/2);
cx = left_x+rx;
%from the vertical center line, get only the upper half
top=ver_center(1:round(cty));
%then flip horizontally
top=flipud(top);
%get the position of the first pixel with value 0 (out of pupil bounds)
top_out=min(find(top==0));
%finally calculate the left pupil edge position
top_y = round(cty-top_out);
%from the vertical center line, get only the upper half
bot=ver_center(round(cty):size(G,1));
%get the position of the first pixel with value 0 (out of pupil bounds)
bot_out=min(find(bot==0));
%finally calculate the left pupil edge position
bot_y = round(cty+bot_out);
%adjust horizontal center and radius
ry = round((bot_y-top_y)/2);
cy = top_y+ry;

%IRISFINDER.M
function [right_x,right_y,left_x,left_y]=irisfinder(imagename)
% USE: [rx,ry,lx,ly]=irisfinder(imagename)
% Arguments: imagename: is the input image of an human iris
% Purpose:
% perform image segmentation and finds the edgepoints of
% the iris at the horizontal line that crosses the center
% of the pupil
% Example: [rx,ry,lx,ly]=irisfinder('image.bmp')
% rx and ry is the edge point of the iris on the right side
% lx and ly is the edge point of the iris on the left side
%read bitmap

F=imread(imagename);
%find pupil center and radius
[cx,cy,rx,ry]=pupilfinder(F);
% Apply linear contrast filter
D=double(F);
G=uint8(D*1.4-20);
%obtain the horizontal line that passes through the iris center
l=G(cy,:);
margin = 10;
% Right side of the pupil
R=l(cx+rx+margin:size(l,2));
[right_x,avgs]=findirisedge(R);

right_x=cx+rx+margin+right_x;
right_y=cy;
% Left side of the pupil
L=l(1:cx-rx-margin);
L=fliplr(L);
[left_x,avgs]=findirisedge(L);
left_x=cx-rx-margin-left_x; left_y=cy;

%CIRCLE

function H=circle(center,radius,NOP,style)
%---------------------------------------------------------------------------------------------
% H=CIRCLE(CENTER,RADIUS,NOP,STYLE)
% This routine draws a circle with center defined as
% a vector CENTER, radius as a scalar RADIUS. NOP is
% the number of points on the circle. As to STYLE,
% use it the same way as you use the routine PLOT.

% Since the handle of the object is returned, you
% use routine SET to get the best result.
%
% Usage Examples,
%
% circle([1,3],3,1000,':');
% circle([2,4],2,1000,'--');
%---------------------------------------------------------------------------------------------

if (nargin <3),
error('Please see help for INPUT DATA.');
elseif (nargin==3)
style='b-';
end;
THETA=linspace(0,2*pi,NOP);
RHO=ones(1,NOP)*radius;
[X,Y] = pol2cart(THETA,RHO);
X=X+center(1);
Y=Y+center(2);
H=plot(X,Y,style);
axis square;

%PATTERN

clc;
clear;
%set base directory of irisBasis directory
irisDir = 'E:\Project-M.Tech\Final Code\IrisBasisAll';
destDir = 'E:\Project-M.Tech\Final Code\IrisBasisPattern';
clc;
T=[];
irisFiles = dir(irisDir);
for i=1:size(irisFiles,1)
if not(strcmp(irisFiles(i).name,'.')|strcmp(irisFiles(i).name,'..'))
irisFileName = [irisDir, '\', irisFiles(i).name];
F=imread(irisFileName);
G=im2double(F);
P=[];
for j=1:size(G,1)   % concatenate the image rows into one feature vector
P = [P G(j,:)];
end
P=[P str2num(irisFiles(i).name(1:3))];
T=[T;P];
irisFileName
[size(P) size(T)]
end
end
save([destDir, '\abc.mat'], 'T');
%BOTHCIRCLE.M

fname='cc.bmp';
F=imread(fname);
imshow(F);
colormap('gray');
imagesc(F);
hold;
[cx,cy,rx,ry]=pupilfinder(fname);
%plot horizontal line
x=[cx-rx*2 cx+rx*2];
a=cx;
b=cy;
y=[cy cy];
plot(x,y,'y');
%plot vertical line
x=[cx cx];
y=[cy-ry*2 cy+ry*2];
plot(x,y,'y');

circle([cx cy], rx, 1000, '-');


%hold;
[rx,ry,lx,ly]=irisfinder(fname)
%plot horizontal line
x=[rx-lx*2 rx+lx*2];
y=[ry ry];
plot(x,y,'y');
%plot vertical line
x=[rx rx];
y=[ry-ly*2 ry+ly*2];
plot(x,y,'y');
circle([a b], lx+10, 1000, '-');
%[IB]=irisbasis('cc.bmp',100,100,1);
%imshow(uint8(IB));
%p=uint8(IB);
%imshow(F);
%i=imcrop;
%imshow(i);
%t=i;
%imshow(t);
% load
img = im2double(imread('cc.bmp'));
% black-white image obtained by thresholding how far each pixel is from "white"
bw = sum((1-img).^2, 3) > .5;
% show bw image
figure; imshow(bw); title('bw image');

%get bounding box (first row, first column, number rows, number columns)
[row, col] = find(bw);
bounding_box = [min(row), min(col), max(col)-min(col)+1, max(row)-min(row)+1];
%display with rectangle
rect = bounding_box([2,1,3,4]);   % convert [row col w h] to [x y w h]
% rectangle wants x,y,w,h we have rows, columns, ... need to convert
figure; imshow(img); hold on;
rectangle('Position', rect);
I2 = imcrop(img,rect);
figure,imshow(I2);
%figure,imshow(rect),title('xyz');
%rect1 = bounding_box([1,1,1,1]);
I3=imcrop(I2,[20,60,60,50]);
figure,imshow(I3);

%PATTERN MATCHING
clc;
clear;
%set base directory of irisBasis directory
irisDir = 'E:\Project-M.Tech\Final Code\IrisBasisAll';
clc;
T=[];
irisFiles = dir(irisDir);
for i=1:size(irisFiles,1)
if not(strcmp(irisFiles(i).name,'.')|strcmp(irisFiles(i).name,'..'))
irisFileName = [irisDir, '\', irisFiles(i).name];
F=imread(irisFileName);
G=im2double(F);
%perform singular value decomposition
xpattern=svd(G);
P=[xpattern' str2num(irisFiles(i).name(1:3))];
T=[T;P];
irisFileName
[size(P) size(T)]
end
end
save('irisBasisSDV','T','-ASCII');

load('irisBasisSDV','-ascii');
irisBasisSDV
%get only first 3 dimensions'
nclasses=1;
size(irisBasisSDV)
TS=[irisBasisSDV(1,1:3) irisBasisSDV(1,41)];
%TS=[T(1:nclasses*7,1:3) T(1:nclasses*7,41)];
%display TS (full dataset)
TS
%display scatter points
scatter3(TS(:,1),TS(:,2),TS(:,3),8,TS(:,4),'filled');
%pause;
%form training set with the first 5 instances of each class
Training=[];
for i=1:nclasses
Training=[Training;TS(1,:)];
end;
scatter3(Training(:,1),Training(:,2),Training(:,3),8,Training(:,4),'filled');
%form the pattern matrix (patterns in columns - no class information)
P=Training';
P=P(1:3,:);   % use the first three feature dimensions, as in the test set
%get class column
targetTr=Training(:,4);
%convert sequential numbered classes to power of two:
% class 1 = 1
% class 2 = 2
% class 3 = 4
targetDec=2.^(targetTr-1);
%convert decimal to binary
% class 1 = 001
% class 2 = 010
% class 3 = 100
targetBin=dec2bin(targetDec);
%separate in columns
targetClass=[];
for i=1:nclasses
targetClass=[targetClass double(str2num(targetBin(:,i)))];
end;
%transpose
T=targetClass';
%----------------------------------------------------------- training
S1 = 300; % Number of neurons in the first hidden layer - changes according to test

S2 = nclasses; % Number of neurons in the output layer


net=newff(minmax(P),[S1 S2],{'logsig' 'logsig'}, 'traingda');
%setup network parameters
net.trainFcn = 'traingda'; % Training function type
net.trainParam.lr = 0.1; % Learning rate
net.trainParam.lr_inc = 1.05; % Increment of a learning rate
net.trainParam.show = 300; % Frequency of progress displays (in epochs).
net.trainParam.epochs = 50000; % Maximum number of epochs to train.
net.trainParam.goal = 0.0000005; % Mean-squared error goal.
net.trainParam.min_grad=0.000000001;
net=init(net);
net=train(net,P,T);
inter=sim(net,P);
%find out winner class
[Y,I]=max(inter);
%pause;
%----------------------------------------------------------- simulation
%form testing set with the remaining 2 instances of each class
Test=[];
for i=1:nclasses
Test=[Test;TS((i*7)-1:(i*7),:)];
end;
scatter3(Test(:,1),Test(:,2),Test(:,3),8,Test(:,4),'filled');
%form the pattern matrix (patterns in columns - no class information)
P=Test';
P=P(1:3,:);
%get class column
targetTs=Test(:,4);
%convert sequential numbered classes to power of two:
% class 1 = 1
% class 2 = 2
% class 3 = 4
targetDec=2.^(targetTs-1);
%convert decimal to binary
% class 1 = 001
% class 2 = 010
% class 3 = 100
targetBin=dec2bin(targetDec);
%separate in columns
targetClass=[];
for i=1:nclasses
targetClass=[targetClass double(str2num(targetBin(:,i)))];
end;
%transpose
T=targetClass';
%perform network simulation
a=sim(net,P);
%find out winner class
[Y,I]=max(a);
I=I';
c=nclasses-I;
res=log2(2.^c)+1;
match=[targetTs res targetTs-res];
%correctly classified patterns in percentage - 100%=perfect match
class=(size(find(match(:,3)==0),1)/size(targetTs,1))*100

OUTPUT IMAGES

%CANNY.M
f=imread('ccc.bmp');
subplot(1,2,1);
imshow(f); title('original image');
BW2=edge(f,'canny');
subplot(1,2,2);
imshow(BW2); title('canny edge detector image');

FUTURE SCOPE

Iris detection is the initial step in the analysis of facial images in an image
processing environment. It can further be used in detecting faces and tracking
eyes in many situations, such as gaze-direction estimation, driver alertness
systems, face recognition and facial expression analysis. Face recognition is a
field of biometrics, together with fingerprint recognition, iris recognition,
speech recognition and so on.

Face detection has gained increased interest in recent years. Many
applications use iris detection and localization, and they are becoming an
integral part of our lives.

For example, iris recognition systems are being tested and installed at
airports to provide a new level of security, and in human-computer interfaces.

However, there are still two main restrictions on using the proposed
algorithm:

1) The lighting conditions must be normal; in other words, the face to be
detected cannot be too bright or too dark. In addition, the proposed
algorithm does not allow large shadows on the face, because they might
interfere with the geometric properties of the facial components.
2) The facial components must appear in the images as clearly as
possible.

REFERENCES

1. www.google.com
2. www.wikipedia.com
3. Libor Masek, Peter Kovesi, "Iris Recognition", http://www.csse.uwa.edu.au/~pk/studentprojects/libor/index.html
4. Chinese Academy of Sciences (CASIA), Center for Biometrics and Security Research, www.sinobiometrics.co
5. John Daugman, http://www.cl.cam.ac.uk/users/jgd1000/; International Biometric Group, "Independent Testing of Iris Recognition Technology".
6. http://faculty.qu.edu.qa/qidwai/DIP/downloads.html
7. findbiometrics.com
8. eye-controls.com
9. Daugman, J., "Complete Discrete 2-D Gabor Transforms by Neural Networks for Image Analysis and Compression", IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 36, No. 7, July 1988, pp. 1169-1179.
10. Daugman, J., "How Iris Recognition Works", available at http://www.ncits.org/tc_home/m1htm/docs/m1020044.pdf
11. Daugman, J., "High Confidence Visual Recognition of Persons by a Test of Statistical Independence", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 11, November 1993, pp. 1148-1161.
12. Gonzalez, R.C., Woods, R.E., Digital Image Processing, 2nd ed., Prentice Hall, 2002.
13. Lim, S., Lee, K., Byeon, O., Kim, T., "Efficient Iris Recognition through Improvement of Feature Vector and Classifier", ETRI Journal, Vol. 23, No. 2, June 2001, pp. 61-70.
14. Wildes, R.P., "Iris Recognition: An Emerging Biometric Technology", Proceedings of the IEEE, Vol. 85, No. 9, September 1997, pp. 1348-1363.
