A PROJECT REPORT
Submitted by
BONAFIDE CERTIFICATE
Certified that this project report "IRIS RECOGNITION SYSTEM USING MATLAB"
is the bonafide work of KAMAL MITRA (Roll No: 12601012024), who carried out
the project work under my supervision.
__________________________
EXAMINER
ACKNOWLEDGEMENT
The project Iris Recognition System Using Matlab would not have been
possible without the constant guidance of our guide, Prof. Subhajit Rakhshit,
Computer Application Centre. His knowledge and deep study helped us improve
our own knowledge a great deal. We are immensely thankful to him for his
valuable ideas on improving the project.
Thanks to Prof. Souvik Basu, our Departmental Coordinator, for all the help he
provided. His valuable insight helped us better the project.
Finally, thanks to our Head of the Department, Dr. Siuli Roy, for her valuable
thoughts and the many useful concepts that helped us reach our goal. Without
her help it would have been very tough to achieve success.
We would also like to acknowledge the use of MATLAB, developed by
MathWorks. This tool is well documented, and it was a great benefit to be
able to use it. Text from the documentation of this tool has been used in
this report as well.
_________________
KAMAL MITRA
Roll No: 12601012024
CONTENTS
Zero Crossings of the 1-D Wavelet
Haar Wavelet
2.5 Matching Algorithms
Hamming Distance
Matlab Code
Future Scope
References
Chapter 1
INTRODUCTION
1.1 Introduction
The iris is a protected internal organ of the eye, located behind the cornea
and the aqueous humor, but in front of the lens. A visible property of the
iris is the random morphogenesis of its minutiae. The phenotypic
expression of even two irises with the same genotype (as in
identical twins, or the pair possessed by one individual) has uncorrelated
minutiae. The iris texture has no genetic penetrance in its expression and
is chaotic. In these respects the uniqueness of every iris parallels the
uniqueness of every handwritten signature, but the iris enjoys further
practical advantages over the handwritten signature.
In the year 1994, John Daugman patented his "Biometric personal
identification system based on iris analysis" [41]. The image analysis
algorithm finds the iris in a live video image of a person's face. It isolates
the eye, defines a circular pupillary boundary between the iris and the
pupil portions of the image, defines another circular boundary
between the iris and the sclera portions of the image, and then
defines a plurality of annular bands within the iris image. It then computes
the iris code and, by means of the Hamming distance, compares the code with
stored iris codes.
In the year 1998, Richard Wildes patented "Automated non-invasive
recognition system and method" [11]. This system uses two cameras. The
first one has low resolution and directs the second, high-resolution one.
The image obtained from the second camera is reduced to the
iris by means of the pupillary boundary, the limbic boundary and the eyelid
boundary. In the next step, the reduced image is compared with the stored
images.
In the year 1999, Mitsuji Matsushita patented "Iris identification system and
iris identification method" [42]. This iris identification system is used to
identify customers. At first, the camera locates the head of a customer,
finds the position of the eyes, zooms in and photographs the irises. The
computer then extracts only the portion of the iris data that is significant
for identifying the customer.
Using the human IRIS for identification has some advantages, but also
some disadvantages.
Advantages:
o IRIS is a highly protected, internal organ of the eye
o IRIS is visible from a distance
o IRIS patterns possess a high degree of randomness
o changes in the size of the pupil confirm natural physiology
o limited genetic penetrance
o IRIS is stable throughout life.
Disadvantages:
o small target (1cm) to acquire from a distance (1m)
o moving target
o IRIS is located behind a curved, wet and reflecting surface
o IRIS is obstructed by eyelashes, eyelids and reflections
o Its deformations are non-elastic as the pupil changes size.
1.3 Iris Recognition Process
The above figure summarizes the steps to be followed when doing iris
recognition.
Step 1: Image acquisition, the first phase, is one of the major challenges of
automated iris recognition since we need to capture a high-quality image of
the iris while remaining noninvasive to the human operator.
Step 2: Iris localization takes place to detect the edge of the iris as well as
that of the pupil; thus extracting the iris region.
Step 3: Normalization transforms the iris region so that it has
fixed dimensions, thereby removing the dimensional inconsistencies
between eye images caused by the stretching of the iris as the pupil
dilates under varying levels of illumination.
Step 4: The normalized iris region is unwrapped into a rectangular region.
Step 5: Finally, it is time to extract the most discriminating feature in the iris
pattern so that a comparison between templates can be done. Therefore, the
obtained iris region is encoded using wavelets to construct the iris code.
As a result, a decision can be made in the matching step.
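The five steps above can be sketched end-to-end. The following is a hypothetical Python sketch (the report's own implementation, in MATLAB, appears later in the Matlab Code section); every stage is a toy stand-in, so the point is the data flow, not the real image processing.

```python
# Schematic iris-recognition pipeline; all stages are toy stand-ins.
import numpy as np

def acquire():                       # Step 1: image acquisition (synthetic here)
    return np.random.default_rng(0).integers(0, 256, (64, 64))

def localize(eye):                   # Step 2: pupil/iris boundaries (fixed here)
    return dict(cx=32, cy=32, r_pupil=8, r_iris=24)

def normalize(eye, b, shape=(8, 32)):  # Steps 3-4: unwrap annulus to a rectangle
    rows, cols = shape
    out = np.zeros(shape)
    for i, r in enumerate(np.linspace(b["r_pupil"], b["r_iris"], rows)):
        for j, t in enumerate(np.linspace(0, 2*np.pi, cols, endpoint=False)):
            out[i, j] = eye[int(b["cy"] + r*np.sin(t)), int(b["cx"] + r*np.cos(t))]
    return out

def encode(strip):                   # Step 5: toy 1-bit-per-sample code
    return (strip > strip.mean()).astype(np.uint8).ravel()

code = encode(normalize(acquire(), localize(acquire())))
print(len(code))                     # 256
```

The real matching step would then compare `code` against stored codes, e.g. with the Hamming distance discussed later.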
Chapter 2
2.1 Image Acquisition
Iris recognition has been an active research area in recent years, due to its
high accuracy and the encouragement of both government and private
entities to replace traditional security systems, which suffer from a
noticeable margin of error. However, early research was obstructed by the
lack of iris images. Now, several free databases exist on the internet for
testing purposes. A well-known database is the CASIA Iris Image Database
(version 1.0) [5], provided by the Chinese Academy of Sciences. The CASIA
Iris Image Database includes 756 iris images from 108 eyes, collected over
two sessions. The images, taken in almost perfect imaging conditions, are
noise-free. More realistic imaging conditions were taken into consideration
in the UBIRIS Database [6]. The UBIRIS Database includes 1877 images from
241 persons, collected in two sessions. The images collected in the first
photography session were low-noise images. On the other hand, images
collected in the second session were captured under natural luminosity,
thus allowing reflections, different contrast levels, and luminosity
and focus problems. Such images might be a good model for realistic
situations with minimal collaboration from the subjects.
Other databases exist, such as the LEI [7] and the UPOL [8]. The UPOL
Database includes images from the internal part of the eye, with the
localization work almost done. As for the LEI Database, it includes images
with noise; however, it is a small database of 120 grayscale images.
2.2 Iris Localization or Segmentation
Iris recognition is based on the fact that the human iris contains unique
features that completely distinguish a person; the actual information I am
looking for is found within the iris patterns. Thus, it is logical that the first
step in implementing such a biometric system is isolating the iris region
from the other parts of the image, which are of no relevance. The iris region
is approximated by a ring, defined by the iris/sclera boundary and the
iris/pupil boundary. Thus, in this step, I should be able to detect these
boundaries and isolate the part of the image within.
Another important issue to take care of in this step is removing any
corruption of the iris region; eyelids and eyelashes sometimes occlude
parts of the iris region, thus hiding important information and at the same
time producing errors (the occluded areas would be falsely represented as
iris pattern data). Another distorting factor is specular reflections
occurring within the iris region. In iris localization, a technique is
required to locate and isolate the iris region and to exclude the
corruptors as well.
In 1993, John Daugman [9] proposed one of the most significant
approaches in iris recognition, which became the basis of many functioning
systems today. Daugman applies an integro-differential operator to isolate
the boundaries. Since then, several approaches have been proposed with small
differences [10]. In 1997, however, Richard Wildes [11] proposed another
approach which attracted high interest in the field and became one of the
most widely used. Wildes' approach is divided into two steps:
he converts the image into a binary edge map using a gradient-based edge
detector, then applies the Hough transform to detect the boundaries. Many
papers [12], [13], [14], [15], [16], [17] have been proposed which apply slight
changes to Wildes' approach. All of these approaches are based on first
finding a binary edge map using different operators, and then applying
some function to detect the boundaries. Since this is the case, in what
follows I am going to present the different feature detectors used to get a
binary edge map, and then I shall present the different boundary detection
approaches. After that I am going to present an overview of different
approaches [3], [11], and [18] which are, in some sense, combinations of the
Daugman and Wildes approaches.
The feature detectors presented here are
the Gradient-based Roberts, Sobel and Prewitt edge detection operators,
the Laplacian-based LOG edge detection operator (a special case of the Zero-
Crossing operators), and the Canny edge detection operator, in addition to
the Compass operator.
1. Gradient-Based Operators
I have stated earlier that an edge is characterized by a noticeable change in
intensity. The fact that the gradient of an image points in the direction of
the most rapid change in intensity motivated the research on Gradient-based
operators. Gradient-based operators detect edges in an image by finding the
gradient's magnitude at each pixel and comparing it to some threshold to
determine whether it is an edge pixel or not [20]. An edge pixel is
characterized by two variables: the Edge strength, which is the magnitude
of the gradient at the given pixel, and the Edge direction, which is the angle
of the gradient at the given pixel [20].
The gradient of an image f is defined as:

∇f = [Gx, Gy] = [∂f/∂x, ∂f/∂y]
The figure shows the gradient at a given pixel in an image; note that its
direction is that of maximum change in intensity.
Roberts Operator
The Roberts operator provides a simple, quick approximation of the
gradient's magnitude; the operator performs a 2-D spatial gradient
measurement on the image [21]; the operator approximates the gradient for
a pixel P(i, j) as [22]:
G[f(i,j)] = |f(i,j)-f(i+1,j+1)|+|f(i+1,j)-f(i,j+1)|
A more accurate representation consists of a pair of 2x2 convolution
kernels or masks Gx and Gy [21], [22]; Gx and Gy are given as follows:

Gx = [ +1  0 ;  0 -1 ]    Gy = [ 0 +1 ; -1  0 ]

Applying the masks to the image and doing the required calculations, the
gradient can be computed as follows [22]:
G[f(i,j)] = |Gx|+|Gy|
The incentive behind the wide use of the Roberts operator is its fast
running time [21]. As for its main disadvantages [21], this operator is very
sensitive to noise, and fails to detect edges characterized by small intensity-
jumps.
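A small numeric check of the Roberts approximation above can be run on a tiny synthetic image with a single vertical intensity jump. This Python sketch is an illustration, not part of the report's MATLAB code; it applies G[f(i,j)] = |f(i,j)-f(i+1,j+1)| + |f(i+1,j)-f(i,j+1)| at every pixel.

```python
# Roberts operator applied to a 3x4 image with one vertical edge.
import numpy as np

def roberts(f):
    # Output is one row and one column smaller than the input.
    g = np.zeros((f.shape[0] - 1, f.shape[1] - 1))
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            g[i, j] = abs(f[i, j] - f[i+1, j+1]) + abs(f[i+1, j] - f[i, j+1])
    return g

f = np.array([[10, 10, 90, 90],
              [10, 10, 90, 90],
              [10, 10, 90, 90]])
print(roberts(f))
```

Only the column straddling the intensity jump produces a non-zero response; flat regions yield zero.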
Sobel Operator
The Sobel operator is similar in its functionality to the Roberts operator.
However, it uses larger convolution masks. The operator consists of two
3x3 masks Gx and Gy which are given by [22]:

Gx = [ -1 0 +1 ; -2 0 +2 ; -1 0 +1 ]    Gy = [ +1 +2 +1 ; 0 0 0 ; -1 -2 -1 ]

The gradient is still approximated by applying the masks to the image and
computing its magnitude as [21]:
G[f(i,j)] = |Gx|+|Gy|
The following prototype will make things more clear. Suppose that we
want to compute the gradient for the pixel Pi(i, j) given the input image.
The following is a part of the input image:
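The worked example the text refers to did not survive extraction, so the following is a hypothetical substitute in Python: the two Sobel masks applied to a single 3x3 neighborhood containing a vertical edge.

```python
# Sobel masks evaluated on one 3x3 neighborhood with a vertical edge.
import numpy as np

Gx_mask = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
Gy_mask = np.array([[ 1,  2,  1],
                    [ 0,  0,  0],
                    [-1, -2, -1]])

window = np.array([[10, 10, 90],
                   [10, 10, 90],
                   [10, 10, 90]])

Gx = np.sum(Gx_mask * window)       # horizontal gradient component
Gy = np.sum(Gy_mask * window)       # vertical gradient component
G = abs(Gx) + abs(Gy)               # G[f(i,j)] = |Gx| + |Gy|
print(Gx, Gy, G)                    # prints: 320 0 320
```

As expected, the vertical edge produces a large |Gx| and a zero Gy.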
Prewitt Operator
The Prewitt operator is very similar to that of Sobel. It consists of 3x3
convolution masks Gx and Gy given as:

Gx = [ -1 0 +1 ; -1 0 +1 ; -1 0 +1 ]    Gy = [ +1 +1 +1 ; 0 0 0 ; -1 -1 -1 ]
The gradient is calculated in the same way as before. The only difference is
that the Prewitt operator doesn't give the pixels that are close to the
centers of the masks ([i,j]) any extra emphasis, whereas the Sobel operator
does give them higher weight (2x) [22].
LOG Operator
The LOG operator, or Laplacian of Gaussian, is a special kind of Zero-
Crossing based operator, where filtering or smoothing is done using a
Gaussian filter and enhancement is done using its second derivative [22];
filtering and enhancement can be done directly by convolving the image
with a single linear filter, the Laplacian of the Gaussian filter. After that,
edge detection is performed by looking for zero crossings; Figure 3.2
clarifies this.
Figure 3.2 First and Second Derivative of an Edge Illustrating the Zero-
Crossing
Notice that a change in the intensity of the image, which defines an edge,
results in a zero crossing when the second derivative of the function is
taken.
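The zero-crossing idea can be demonstrated in one dimension: smooth a step edge, take the discrete second derivative, and look for a sign change. A Python sketch (illustrative only; a crude 3-tap smoothing filter stands in for the Gaussian):

```python
# Locate the zero crossing of the second derivative of a smoothed step edge.
import numpy as np

signal = np.array([0., 0., 0., 0., 1., 1., 1., 1.])   # step edge at index 4
kernel = np.array([0.25, 0.5, 0.25])                   # crude smoothing filter
smoothed = np.convolve(signal, kernel, mode="same")
second = np.diff(smoothed, n=2)                        # discrete 2nd derivative
# a zero crossing is where consecutive second-derivative values change sign
crossings = [i for i in range(len(second) - 1) if second[i] * second[i+1] < 0]
print(crossings)                                       # [2]
```

The single sign change in the second derivative sits at the smoothed edge, which is exactly what the LOG operator exploits in two dimensions.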
3. Canny Operator
The Canny edge detector performs the following algorithm [22]:
First, smooth the image I(i,j) by convolving it with a Gaussian smoothing
filter G(i,j,σ), where σ, the spread of the Gaussian, controls the degree of
smoothing. The result will be the smoothed data S[i,j].
Next, compute the gradient magnitude and orientation using finite-
difference approximations for the partial derivatives; the gradient is
computed using two 2x2 convolution masks (similar to the Roberts
approximation).
Then, apply non-maxima suppression to the gradient magnitude; in this
approach, an edge point is a pixel whose strength is locally maximum in
the direction of the gradient. The result of this step is data which is almost
zero everywhere except at local maxima points.
Finally, use the double thresholding algorithm to detect and link edges; the
above computed data contains many false edge fragments caused by noise
and fine texture. These must be removed. Thresholding might be a good
solution for that.
However, an incorrect choice of the threshold might easily cause detection of
false edges or exclusion of some. To overcome this problem, two threshold
values are applied to the data (one is double the other). With these
threshold values, two thresholded edge images are produced. One of the
images has gaps in the contours but contains fewer false edges. The
algorithm works on bridging these gaps by checking the second image at
the locations of the 8-neighbours for edges that can be linked to the
contour.
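The double-thresholding (hysteresis) step can be sketched in one dimension: responses above the high threshold are kept, and responses above the low threshold survive only if they connect to a kept pixel. A hypothetical Python illustration, with the high threshold double the low one as the text describes:

```python
# 1-D hysteresis thresholding: grow strong edges through weak neighbours.
import numpy as np

def hysteresis(strength, low, high):
    strong = strength >= high
    weak = strength >= low
    keep = strong.copy()
    changed = True
    while changed:                       # propagate until no pixel changes
        changed = False
        for i in range(len(strength)):
            if weak[i] and not keep[i] and (
                (i > 0 and keep[i-1]) or (i < len(strength)-1 and keep[i+1])):
                keep[i] = True
                changed = True
    return keep

g = np.array([0.1, 0.4, 0.9, 0.45, 0.1, 0.45, 0.2])
print(hysteresis(g, low=0.4, high=0.8).astype(int))   # [0 1 1 1 0 0 0]
```

The weak response at index 5 is discarded because it does not connect to any strong edge, while the weak responses adjacent to the strong pixel are kept.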
The Compass operator, which detects both edges and
corners, is generalized to vector quantization, which results in a more
detailed description of the image.
The Compass operator divides an image window into two sides and
compares them to see if they are different. This operator, as opposed to the
Classical operators, allows multiple values, which are quantization of the
different colors, to exist on each side. Quantization of the colors is done by
assigning them to a variable number of values, each with a weight
depending on how many pixels have that color in the image. The Compass
operator uses the EMD (Earth Mover's Distance) to compute the distance
between the two sides. The maximum EMD distance, oriented in some
angle, is defined as the strength of the edge. The strength combined with
its orientation forms a quantity similar to a gradient. Thus, standard
techniques such as non-maximal suppression and thresholding can be
performed to extract edges. The Compass operator seemed to outperform
the Canny operator; results showed that the Compass operator succeeded
in detecting edges and corners where the Canny operator failed to do so;
Figure 3.3 shows two images subjected to the two operators.
Figure 3.3 Feature Detection Using Canny and Compass Operators
[24]
Calculate the intensity variation from a shift (u, v) of each pixel, in
each of the eight directions, as:

E(u,v) = Σ(x,y) w(x,y) · [I(x+u, y+v) − I(x,y)]^2

This measure is not rotationally invariant. It is also sensitive to noise,
which causes the detection of false interest points along edges and at
isolated pixels.

Where A = Ix^2 ⊗ w, B = Iy^2 ⊗ w, C = (Ix·Iy) ⊗ w, with Ix and Iy the
partial derivatives of the image and w the smoothing window; these
quantities form the matrix M = [A C; C B].
Construct the cornerness map by calculating the cornerness
measure C(x, y) for each pixel (x, y):

C(x,y) = det(M) − k·(trace(M))^2
det(M) = λ1·λ2 = AB − C^2
trace(M) = λ1 + λ2 = A + B
k = constant
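The cornerness measure above can be evaluated directly from A, B and C using det(M) = AB − C² and trace(M) = A + B. A small Python illustration (the value k = 0.04 is a common empirical choice, assumed here):

```python
# Cornerness measure C(x,y) = det(M) - k*(trace(M))^2 for sample A, B, C.
def cornerness(A, B, C, k=0.04):     # k: empirical constant (assumption)
    det_M = A * B - C**2
    trace_M = A + B
    return det_M - k * trace_M**2

# flat region: tiny gradients in both directions -> measure near zero
print(cornerness(0.01, 0.01, 0.0))
# edge: strong gradient in one direction only -> negative measure
print(cornerness(10.0, 0.01, 0.0))
# corner: strong gradients in both directions -> large positive measure
print(cornerness(10.0, 10.0, 0.0))
```

The sign of the measure is what classifies a pixel: positive for corners, negative for edges, near zero for flat regions.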
As with the Classical operators, the pixel is labeled crisply by thresholding
the fuzzy value obtained.
The simplest neural network edge detector was that proposed by Weller
[27], who used a training set of 20 examples of edge-situations in a 3x3
window to train a feedforward/back-propagation (FF/BP) neural network.
Another interesting approach was proposed by Bezdek et al. [28], [29].
The Bezdek approach combines the training of a FF/BP neural network, using
a set of 256 examples of edge-situations in a 3x3 window, with a labeling
scheme (edge-ness) based on fuzzy membership values scored by a
Sobel operator. The Bezdek operator seems to perform much better than
the classical Sobel operator, especially in detecting edges characterized by a
small intensity-jump. Figure 3.4 presents two feature-detected images using
the classical Sobel operator and the NN-trained one. The results are clearly
visible.
Figure 3.4: 316x500x24-bit color Kosh: Feature Detection Using Classical
Sobel Operator and the NN trained Sobel operator (respectively). [19]
On the other hand, the Bezdek operator seems to suffer from some deficiencies
[19]; the Bezdek approach can't be extended to larger window sizes, due to
highly-increasing training time, which might go up to tens of days for 5x5
windows or higher.
Cohen et al. [19] were able to find a new approach which solves this
problem and still gives excellent results, even outperforming the Bezdek
approach. Their strategy is based on training on binary prototypes (crisp
values); the fuzzy values are defuzzified using a threshold of 0.5. This
technique reduced the training of a 3x3 Sobel operator to about 1/15 of the
time required by the Bezdek approach. It also made the extension to higher
window sizes possible; the authors trained a Plessey operator (5x5
windows) and got excellent results.
After introducing the different feature detectors, I am going to introduce
the different approaches for boundary detection.
2.2.2.1 Integro-Differential Function
The integro-differential function finds, for an image I(x,y), the maximum of
the absolute value of the convolution of a smoothing function G with the
partial derivative, with respect to r, of the normalized contour integral of
the image along an arc ds of a circle C((xo,yo), r) [3]:

max(r,xo,yo) | Gσ(r) * (∂/∂r) ∮(r,xo,yo) I(x,y)/(2πr) ds |

where Gσ(r) is a smoothing function, such as a Gaussian of scale σ.
The Hough transform has a few deficiencies; it fails to detect some circles
while performing edge detection due to the fact that it depends on a
threshold value; this value might not be critically specified, thus resulting
in edge points being neglected. Another point which is worth noting is the
fact that the Hough transform is computationally exhaustive, leading to low
speed efficiency. Thus, it might not be suitable for real-time applications.
It is worth noting that the integro-differential operator doesn't suffer from
the thresholding problem, since it works on raw derivative information.
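The circle-detecting Hough transform discussed above can be sketched as a voting procedure: assuming a binary edge map is already available, each edge point votes for every center that could have produced it at a fixed radius, and the accumulator maximum gives the detected center. A hypothetical Python illustration:

```python
# Hough transform for circles of a known radius r: accumulate votes
# over candidate centers (a, b) and take the accumulator maximum.
import numpy as np

def hough_circle(edge_points, shape, r):
    acc = np.zeros(shape, dtype=int)                 # accumulator over centers
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for (x, y) in edge_points:
        for t in thetas:
            a = int(round(x - r * np.cos(t)))
            b = int(round(y - r * np.sin(t)))
            if 0 <= a < shape[0] and 0 <= b < shape[1]:
                acc[a, b] += 1
    return np.unravel_index(np.argmax(acc), shape)

# synthetic edge map: points on a circle of radius 10 centered at (20, 20)
ts = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = [(20 + 10 * np.cos(t), 20 + 10 * np.sin(t)) for t in ts]
print(hough_circle(pts, (40, 40), r=10))   # center recovered near (20, 20)
```

Note the cost: every edge point generates a full circle of votes, which is exactly the computational exhaustiveness the text complains about (and it grows further when the radius is also unknown).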
The integro-differential operator serves to find both the pupillary boundary
and the limbus boundary of the iris. After finding these boundaries, Daugman
localizes the eyelid boundaries, using the same integro-differential approach
with arcuate contours (arcs), and with optimally-fitted spline parameters.
These localizations result in isolating the iris. However, in cases where
there is noise in the eye image, such as from reflections, this
integro-differential operator fails; this is due to the fact that it works
only on a local scale. Another deficiency of this operator is that it is too
computationally exhaustive.
In performing the edge detection, the gradient can be weighted in the
vertical direction for detecting the circular boundaries, and in the
horizontal direction for detecting the eyelids. This makes circle
localization more accurate; it also makes it more efficient.
Using the Hough transform, search for the two curves satisfying
x(t) = a·t^2 + b·t + c, where t ∈ [0, 1]. These will determine the edges of
the upper and lower eyelids.
Determine the eyelashes according to one of the following two
situations:
o If one line exists in the area below the upper eyelash, it is
considered a separate eyelash.
o If the variance of some given small window in the iris image
is less than a threshold, it is regarded as multiple eyelashes.
Now the eyelashes and eyelids are marked, and thus can be
excluded when the iris is encoded.
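The second eyelash rule above can be sketched directly: compute the variance of a small window and compare it to a threshold. A toy Python illustration (the threshold value here is an assumption chosen for the example):

```python
# Flag a window as "multiple eyelashes" when its intensity variance is low.
import numpy as np

def is_multiple_eyelashes(window, threshold=5.0):   # threshold: assumption
    return float(np.var(window)) < threshold

uniform_dark = np.full((5, 5), 12.0)            # clumped eyelashes: low variance
textured_iris = np.arange(25, dtype=float).reshape(5, 5)   # varied iris texture
print(is_multiple_eyelashes(uniform_dark))      # True
print(is_multiple_eyelashes(textured_iris))     # False
```

The intuition is that a clump of eyelashes is nearly uniform in intensity, whereas genuine iris texture varies strongly within even a small window.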
However, an important note I must take care of when normalizing the
doughnut-shaped iris region to have a constant radius is that, as
clearly shown in Figure 3.5, the centers of the iris and the pupil are not
concentric [3].
Figure 3.5 The Centers of the Pupil and the Iris are not Concentric
Figure 3.6 Daugman's Rubber Sheet Model
The remapping (normalization) of the iris region from Cartesian
coordinates (x,y) to the normalized non-concentric polar representation is
modeled as:

I(x(r,θ), y(r,θ)) → I(r,θ)

Where

x(r,θ) = (1−r)·xp(θ) + r·xI(θ)
y(r,θ) = (1−r)·yp(θ) + r·yI(θ)

I(x,y) is the iris region image, (x,y) are the original Cartesian coordinates,
(r,θ) are the corresponding normalized polar coordinates, and (xp,yp) and
(xI,yI) are the coordinates of the pupil and iris boundaries along the θ
direction [3].
One important issue to note is that although this model takes into account
pupil dilation, imaging distance, and non-concentric pupil
displacement, it does not compensate for rotational inconsistencies. This
problem should be taken care of in the matching process, where I have to
keep shifting the iris templates in the θ direction till they become aligned.
Using Daugman's rubber sheet model for normalizing the iris region, the
pupil center is considered as the reference point, and radial vectors pass
through the iris region. The radial resolution is represented by the number of
data points selected along each radial line. The angular resolution
is defined as the number of radial lines going around the iris region.
Thus, a normalization resolution needs to be chosen, and the same number
of data points must be extracted from each iris and stored along virtual
concentric circles, with origin at the center of the pupil [33].
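Daugman's rubber sheet model described above can be sketched by sampling along radial lines between the non-concentric pupil and iris boundaries, interpolating linearly between the boundary points in each direction θ, i.e. x(r,θ) = (1−r)·xp(θ) + r·xI(θ) and similarly for y. A hypothetical Python illustration using nearest-pixel sampling:

```python
# Rubber-sheet normalization: unwrap the iris annulus (with a pupil
# center offset from the iris center) into a fixed-size rectangle.
import numpy as np

def rubber_sheet(img, pupil, iris, n_radial=8, n_angular=16):
    (pcx, pcy, pr), (icx, icy, ir) = pupil, iris
    out = np.zeros((n_radial, n_angular))
    for j, th in enumerate(np.linspace(0, 2*np.pi, n_angular, endpoint=False)):
        # boundary points along direction theta
        xp, yp = pcx + pr*np.cos(th), pcy + pr*np.sin(th)
        xi, yi = icx + ir*np.cos(th), icy + ir*np.sin(th)
        for i, r in enumerate(np.linspace(0, 1, n_radial)):
            x = (1 - r)*xp + r*xi          # linear interpolation pupil -> iris
            y = (1 - r)*yp + r*yi
            out[i, j] = img[int(round(y)), int(round(x))]
    return out

img = np.random.default_rng(1).integers(0, 256, (100, 100))
# pupil center slightly offset from the iris center, as in the text
strip = rubber_sheet(img, pupil=(52, 50, 10), iris=(50, 50, 35))
print(strip.shape)   # (8, 16)
```

Whatever the pupil dilation or the offset between the two centers, every eye is mapped to the same (n_radial, n_angular) grid, which is exactly the dimensional consistency the normalization step is meant to provide.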
parameters s and φ, a scaling and a rotation, are recovered via an iterative
minimization procedure [11].
x and y incorporate dilations of the wavelet in size by 2^m, translations in
position (p,q), and rotations through angle θ [35], [39].
An interesting and useful choice for Ψ(x, y) is the complex-valued Gabor
wavelet, which is defined as follows:

Ψ(x,y) = e^(−π[(x−x0)^2/α^2 + (y−y0)^2/β^2]) · e^(−2πi[u0(x−x0) + v0(y−y0)])
As I said before, the phase angle can be quantized to construct the iris code.
This quantization is illustrated in Figure 3.8.
Figure 3.8 Pattern Encoding by Phase Modulation
where the envelope parameters are the same as before, and the remaining
parameters specify the center frequency of the filter [36].
As a result, the iris code can be constructed by demodulating the iris
pattern using complex-valued 2-D Gabor wavelets, extracting the structure
of the iris as a sequence of phasors whose phase angles are mapped, or
quantized, into the bits that construct the iris code.
The angle quantization is furthermore described by conditional integrals
over the iris image pixel data, given in a dimensionless polar coordinate
system: each bit of the code is set according to the sign of the real or
imaginary part of the filter response.
where the first parameter represents the center frequency of the filter, and
the second gives its bandwidth [37].
The zero crossings of dyadic scales of these filters are then used to encode
features [33].
Lim et al. show that, when comparing the Gabor wavelet and the Haar wavelet,
the recognition rate of the Haar wavelet transform is slightly better than
that of the Gabor transform, by 0.9%.
HD = (1/N) · Σj (Xj ⊕ Yj)

Where X and Y are the two bit patterns being compared and N = 2048 [35]. So,
basically, when a bit in pattern X differs from the corresponding bit in
pattern Y, the exclusive-OR gives a result of 1, which is accumulated over
all the bits in the two patterns. Finally, the result is divided by N, the
total number of bits constituting the iris code.
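The Hamming distance computation described above can be sketched on short toy codes (N = 16 here instead of 2048, purely for illustration):

```python
# Fractional Hamming distance: XOR the codes, count set bits, divide by N.
import numpy as np

def hamming_distance(X, Y):
    N = len(X)
    return np.sum(np.bitwise_xor(X, Y)) / N

rng = np.random.default_rng(0)
code = rng.integers(0, 2, 16)         # a toy 16-bit iris code
same = code.copy()
flipped = code.copy()
flipped[:4] ^= 1                      # corrupt 4 of the 16 bits
print(hamming_distance(code, same))      # 0.0
print(hamming_distance(code, flipped))   # 0.25
```

A distance of 0 means identical codes; here flipping 4 of 16 bits gives exactly 4/16 = 0.25, the fraction of disagreeing bits.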
Ideally, the Hamming distance between two iris codes generated from the
same iris pattern should be zero; however, this will not happen in practice,
due to the fact that normalization is not perfect. Besides, some noise will
always remain undetected.
In conclusion, the larger the Hamming distance (closer to 1), the more
different the two patterns are; the closer this distance is to 0, the more
probable it is that the two patterns are identical. Note that the bit
patterns produced by different people are independent, due to the fact that a
person's iris region contains features with a high degree of freedom. On the
other hand, two iris codes produced by the same iris will be highly
correlated. So, by properly choosing the threshold upon which I make the
matching decision, one can get good iris recognition results with very low
error probability.
So my main interest here reduces to properly choosing the threshold to use
with the Hamming distance matching metric. The Hamming distance follows a
binomial distribution. After 2.3 million comparisons, it has been shown that
the Hamming distance for iris codes corresponding to different iris patterns
has a mean value of 0.459 and a standard deviation of 0.0197. On the other
hand, the mean Hamming distance for two iris codes corresponding to the same
iris pattern, and therefore the same person's eye, is 0.11, with a standard
deviation of 0.065 (refer to Figure 3.9). Note that the above results are
worst-case results, since they were taken for a decision environment under
unfavorable conditions, using images acquired at different distances and by
different optical platforms [3].
Figure 3.9 The Decision Environment for Iris Recognition Under Relatively
Unfavorable conditions
Figure 3.10 The Decision Environment for Iris Recognition Under Very
Favorable Conditions
In 1998, Boles proposed an algorithm for iris feature extraction using zero-
crossing representation of 1-D wavelet transform [33].
All these algorithms are based on gray images; color information is not used
in them. The main reason is that a gray iris image can provide enough
information to identify different individuals.
It seems that the Frenchman Alphonse Bertillon was the first to propose the
use of the iris pattern (its color) as a basis for personal identification.
Leonard Flom and Aran Safir also suggested using the iris as the basis for a
biometric in 1981.
John Daugman [3], in 1991, after collaborating with Flom and Safir for 4
years, developed and introduced the application of the iris as a biometric
characteristic for individual identification. He used 2D Gabor filters and
phase coding to obtain a 2048-bit binary feature code and tested his
algorithm on many images successfully. After his work, various structures
for iris recognition were suggested by other people.
Wildes used Laplacian pyramids and 4-level resolutions. His algorithm relies
on image registration and matching, which requires many computations.
The use of a family of Gabor filters was studied by Ma, Wang and Tan [43] in
several papers.
Woo Nam et al. [14] exploited scale-space filtering to extract unique
features from an iris image, using the direction of concavity of the image.
Lim et al. [38] used a 2D Haar wavelet and quantized the 4th-level high-
frequency information to form an 87-bit code as the feature vector,
and applied an LVQ neural network for classification. A modified Haralick
co-occurrence method with a multilayer perceptron has also been introduced
for extraction and classification of irises.
%PUPILFINDER.M
function [cx,cy,rx,ry]=pupilfinder(F)
% USE: [cx,cy,rx,ry]=pupilfinder(imagename)
% Arguments: imagename: the input image of a human iris
% Purpose:
% perform image segmentation and find the center and the two
% (vertical and horizontal) radii of the iris pupil
% Example: [cx,cy,rx,ry]=pupilfinder('image.bmp')
% cx and cy give the position of the center of the pupil
% rx and ry give the horizontal and vertical radii of the pupil
%accept either a filename or an already-loaded image matrix
if ischar(F)
    G=imread(F);
else
    G=F;
end
bw_70=(G>70);
bw_labeled=bwlabel(~bw_70,8);
mr=max(bw_labeled);
regions=max(mr);
for i=1:regions
[r,c]=find(bw_labeled==i);
if size(r,1) < 2500
region_size=size(r,1);
for j=1:size(c,1)
bw_labeled(r(j),c(j))=0;
end;
end;
end;
bw_pupil=bwlabel(bw_labeled,8);
%get centroid of the pupil
stats=regionprops(bw_pupil,'centroid');
ctx=stats.Centroid(1);
cty=stats.Centroid(2);
hor_center = bw_pupil(round(cty),:);
ver_center = bw_pupil(:,round(ctx));
%from the horizontal center line, get only the left half
left=hor_center(1:round(ctx));
%then flip horizontally
left=fliplr(left);
%get the position of the first pixel with value 0 (out of pupil bounds)
left_out=min(find(left==0));
%finally calculate the left pupil edge position
left_x = round(ctx-left_out);
%from the horizontal center line, get only the right half
right=hor_center(round(ctx):size(G,2));
%get the position of the first pixel with value 0 (out of pupil bounds)
right_out=min(find(right==0));
%finally calculate the right pupil edge position
right_x = round(ctx+right_out);
%adjust horizontal center and radius
rx = round((right_x-left_x)/2);
cx = left_x+rx;
%from the vertical center line, get only the upper half
top=ver_center(1:round(cty));
%then flip vertically
top=flipud(top);
%get the position of the first pixel with value 0 (out of pupil bounds)
top_out=min(find(top==0));
%finally calculate the top pupil edge position
top_y = round(cty-top_out);
%from the vertical center line, get only the lower half
bot=ver_center(round(cty):size(G,1));
%get the position of the first pixel with value 0 (out of pupil bounds)
bot_out=min(find(bot==0));
%finally calculate the bottom pupil edge position
bot_y = round(cty+bot_out);
%adjust vertical center and radius
ry = round((bot_y-top_y)/2);
cy = top_y+ry;
%IRISFINDER.M
function [right_x,right_y,left_x,left_y]=irisfinder(imagename)
% USE: [rx,ry,lx,ly]=irisfinder(imagename)
% Arguments: imagename: the input image of a human iris
% Purpose:
% perform image segmentation and finds the edgepoints of
% the iris at the horizontal line that crosses the center
% of the pupil
% Example: [rx,ry,lx,ly]=irisfinder('image.bmp')
% rx and ry is the edge point of the iris on the right side
% lx and ly is the edge point of the iris on the left side
%read bitmap
F=imread(imagename);
%find pupil center and radius
[cx,cy,rx,ry]=pupilfinder(F);
% Apply linear contrast filter
D=double(F);
G=uint8(D*1.4-20);
%obtain the horizontal line that passes through the iris center
l=G(cy,:);
margin = 10;
% Right side of the pupil
R=l(cx+rx+margin:size(l,2));
[right_x,avgs]=findirisedge(R);
right_x=cx+rx+margin+right_x;
right_y=cy;
% Left side of the pupil
L=l(1:cx-rx-margin);
L=fliplr(L);
[left_x,avgs]=findirisedge(L);
left_x=cx-rx-margin-left_x; left_y=cy;
%CIRCLE
function H=circle(center,radius,NOP,style)
%---------------------------------------------------------------------------------------------
% H=CIRCLE(CENTER,RADIUS,NOP,STYLE)
% This routine draws a circle with center defined as
% a vector CENTER, radius as a scalar RADIUS. NOP is
% the number of points on the circle. As to STYLE,
% use it the same way as you use the routine PLOT.
% Since the handle of the object is returned, you
% can use the routine SET to get the best result.
%
% Usage Examples,
%
% circle([1,3],3,1000,':');
% circle([2,4],2,1000,'--');
%---------------------------------------------------------------------------------------------
if (nargin <3),
error('Please see help for INPUT DATA.');
elseif (nargin==3)
style='b-';
end;
THETA=linspace(0,2*pi,NOP);
RHO=ones(1,NOP)*radius;
[X,Y] = pol2cart(THETA,RHO);
X=X+center(1);
Y=Y+center(2);
H=plot(X,Y,style);
axis square;
%PATTERN
clc;
clear;
%set base directory of irisBasis directory
irisDir = 'E:\Project-M.Tech\Final Code\IrisBasisAll';
destDir = 'E:\Project-M.Tech\Final Code\IrisBasisPattern';
clc;
T=[];
irisFiles = dir(irisDir);
for i=1:size(irisFiles,1)
if not(strcmp(irisFiles(i).name,'.')|strcmp(irisFiles(i).name,'..'))
irisFileName = [irisDir, '\', irisFiles(i).name];
F=imread(irisFileName);
G=im2double(F);
P=[];
for j=1:size(G,1)
P = [P G(j,:)];
end
P=[P str2num(irisFiles(i).name(1:3))];
T=[T;P];
%display progress
irisFileName
[size(P) size(T)]
end
end
%save the pattern matrix in the destination directory
save([destDir,'\abc.mat'],'T');
%BOTHCIRCLE.M
fname='cc.bmp';
F=imread(fname);
imshow(F);
colormap('gray');
imagesc(F);
hold;
[cx,cy,rx,ry]=pupilfinder(fname);
%plot horizontal line
x=[cx-rx*2 cx+rx*2];
a=cx;
b=cy;
y=[cy cy];
plot(x,y,'y');
%plot vertical line
x=[cx cx];
y=[cy-ry*2 cy+ry*2];
plot(x,y,'y');
%threshold to black-white: how far each pixel is from white
img = im2double(F);
bw = sum((1-img).^2, 3) > .5;
%get bounding box (first row, first column, width, height)
[row, col] = find(bw);
bounding_box = [min(row), min(col), max(col)-min(col)+1, max(row)-min(row)+1];
%display with rectangle
% rectangle wants x,y,w,h; we have row, col, w, h, so reorder
rect = bounding_box([2,1,3,4]);
figure; imshow(img); hold on;
rectangle('Position', rect);
I2 = imcrop(img,rect);
figure,imshow(I2);
%figure,imshow(rect),title('xyz');
%rect1 = bounding_box([1,1,1,1]);
I3=imcrop(I2,[20,60,60,50]);
figure,imshow(I3);
%PATTERN MATCHING
clc;
clear;
%set base directory of irisBasis directory
irisDir = 'E:\Project-M.Tech\Final Code\IrisBasisAll';
clc;
T=[];
irisFiles = dir(irisDir);
for i=1:size(irisFiles,1)
if not(strcmp(irisFiles(i).name,'.')|strcmp(irisFiles(i).name,'..'))
irisFileName = [irisDir, '\', irisFiles(i).name];
F=imread(irisFileName);
G=im2double(F);
%perform singular value decomposition
xpattern=svd(G);
P=[xpattern' str2num(irisFiles(i).name(1:3))];
T=[T;P];
irisFileName
[size(P) size(T)]
end
end
save('irisBasisSDV','T','-ASCII');
load('irisBasisSDV','-ascii');
irisBasisSDV
%get only the first 3 dimensions
nclasses=1;
size(irisBasisSDV)
TS=[irisBasisSDV(1,1:3) irisBasisSDV(1,41)];
%TS=[T(1:nclasses*7,1:3) T(1:nclasses*7,41)];
%display TS (full dataset)
TS
%display scatter points
scatter3(TS(:,1),TS(:,2),TS(:,3),8,TS(:,4),'filled');
%pause;
%form training set with the first 5 instances of each class
Training=[];
for i=1:nclasses
Training=[Training;TS(1,:)];
end;
scatter3(Training(:,1),Training(:,2),Training(:,3),8,Training(:,4),'filled');
%form the pattern matrix (patterns in columns - no class information)
P=Training';
P=P(1,:);
%get class column
targetTr=Training(:,4);
%convert sequential numbered classes to power of two:
% class 1 = 1
% class 2 = 2
% class 3 = 4
targetDec=2.^(targetTr-1);
%convert decimal to binary
% class 1 = 001
% class 2 = 010
% class 3 = 100
targetBin=dec2bin(targetDec);
%separate in columns
targetClass=[];
for i=1:nclasses
targetClass=[targetClass double(str2num(targetBin(:,i)))];
end;
%transpose
T=targetClass';
%----------------------------------------------------------- training
S1 = 300; % Number of neurons in the first hidden layer - changes according to test

OUTPUT IMAGES
%CANNY.M
f=imread('ccc.bmp');
subplot(1,2,1); imshow(f); title('original image');
BW2=edge(f,'canny');
subplot(1,2,2); imshow(BW2); title('canny edge detector image');
%BOTHCIRCLE.M
fname='cc.bmp';
F=imread(fname);
imshow(F);
colormap('gray');
imagesc(F);
hold;
[cx,cy,rx,ry]=pupilfinder(fname);
%plot horizontal line
x=[cx-rx*2 cx+rx*2];
a=cx;
b=cy;
y=[cy cy];
plot(x,y,'y');
%plot vertical line
x=[cx cx];
y=[cy-ry*2 cy+ry*2];
plot(x,y,'y');
%draw a circle slightly larger than the pupil (radius rx from pupilfinder,
%using the circle.m routine above)
circle([a b], rx+10, 1000, '-');
%[IB]=irisbasis('cc.bmp',100,100,1);
%imshow(uint8(IB));
%p=uint8(IB);
%imshow(F);
%i=imcrop;
%imshow(i);
%t=i;
%imshow(t);
% load
img = im2double(imread('cc.bmp'));
% black-white image: threshold on how far each pixel is from "white"
bw = sum((1-img).^2, 3) > .5;
% show bw image
figure; imshow(bw); title('bw image');
%get bounding box (first row, first column, width, height)
[row, col] = find(bw);
bounding_box = [min(row), min(col), max(col)-min(col)+1, max(row)-min(row)+1];
%display with rectangle
% rectangle wants x,y,w,h; we have row, col, w, h, so reorder
rect = bounding_box([2,1,3,4]);
figure; imshow(img); hold on;
rectangle('Position', rect);
I2 = imcrop(img,rect);
figure,imshow(I2);
%figure,imshow(rect),title('xyz');
%rect1 = bounding_box([1,1,1,1]);
I3=imcrop(I2,[20,60,60,50]);
figure,imshow(I3);
FUTURE SCOPE
Iris detection is the initial step in the analysis of facial images in an
image processing environment. It can further be used for face detection and
for eye tracking in applications such as gaze estimation, driver alertness
systems, face recognition, and facial expression analysis. Face recognition is
a field of biometrics, together with fingerprint recognition, iris
recognition, speech recognition, and so on.
For example, iris recognition systems are being tested and installed at
airports to provide a new level of security, and in human-computer interfaces.
However, there are still two main restrictions on using the proposed
algorithm:
REFERENCES
1. www.google.com
2. www.wikipedia.com
3. Libor Masek, Peter Kovesi, Iris Recognition, http://www.csse.uwa.edu.au/~pk/studentprojects/libor/index.html
5. John Daugman, http://www.cl.cam.ac.uk/users/jgd1000/
International Biometric Group, Independent Testing of Iris Recognition Technology
6. http://faculty.qu.edu.qa/qidwai/DIP/downloads.html
7. findbiometrics.com
8. eye-controls.com
9. Daugman, J., "Complete Discrete 2-D Gabor Transforms by Neural Networks for Image Analysis and Compression", IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 36, No. 7, July 1988, pp. 1169-1179.
12. Gonzalez, R.C., Woods, R.E., Digital Image Processing, 2nd ed., Prentice Hall (2002).
13. Lim, S., Lee, K., Byeon, O., Kim, T., "Efficient Iris Recognition through Improvement of Feature Vector and Classifier", ETRI Journal, Vol. 23, No. 2, June 2001, pp. 61-70.