
International Journal of Neural Systems, Vol. 13, No. 4 (2003) 263–271
© World Scientific Publishing Company

FINGERPRINT MATCHING USING RECURRENT AUTOASSOCIATIVE MEMORY


B. POORNA
Department of Computer Applications, Dr. M.G.R. Engineering College,
Maduravoyal, Chennai 602 102, India
poornasundar@yahoo.com

K. S. EASWARAKUMAR
School of Computer Science and Engineering, Anna University,
Chennai 600 025, India
easwarakumar@yahoo.co.in

Received 14 November 2002
Revised 28 May 2003
Accepted 28 May 2003
An efficient method for fingerprint searching using recurrent autoassociative memory is proposed. The algorithm uses a recurrent autoassociative memory, whose connectivity matrix is used to determine whether the pattern being searched is already stored in the database. The advantage of this memory is that the full database needs to be searched only if a matching pattern exists. Fingerprint comparison is usually based on minutiae matching, and its efficiency depends on the extraction of minutiae; this step can reduce speed when a large amount of data is involved. The proposed method therefore adopts a simple approach: it first determines the closely matched fingerprint images, and then extracts the minutiae of only those images in order to find the most appropriate one. The gray-level value of each pixel, together with those of its neighbours, is used for the extraction of minutiae, which is easier than using ridge information. This approach is best suited to large databases.

Keywords: Minutiae extraction; pattern matching; recurrent autoassociative memory.

1. Introduction

Fingerprint classification and identification have been addressed by many researchers in the past. Fingerprints are the most widely used biometric feature for automatic personal identification. Law enforcement agencies use them routinely for criminal identification, and they are also used in several other applications such as access control for high-security installations, credit card usage verification and employee identification. The main reason for using fingerprints as a form of identification is that the fingerprint of a person is unique and remains invariant with age. Several pattern recognition algorithms now exist for fingerprint classification, including the early syntactic

approach, methods based on the detection of singular points, and connectionist algorithms such as self-organizing feature maps and neural networks. The singular points, namely the core and delta points, act as registration points for comparing fingerprints. A structure-based approach using the estimated orientation field in a fingerprint image to classify the fingerprint was given in Ref. 6. A topological approach to detect the core point was proposed in Ref. 18. A Fourier transform method to reach the core point was given in Ref. 2. A method for the automatic detection of these points using a syntactic tree grammar has been reported in Ref. 16. These methods rely on the accuracy of the immediate




neighborhood information. In the syntactic approach using stochastic grammars, probabilities associated with the production rules have been considered for the fingerprint classification problem.13 The main stumbling block of this approach is that mechanisms for inferring grammars from training samples are not well understood; a further drawback is noise and the requirement of numerous matching filters. Optical techniques for fingerprint classification using stochastic grammars have been attempted in Ref. 11. The method proposed in Ref. 10 primarily deals with the description of fingerprint impressions by determining the locations of ridge endings, bifurcations and enclosures. The matching techniques of most fingerprint identification systems presume a high level of accuracy of the singular points. In Ref. 5, fingerprint features are combined using two types of neural networks, one to classify the fingerprints and the other to train the matching networks; the network training is strongly dependent on regularization and pruning for accurate generalization. Some methods require ridge width, ridge length, ridge direction and minutiae direction to decide whether minutiae are spurious. Most approaches use local ridge directions and a locally adaptive threshold method, and directional ridge enhancement is used to improve fingerprint image quality. The adaptive flow orientation based feature extraction method proposed in Ref. 17 involves considerable execution time. Direct optical correlations and hybrid optical neural network correlations are used in the matching system for inked fingerprints.5 Images in both binary and gray-level form have been tested for cross-correlation and autocorrelation sensitivity, and the results are found to be strongly influenced by plastic distortion of the finger. One problem encountered by many existing fingerprint matching systems is that they are highly sensitive to imperfections introduced during fingerprinting. The recurrent autoassociative memory is useful for applications dealing with large databases. As a large amount of data must be stored, a compact representation and an efficient access mechanism are essential. A lot of work on neural networks and other learning machines was held back by the need for adequate representations.12 The sequential and non-sequential data structures normally used

for data representation are simple. A number of connectionist models capable of representing data with compositional structure have already appeared, and distributed representations have been the focus of much research, especially for connectionist networks. The systematic patterns developed by a recurrent autoassociative memory are a different kind of representation, called a recursive distributed representation.15 This was used for performing holistic structure-sensitive computations with distributed representations in Ref. 4. Non-monotonic reasoning is also a core problem in artificial intelligence, and a connectionist structure with exceptions represented using recurrent autoassociative memory is given in Ref. 3. In general, variable-sized recursive data structures and compositional structures use recursive distributed representations. The method proposed here is focused particularly on handling a huge amount of data each time the recognition process takes place, for which the recurrent autoassociative memory is used. The most important requirements for fingerprint matching are accuracy and speed of retrieval. Fingerprint identification is crucial, since it is used for criminal identification and high-security installations, so accuracy in the matching process is essential. The applications that use this biometric feature for identification also demand an immediate response, so speed of retrieval is a must. Since the size of a fingerprint database is usually huge, an efficient method for storage and retrieval is mandatory, which is attempted in this paper. The accuracy of matching is further improved by comparing the number and type of minutiae of the search pattern and the retrieved pattern. Some existing methods try to reduce the size of the database by classifying the data; however, this alone is not sufficient for improving the speed of recognition. The proposed method determines the core points using the algorithm given in Ref. 9. That algorithm does not depend on a particular data set and can be tested on the entire database. Smoothing for finding the singularities and classifying them is also substantially faster, as the given image is reduced to a size of 64 × 64, which is a relatively small image. After determining the core points, the intensities of the pixels around the core points are chosen for further processing. Generally, the intensity changes relatively. Due to


this fact, our method determines the matching even if the image under consideration is of low quality.

2. Minutiae Extraction

Minutiae are local discontinuities (ridge anomalies) in the fingerprint pattern. Fingerprint identification is mainly based on the detection of minutiae. There are generally four types of minutiae:7 terminations, bifurcations, crossovers and undetermined. Of these, only the first two types are considered for identification. The efficiency of a fingerprint identification system depends on the method used for the extraction of minutiae. Most minutiae extraction methods transform fingerprint images into binary images using ad hoc algorithms, and the resulting images are subjected to a thinning algorithm. The method proposed in this paper for fingerprint matching uses a simple process to detect the minutiae. The core point is the topmost point of the innermost ridge, and a delta point is the triradial point with three ridges radiating from it; together they are known as singular points. The two singular points of interest are identified using existing methods.9 In order to do the fingerprint matching accurately, the images are normalized. The normalization must account for translation, rotation and scaling. The singular points are good candidates for registration points, and for core classification of the fingerprint pattern. An arch fingerprint does not contain any singular point. Tented arch, right loop and left loop contain one core and one delta point, whereas whorls and twin loops contain two core and two delta points. By connecting the core and delta points, it is possible to decide on the type of the fingerprint. This classification reduces the search space when matching is done. The area of the fingerprint containing the singular points and the ridges is called the pattern area. The method proposed here first reduces the ridge thickness by applying a thinning algorithm. Then, a circular pattern area, with the registration point as the centre of the circle and a particular radius r, is formed. Whenever the fingerprint has two core points, as in the case of whorl and twin loop, the mid-point of those two core points is taken as the point of registration. In the case of an arch type fingerprint, as there are

no singular points, the following step is performed to determine the centre of the pattern area. An arch type fingerprint has some ridges that are nearly horizontal at the bottom, above which arches are formed. The local maxima of each of the arcs are considered; if more than one local maximum exists for an arc, then the centroid of those local maxima is taken as the local maximum of that arc for further processing. The registration point is then determined as the centroid of the local maxima of all the arcs in the image. Now, a circular pattern area is determined with radius r and the registration point as the centre, as shown in Fig. 1. The circular pattern area is divided into unit rectangular cells, each of one pixel size, and the value of r is chosen so that the circular pattern area has at least 512 pixels. The average gray value g of all the pixels in the pattern area is determined. A cell is assigned the value zero if its intensity is below g, and 1 otherwise. The minutiae points are then determined as follows. Traverse the cells of the circular area left to right, top to bottom. For each cell that has a value of 1, the eight neighbouring cells that surround it are considered, forming a square area. Based on the gray values of these cells, the type of the minutia is decided. If only one neighbour of the centre cell has gray value 1, then that cell is a termination minutia. Similarly, if the centre cell under consideration has exactly three neighbouring cells with gray value 1, then that point is

Fig. 1. Circular pattern area (showing the axes, the registration point and the quadrants).



considered to be a bifurcation minutia.8 Some example patterns are shown in Fig. 2.

Fig. 2. Some bifurcation minutiae.

Now, by traversing the given circular area, it is possible to determine the number and the type of minutiae. The circular area is chosen so that fingerprints appearing in different orientations can be recognized; moreover, the images to be matched are at the same scale.
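As a minimal sketch of the neighbourhood test described above (assuming the binarized circular pattern area is held in a NumPy array of 0/1 values, with cells outside the circle set to 0), the classification could look like this:

```python
import numpy as np

def classify_minutiae(cells):
    """Classify minutiae in a binarized pattern area.

    A 1-valued cell with exactly one 1-valued neighbour among its
    eight surrounding cells is a termination; one with exactly three
    1-valued neighbours is a bifurcation, as described in the text.
    """
    terminations, bifurcations = [], []
    rows, cols = cells.shape
    for r in range(1, rows - 1):          # traverse top to bottom
        for c in range(1, cols - 1):      # and left to right
            if cells[r, c] != 1:
                continue
            # sum of the eight neighbours surrounding the centre cell
            neighbours = cells[r-1:r+2, c-1:c+2].sum() - cells[r, c]
            if neighbours == 1:
                terminations.append((r, c))
            elif neighbours == 3:
                bifurcations.append((r, c))
    return terminations, bifurcations
```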

3. Recurrent Autoassociative Memory

Neural networks can recognize patterns that are not even explicitly defined, as a neural network exhibits much intelligent behaviour. An associative memory1 belongs to a class of neural networks that learn according to a certain recording algorithm. The memory is able to store data in a robust manner, so that local damage to its structure does not cause total breakdown and an inability to recall. It associates or regenerates stored pattern vectors, and does so by means of specific similarity criteria. The memory locations have no addresses, and storage is distributed over a large set of interconnected neurons. An efficient associative memory can store a large set of patterns as memories.

An associative memory performs an associative mapping of an input vector x into an output vector v by performing the transformation

v = M[x]

where the operator M denotes a general nonlinear matrix-type operator. For the linear associative memory, an input pattern x is presented and mapped to the output by simply performing the matrix multiplication

v = Wx    (1)

where the matrix W is called the connectivity matrix, an n × n matrix containing the network weights. The algorithm allowing the computation of W is called the recording or storage algorithm.14 The mapping in Eq. (1) performed on a key vector is called a retrieval algorithm. The output of the ith neuron is updated in an asynchronous fashion: under asynchronous operation of the network, each element of the output vector is updated separately, taking into account the most recent values of the elements that have already been updated and remain stable.

The autoassociative recall of images uses the Hopfield model to store and recall a set of bitmap images. Images are stored by calculating a corresponding weight matrix. Thereafter, starting from an arbitrary configuration, the memory will settle on exactly that stored image which is nearest to the starting configuration. Thus, given an incomplete or corrupted version of a stored image, the network is able to recall the corresponding original image. The memory even shows a limited degree of fault tolerance in the case of corrupted input patterns.

Dynamic memory networks exhibit dynamic evolution, in the sense that they converge to an equilibrium state according to the recursive formula

v^(k+1) = M[x^k, v^k] .

The operator M operates at the present instant k on the present input x^k and output v^k to produce the output for the next instant k + 1. The memory is essentially a single-layer feedback network with n neurons and is a discrete-time network. Under the asynchronous update mode, only one neuron is allowed to compute or change state at a time, and all outputs are then delayed by the time produced by the unity delay element in the feedback loop. This symbolic delay allows for the time-stepping of the retrieval algorithm.
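To make the contrast between the one-shot linear mapping of Eq. (1) and the dynamic, asynchronous recall concrete, a minimal sketch (assuming bipolar ±1 NumPy vectors and a precomputed W; the asynchronous loop below is the same update that the retrieval algorithm of Sec. 3.3 formalizes) might look like:

```python
import numpy as np

def linear_recall(W, x):
    # One-shot recall of the linear associative memory, v = Wx,
    # thresholded back to a bipolar (+1/-1) vector.
    return np.where(W @ x > 0, 1, -1)

def recurrent_recall(W, x, max_cycles=100):
    # Dynamic recall: neurons are visited one at a time, each update
    # immediately visible to the later ones (asynchronous mode), and
    # the cycles repeat until an equilibrium state is reached.
    v = x.copy()
    for _ in range(max_cycles):
        previous = v.copy()
        for i in range(len(v)):
            v[i] = 1 if W[i] @ v > 0 else -1
        if np.array_equal(v, previous):   # equilibrium state reached
            break
    return v
```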

3.1. Encoding

Encoding of the information is a vital factor in solving a problem using a neural network; the type of encoding depends purely on the problem to be solved. Binary encoding is used in this problem. The binary string is made up of six parts, and the length of each string is n, where n is the number of pixels that make up the circular area. The first part is meant


for the registration point, which consists of only one cell; therefore, the binary value of the registration point is taken as the first part. Basically, for left loop, right loop and whorl this bit value will be zero, and for tented arch it will be one; however, for arch and twin loop it may be 0 or 1. The given circular area is divided into four quadrants. In the case of an arch type fingerprint, a perpendicular line drawn from the centroid to the base line is considered as one of the axes. For tented arch, left loop or right loop, the line joining the core and delta points is considered as one of the axes, whereas for whorl and twin loop the line joining the middle of the two core points and the middle of the two delta points is considered as one of the axes. The other axis is, in all cases, the one perpendicular to this axis. The bit information for the next four parts is determined from the binary values taken in clockwise order, starting from the registration point towards the boundary of the circle, for the four quadrants. The sixth part consists of the binary values of the pixels on the axes, starting from the positive y-axis and then proceeding in clockwise order; for each axis, cells are considered from the registration point towards the boundary of the circle. Figure 3 illustrates the order of choosing cells for


Fig. 4. Encoding.


assigning bit values for each quadrant; the order is 1, 2₁, 2₂, 3₁, 3₂, . . . . In our approach, the ith part, 2 ≤ i ≤ 5, corresponds to the (i − 1)th quadrant after normalization, which is explained in Sec. 3.2. Thus, the entire string is made up of a sequence of zeros and ones. For example, consider the pixel ordering shown in Fig. 4. The circular pattern area consists of 45 pixels, and the registration point is shaded black in the figure. Let the value of the registration point be 0; the first part of the corresponding string is therefore 0. The 2nd part of the binary string corresponds to the first quadrant, and it is 00110001; the pixels are taken in clockwise order. Similarly, the 3rd, 4th and 5th parts of the binary string relate to the second, third and fourth quadrants, respectively, and the respective substrings are 01100111, 00011111 and 00010011. The final part provides the sequence corresponding to the positive Y-axis, positive X-axis, negative Y-axis and negative X-axis, in that order; for each axis (either positive or negative), the sequence is determined from the centre point towards the boundary of the circle. Here, in our example, the 6th part of the sequence is 111101110101. The entire sequence is thus

000110001011001110001111100010011111101110101
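As an illustration of how the six parts compose the final string, the following sketch (the helper name build_encoding is hypothetical; the substrings are those of the worked example above) simply concatenates the parts and checks the result against the printed 45-bit sequence:

```python
def build_encoding(reg_bit, quadrants, axes):
    """Concatenate the six parts of the encoding: the registration-point
    bit, the four quadrant substrings (first to fourth quadrant after
    normalization), and the axis substring (positive Y, positive X,
    negative Y, negative X, each read from the centre outward)."""
    return reg_bit + "".join(quadrants) + axes

# Worked example from the text (Fig. 4, a 45-pixel circular area).
encoded = build_encoding(
    "0",
    ["00110001", "01100111", "00011111", "00010011"],
    "111101110101",
)
assert encoded == "000110001011001110001111100010011111101110101"
```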

Fig. 3. (a) Pixel ordering (b) Some chords in the quadrant.

3.2. Normalization

Let us assume that there are n cells in the circular area; the value of n should be at least 512. The circular area is divided into four quadrants (as shown in Fig. 1), each consisting of 16 chords (see Fig. 3): the first is made of one pixel, the second of two pixels, the third of three pixels, and so on. Let wt_i be the weight assigned to a cell in the ith chord. The chord that is nearest to the core point is assigned the weight wt_1 = 0.95, and the successive chords are assigned weights starting


from 0.9 and decreasing by 0.05 for each chord; the value of wt_16 is thus 0.2. The first value may be chosen arbitrarily, anywhere between 0 and 1. The weight of each quadrant is determined as follows. Let Q_i be the weighted sum of the cell values of quadrant i, taken chord by chord starting from the chord nearest to the core point, with each cell of the jth chord contributing its binary value multiplied by wt_j:

Q_i = Σ_{j=1}^{16} Σ_{k=1}^{j} b_{jk} · wt_j ,  for 1 ≤ i ≤ 4,

where b_{jk} denotes the binary value of the kth cell of the jth chord. For example, consider the pixel ordering shown in Fig. 5. The quadrant here consists of five arcs; the pixel values and the weights are shown in Figs. 5(a) and 5(b), respectively. Thus, the sequence corresponding to this quadrant is 101010100001001, and the weight of this quadrant is (1 × 0.95 + (0 + 1) × 0.9 + (0 + 1 + 0) × 0.85 + (1 + 0 + 0 + 0) × 0.8 + (0 + 1 + 0) × 0.75 + (0 + 1) × 0.7), which is equal to 4.95. After finding the values of Q_i, 1 ≤ i ≤ 4, a sequence of Q_i's is formed as (Q_1)(Q_2)(Q_3)(Q_4). For example, if Q_1 = 10, Q_2 = 5, Q_3 = 12 and Q_4 = 9, then the initial sequence would be (10)(5)(12)(9). The weight of the sequence is defined by concatenating the values of the elements of the sequence, in that order, with respect to the base given by the maximum value occurring in the sequence. For instance, if Q_3 is the maximum among {Q_i | 1 ≤ i ≤ 4}, then the weight of the sequence is (Q_1 Q_2 Q_3 Q_4)_{Q_3}. The normalized sequence is the one having minimum weight; this is achieved by performing circular left shifts on the elements of the sequence.

Fig. 5. Weight determination: (a) pixel values, (b) chord weights.

After each circular left shift, the weight of the sequence is determined. This gives four different weights after three shifts. We then select the minimal-weight sequence as the normalized one; when more than one sequence gives the minimal weight, any one may be chosen arbitrarily as the normalized one. Let i be the number of shifts required to obtain the normalized sequence; then the (i + 1)th quadrant becomes the first quadrant for the encoding process. In our example, one circular left shift leads to the sequence (5)(12)(9)(10), and the minimum-weight sequence is indeed this one, obtained after one circular left shift. The quadrant corresponding to Q_2 is therefore considered as the first quadrant, followed by the other three quadrants in clockwise order for the encoding.
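A small sketch of this normalization step (the helper names are illustrative; the quadrant weights are those of the worked example above) could be:

```python
def sequence_weight(q_values):
    # Interpret the sequence (Q1)(Q2)(Q3)(Q4) as digits written with
    # respect to the base given by the maximum value in the sequence.
    base = max(q_values)
    weight = 0
    for q in q_values:
        weight = weight * base + q
    return weight

def normalize(q_values):
    """Return (shifts, sequence): the number of circular left shifts
    giving the minimal-weight sequence, and that sequence itself."""
    candidates = []
    for shift in range(len(q_values)):
        rotated = q_values[shift:] + q_values[:shift]
        candidates.append((sequence_weight(rotated), shift, rotated))
    _, shift, rotated = min(candidates)
    return shift, rotated

# Worked example: Q1 = 10, Q2 = 5, Q3 = 12, Q4 = 9.
shifts, normalized = normalize([10, 5, 12, 9])
assert shifts == 1 and normalized == [5, 12, 9, 10]
# One left shift is needed, so the second quadrant becomes the
# first quadrant for the encoding.
```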

3.3. Algorithm

Recurrent networks are neural networks with one or more feedback loops. There are two functional uses of recurrent networks: one is associative memory and the other is input–output mapping. Biological memory operates according to associative memory principles. Associative memory enables a parallel search within a stored data file, and it has the ability to retrieve a stored pattern given a reasonable subset of the information content of that pattern. The connectivity matrix W is used to perform the associative mapping; the neural system is said to have learned the association when, given the input x, it identifies the output v. Here, W is the outer product matrix, which is a generalization of Hebb's postulate of learning.1 If each pair of associations generates an associative matrix, then the overall connectivity matrix is the sum of the matrices of the individual associations. Thus, the connectivity matrix becomes
W = Σ_{m=1}^{p} x^(m) x^(m)T − pI

where p is the number of bipolar vectors stored and I is the identity matrix. The system does not need to keep the individual vectors, only the weights. As Hebb's learning does not involve negative synaptic weight values, only bipolar vectors are allowed for building the autocorrelation matrix. First, the normalized binary vectors B^(j) are converted into bipolar form using the formula x_i^(j) = 2 b_i^(j) − 1, for 1 ≤ i ≤ n and 1 ≤ j ≤ p, where n is the number


of pixels used for matching, and p is the number of bipolar vectors stored. In an autoassociative memory, each x^(j), 1 ≤ j ≤ p, converts to v^(j) based on the mapping given in Eq. (1). Here, the vectors x^(j) are the stored data, and the bipolar vector to be matched serves as a search argument. The following algorithm is used to store the bipolar vectors x^(j), 1 ≤ j ≤ p, in the memory.

Algorithm 1 [Storage Algorithm]
Input: p bipolar vectors x^(1), x^(2), ..., x^(p), where x^(m) is of size n × 1, for 1 ≤ m ≤ p.
BEGIN
1. Initialize W as the zero matrix and m as 1.
2. Calculate W = W + x^(m) x^(m)T − I, where I is the identity matrix.
3. If (m < p) then m = m + 1 and go to step 2.
4. Store W.
END.

The associative memory used here is content addressable. Initializing v as a (where a is the pattern whose match is searched), the elements v_i, 1 ≤ i ≤ n, of the vector v can be calculated using the discrete-time recurrent network update rule

v_i^(k+1) = sgn( Σ_{j=1}^{n} w_ij v_j^k )

where k denotes the index of the recursive update and i is the neuron currently undergoing the asynchronous update. The function sgn() is applied to each element of the vector, and returns 1 or −1 depending on whether its argument is positive or not. In this way, every neuron gets its value updated. Note that n neurons are required, as the number of pixels used for matching is n. The retrieval process is as follows.

Algorithm 2 [Retrieval Algorithm]
Input: (i) The bipolar vector a of the pattern to be matched; (ii) the connectivity matrix W.
BEGIN
1. Initialize k and i as 1, where k is the cycle counter and i is the update counter.
2. Initialize v as a. (Note that a and v are vectors of size n × 1.)
3. Update neuron i by computing vnew as

   net_i = Σ_{j=1}^{n} w_ij v_j
   vnew = sgn(net_i)

   (Note that vnew is the vector of size n × 1.)
4. If (i < n) then update i = i + 1 and go to step 3.
5. If vnew = a then output vnew and STOP.
6. If v = vnew then display "Match not found" and STOP.
7. Update k = k + 1, v = vnew, and go to step 3.
END.
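The two algorithms above translate almost directly into code. The sketch below is a minimal NumPy rendering under stated assumptions (bipolar ±1 vectors, sgn(0) treated as −1 in line with "positive or not", and a cycle cap added for safety); it is illustrative rather than the authors' implementation:

```python
import numpy as np

def store(patterns):
    """Algorithm 1: build the connectivity matrix W from the p bipolar
    vectors (outer product of each vector with itself, minus the
    identity matrix, accumulated over all stored patterns)."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for x in patterns:
        W += np.outer(x, x) - np.eye(n)
    return W

def retrieve(W, a, max_cycles=100):
    """Algorithm 2: starting from the search argument a, update the
    neurons asynchronously; stop when the state reproduces a (exact
    match) or stops changing, and return the settled state."""
    v = np.array(a, dtype=int)
    for _ in range(max_cycles):
        vnew = v.copy()
        for i in range(len(vnew)):               # steps 3-4: one neuron at a time
            net_i = W[i] @ vnew
            vnew[i] = 1 if net_i > 0 else -1     # sgn(): 1 if positive, else -1
        if np.array_equal(vnew, a):              # step 5: exact match
            return vnew
        if np.array_equal(vnew, v):              # step 6: no further change
            return vnew
        v = vnew                                 # step 7: next cycle
    return v
```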

The search argument a is a bipolar vector of the fingerprint whose match is required. Using this search argument, the closest stored vector is obtained by the retrieval algorithm; the network has the ability to converge to the desired output even when a corrupted pattern is given. The overall process of our fingerprint matching is summarized in the following steps.

1. Calculate the connectivity matrix W, as stated in the storage algorithm.
2. Store the input vectors in a separate database called DB.
3. For the bipolar vector a, determine the vector vnew that matches a (exactly or closely), using the retrieval algorithm.
4. Using the vector vnew, search DB to get all closely matched stored vectors, any of which may match the pattern being searched. The closely matched patterns are determined using the Hamming distance, with a threshold we call the allowable error rate; a vector giving zero Hamming distance is an exact match.
5. Now, compare the number and types of minutiae of the images retrieved in the previous step with the number and types of minutiae of the search pattern, for more accuracy. This can be done only after converting the bipolar vectors back to the respective binary vectors.
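Step 4 above can be sketched as a simple Hamming-distance scan over the stored binary vectors (the function and variable names here are illustrative, not from the paper):

```python
def hamming_distance(x, y):
    # Number of bit positions in which two equal-length binary vectors differ.
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

def search_database(db, vnew, allowable_error):
    """Return the stored vectors whose Hamming distance from vnew is
    within the allowable error rate; distance zero is an exact match."""
    return [stored for stored in db
            if hamming_distance(stored, vnew) <= allowable_error]
```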


The Hamming distance is defined as an integer equal to the number of bit positions differing between two binary vectors of the same length. For two n-tuple binary vectors x and y, the Hamming distance is

HD(x, y) = (1/2) Σ_{i=1}^{n} |x_i − y_i| .

Advantages

Some advantages of the proposed method are the following.

1. The database has to be searched, vector by vector, only if at least one closely matched image exists in it. This is beneficial when a large database is used.
2. Comparing the stored vectors is much easier than extracting the minutiae and then comparing them.
3. Minutiae are determined only for closely matched images, not for the entire database; moreover, this step is required only if an exact match is not found.
4. The number of closely matched patterns is obviously far smaller than the number of patterns in the database, so minutiae need to be determined only for a limited set of fingerprints.

4. Results and Conclusion

The algorithm stated in Sec. 3.3 was implemented using MATLAB, a technical computing language. The image processing toolbox of MATLAB is used for reading, thinning and rotating the fingerprints. A database of 1000 fingerprints is used for testing. The test set is a mix of real and artificial fingerprints; 60% of the fingerprints were created using a synthetic fingerprint generator, and some of the generated fingerprints are of different intensities and are transformations of the original ones. The input gray-scale image is converted to binary using a threshold value, say g, specified within the range [0, 1]. The conversion is carried out as follows: when the luminance of a pixel is less than the threshold value, its value is treated as 0; otherwise, it is 1. It is also observed that, for a normal image with little noise, a threshold value within the range [0.5, 0.7] yields the correct result. Usually, the intensity of an image varies almost uniformly, so an image of lower quality must be used with a smaller threshold value; similarly, a dark picture must be used with a higher threshold value.

Table 1. Observation.

Intensity variation (%)    Threshold range
0–20                       0.8–0.9
20–40                      0.7–0.8
40–60                      0.5–0.7
60–80                      0.3–0.5
80–100                     0.1–0.3
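As a small illustration of the thresholding step together with the ranges in Table 1 (the helper below and its choice of the midpoint of each range are assumptions made for the sketch, not part of the paper):

```python
import numpy as np

# Midpoints of the threshold ranges of Table 1, keyed by the upper end
# of the intensity-variation band (in percent).
THRESHOLD_BY_VARIATION = {20: 0.85, 40: 0.75, 60: 0.6, 80: 0.4, 100: 0.2}

def pick_threshold(intensity_variation_percent):
    for upper, threshold in sorted(THRESHOLD_BY_VARIATION.items()):
        if intensity_variation_percent <= upper:
            return threshold
    return 0.2

def binarize(gray_image, g):
    # Pixels with luminance below the threshold g become 0, the rest 1.
    return (np.asarray(gray_image, dtype=float) >= g).astype(np.uint8)
```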

Therefore, the threshold value can be modified suitably, by either increasing or decreasing it, depending on the quality of the image. The observations made on the threshold value, as the intensity changes, are given in Table 1. It is worth noting that the recurrent autoassociative memory neural network is found suitable for matching even when the intensities vary. Usually, the quality of the image varies uniformly and relatively on the gray-scale values; therefore, our algorithm is well suited to carrying out detection with low-quality images. Moreover, it is known that performance and correctness depend on minutiae detection. As the minutiae extraction depends only on a particular unit cell together with its eight neighbouring cells, and is not based on directional maps, the approach adopted in this paper is more effective. Due to normalization, the algorithm also identifies images even if they are rotated. The practical importance of the recurrent autoassociative memory lies in large networks: the network is able to produce the correct stored states when an incomplete or noisy version of a stored vector is applied as input, and the update rule can reconstruct a noise-corrupted or incomplete pattern. The approach adopted in this paper is also effective because the system need not remember the individual vectors, but only the weights. Hence, the algorithm works with the same accuracy when applied to real-world problems with large data sets, which makes it more effective than other methods. The accuracy of the algorithm is further improved by the minutiae extraction performed on the closely matching patterns retrieved when an exact match is not found. The algorithm works substantially better, irrespective of intensity variations or transformations.

Acknowledgment

We thank the referees for their valuable suggestions for the improvement of the quality of presentation.


References
1. J. A. Anderson 1995, An Introduction to Neural Networks (Eastern Economy edition).
2. D. A. Aushermann et al. 1973, A proposed method for the analysis of dermatoglyphic patterns, Proc. Soc. Photo-opt. Instrum. Engrs. 40.
3. M. Boden and A. Narayanan 1992, Connectionism in a broad perspective, in Swedish Conference on Connectionism.
4. L. Chrisman 1991, Learning recursive distributed representations for holistic computation, Connection Science 3(4), 345–366.
5. C. L. Wilson, C. L. Watson and E. G. Paek 1997, Combined optical and neural network fingerprint matching (Information Technology Laboratory, National Institute of Standards and Technology).
6. C. L. Wilson, G. T. Candela and C. I. Watson 1993, Neural network fingerprint classification, Journal of Artificial Neural Networks 1(2), 1–25.
7. A. Farina, Z. M. Kovacs-Vajna and A. Leone 1999, Fingerprint minutiae extraction from skeletonized binary images, Pattern Recognition 32, 877–889.
8. A. Jain, L. Hong and R. Bolle 1997, On-line fingerprint verification, IEEE Transactions on Pattern Analysis and Machine Intelligence 19(4), 302–314.
9. K. Karu and A. K. Jain 1995, Fingerprint classification, Pattern Recognition 29(3), 389–404.

10. C. R. Kingston 1967, Problems in semi-automated fingerprint classification, Law Enforcement Science and Technology (Academic Press).
11. E. Marom 1967, Fingerprint classification and identification using optical methods, Law Enforcement Science and Technology (Academic Press).
12. M. Minsky and S. Papert 1988, Perceptrons (MIT Press, Cambridge).
13. B. Moayer and K. S. Fu 1975, A syntactic approach to fingerprint pattern recognition, Pattern Recognition 7, 1–23.
14. J. M. Zurada 1997, Introduction to Artificial Neural Systems (Jaico Publishing House).
15. J. B. Pollack 1990, Recursive distributed representations, Artificial Intelligence 46, 77–105.
16. C. V. K. Rao and K. Black 1980, A syntactic approach to classification of fingerprints, IEEE Transactions on Pattern Analysis and Machine Intelligence 2, 223–231.
17. N. K. Ratha, S. Chen and A. K. Jain 1995, Adaptive flow orientation-based feature extraction in fingerprint images, Pattern Recognition 28(11), 1657–1672.
18. J. T. Tou and W. J. Hankley 1968, Automatic fingerprint identification and classification analysis via contextual analysis and topological coding, Pictorial Pattern Recognition, 411–456.
