
2010 IEEE Symposium on Industrial Electronics and Applications (ISIEA 2010), October 3-5, 2010, Penang, Malaysia

FPGA-Based Embedded System Implementation of Finger Vein Biometrics


M. Khalil-Hani, VLSI-eCAD Research Laboratory (VeCAD), Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia, khalil@fke.utm.my

P.C. Eng, VLSI-eCAD Research Laboratory (VeCAD), Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia, peichee84@hotmail.com

Abstract: With the ubiquitous deployment and rapid growth of electronic information systems in today's society, personal identity verification has become a critical problem. Biometric authentication has consequently been gaining popularity, as it provides a highly secure and reliable approach to personal authentication. However, authentication using one's biometric features has rarely been implemented in a real-time embedded system. Thus, in this paper, a novel approach to personal verification using infrared finger vein biometric authentication, implemented on an FPGA-based embedded system, is presented. Creating a biometric authentication system on such a resource-constrained embedded platform for a real-time application, a challenging problem in itself, is a significant contribution of this work. The proposed biometric system consists of four modules, namely image acquisition, image pre-processing, feature extraction, and matching. Feature extraction is based on minutiae extracted from the vein pattern image, while biometric matching utilizes a technique based on the Modified Hausdorff Distance. The system is prototyped on an Altera Stratix II FPGA board with the Nios2-Linux real-time operating system running at a 100 MHz clock rate. Experiments conducted on a database of 100 images from 20 different hands show encouraging results, with an equal error rate of approximately 1.004%. Our first version of the embedded system, implemented wholly in firmware, has an execution time of 1953x10^6 clock cycles, or about 19.5 seconds. The results demonstrate that our approach is valid and effective for vein-pattern biometric authentication.

Keywords: Biometric, Embedded System, Finger Vein, FPGA, RTOS

I. INTRODUCTION

Traditional authentication methods use personal identification numbers (PINs), passwords, keys, smartcards, etc., which are based on something the user has and/or knows (ownership- and knowledge-based factors). Today, these methods have proven to be unreliable and do not provide adequately strong security. For example, smartcards and keys can be stolen, and PINs can be obtained by unauthorized persons. Hence, personal verification methods that utilize a person's biometric traits have been intensively studied and developed to overcome the disadvantages of the traditional methods. Biometric recognition (biometrics) refers to the automatic recognition of

individuals based on their physiological and behavioral characteristics [1]. Since it is based on something the user is (an inherence factor), biometric authentication is more reliable than password-based systems. Furthermore, biometric features are difficult to replicate, and the system requires the person to be present for the authentication process [2]. Many biometric traits, such as face, fingerprint, iris and voice, have been well studied and developed [3-6]. Biometric authentication utilizing vein patterns, however, is in its infancy. Vein patterns are the vast network of blood vessels underneath a person's skin. They are unique to each individual and are stable over a long period of time. As veins are hidden underneath the skin surface and are mostly invisible to the human eye, they are not prone to external distortion, and the vein patterns are much harder for an intruder to replicate compared to other biometric traits. In addition, vein imaging provides liveness detection, as it senses the flow of blood in the vessels. Due to this uniqueness, stability, and high resistance to criminal tampering, the vein pattern offers a more reliable trait for a secure biometric authentication system. Biometric systems are currently often implemented in untrusted environments that use an insecure central server for the storage of the biometric templates, which can be a source of biometric information leakage [7]. Implementing the biometric system on an embedded system can address this critical issue, as the embedded system can provide a medium of secure communication, secure information storage, and tamper resistance, hence providing protection from both physical and software attacks. However, implementing a biometric system on a resource-constrained embedded system with real-time performance is a challenge, and is therefore currently the least developed approach [8], more so with vein biometrics. In this paper, we investigate a method of personal authentication based on infrared finger vein patterns.
Our focus is to obtain an embedded system implementation with high performance and optimum accuracy. The current version of the design is prototyped on an Altera Nios II FPGA Stratix II EP2S180 development board running the Nios2-Linux RTOS at a 100 MHz clock frequency. Fig. 1 shows the top-level system architecture. The rest of the paper is organized as follows. Section II describes the methodology. Experimental results are given in Section III, and conclusions are drawn in Section IV.

978-1-4244-7647-3/10/$26.00 2010 IEEE


Figure 1. Top-Level System Architecture

II. PROPOSED FINGER VEIN BIOMETRIC AUTHENTICATION SYSTEM

As shown in Fig. 2, our proposed finger vein biometric authentication system consists of four modules: image acquisition, image pre-processing, feature extraction, and matching. A biometric image is first acquired from an individual using a modified infrared camera; the image is then pre-processed before feature extraction is performed to obtain the biometric template, which is matched against the template set in the database. The image pre-processing, feature extraction, and matching modules are all performed in the embedded device, ensuring maximum security in the system. In this work, finger vein feature extraction applies the minutiae extraction technique, covering bifurcation and ending points. The Modified Hausdorff Distance (MHD) is used to evaluate the dissimilarity of two images for verification purposes.

A. Image Acquisition Module
As veins are hidden underneath the skin, vein patterns cannot be observed in visible light. However, they can be acquired by exploiting the fact that infrared light with wavelengths of 700nm-1000nm can pass through human tissues, while the hemoglobin in the blood absorbs the infrared light fully [9]. This paper deploys the Near-Infrared (NIR) technique instead of Far-Infrared (FIR) imaging, since NIR is more tolerant to changes in environmental and body conditions, while the FIR technique is more suitable for capturing larger vein patterns [10]. The light transmission method, as illustrated in Fig. 3, is used to capture the finger vein patterns. In this method, the finger is placed between an array of infrared light-emitting diodes (IR LEDs) and a low cost image acquisition module, a modified webcam with an attached IR filter; the transmitted IR light that passes through the finger is captured by the camera. In the resulting image, as the hemoglobin in the blood absorbs the IR light, the vein patterns are captured as shadows and appear darker. The image is captured in colored bitmap (bmp) format, 320x240 pixels (width x height) in size, with 24 bits (3 bytes) per pixel. Fig. 4 shows an example of an infrared finger vein image captured in our system.

B. Image Pre-Processing Module
As illustrated in Fig. 2, the proposed image pre-processing method comprises eight sub-blocks: color to grayscale conversion, grayscale median filter, image segmentation, image alignment and resize, Gaussian low pass filter, local dynamic thresholding, binary median filter and, lastly, thinning. The resultant output image of each sub-block is shown in Fig. 5(a) through 5(h) respectively.

1) Color to Grayscale Conversion: The raw finger vein image in Fig. 4, which is in colored bmp format with 3 bytes per pixel, is first converted to a grayscale image to reduce the size from 3 bytes to 1 byte per pixel.

2) Grayscale Median Filter: A grayscale median filter with window size 7x7 is applied to the grayscale image to smooth the noisy background. Applying the median filter to the noisy edge background reduces the significant effect of the

Figure 3. Finger vein image capture

Figure 4. Captured finger vein image

Figure 2. Proposed finger vein biometric authentication system


noisy background surrounding the finger, enabling better finger region detection in the next process. The median filter considers each pixel in the image and its neighboring pixels within a window, and replaces the center pixel value with the median of all those pixels. The sorting involved in median filtering is compute-intensive and slow. The median filtering algorithm proposed here instead finds the median value using a histogram-based method. Fig. 5(b) illustrates the output image after the grayscale median filter.
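The histogram-based median above can be sketched as follows. This is a simplified Python illustration, not the authors' firmware (which runs in C on the Nios II); the function names are our own, and borders are simply left unfiltered:

```python
def histogram_median(values):
    """Median of 8-bit values via a 256-bin histogram, avoiding a full sort."""
    hist = [0] * 256
    for v in values:
        hist[v] += 1
    target = (len(values) + 1) // 2  # 1-indexed position of the median
    count = 0
    for level in range(256):
        count += hist[level]
        if count >= target:
            return level

def median_filter_7x7(img, h, w):
    """Apply a 7x7 histogram-based median filter; border pixels are unchanged."""
    out = [row[:] for row in img]
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            window = [img[y + dy][x + dx]
                      for dy in range(-3, 4) for dx in range(-3, 4)]
            out[y][x] = histogram_median(window)
    return out
```

A sliding-window implementation would update the histogram incrementally per pixel rather than rebuilding it, which is where the method gains its speed over sorting.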

3) Finger Region Extraction (Image Segmentation): To separate the finger region from the background, an image segmentation process is performed. This step involves three processes: finger edge detection, edge smoothing, and finger region filling. First, the Canny edge detection technique is used to detect the finger edges. The edges are then smoothed using morphological dilation to join broken edges. After smoothing, the area inside the finger region is filled with white pixels (value 255). The resultant image is shown in Fig. 5(c).

4) Finger Alignment and Image Resize: After the finger region is extracted, the finger is aligned to a fixed position to minimize discrepancies in the matching process caused by finger misalignment during image capture. Finger alignment is done by taking a reference point in the filled finger image, (fingerRefX, fingerRefY), and shifting the whole image to a user-defined location (refX, refY). To do so, we first find Δx and Δy, which denote the shifts required in the x and y directions respectively. Δx and Δy are given in Equation (1).

(Δx, Δy) = (refX − fingerRefX, refY − fingerRefY).    (1)

(a) Grayscale output image
(b) Output of grayscale median filter
(c) Output of finger region extraction

Fig. 6 depicts how the reference point (fingerRefX, fingerRefY) is determined. The grayscale finger image is then placed back in the image and aligned to the user-defined location using the Δx and Δy determined earlier. Lastly, the image is cropped from its original 320x240 pixels to 320x160 pixels to remove unnecessary pixels and allow for faster processing. The aligned and resized image is illustrated in Fig. 5(d).
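The alignment of Equation (1) amounts to a translation followed by a crop. A minimal Python sketch (the helper name and the zero fill for vacated pixels are our own assumptions, not from the paper):

```python
def align_and_crop(img, finger_ref, user_ref, out_h=160):
    """Shift img so finger_ref lands on user_ref per Eq. (1), then crop height.

    img: list of rows (grayscale values); reference points are (x, y) tuples.
    Pixels vacated by the shift are filled with 0 (background).
    """
    h, w = len(img), len(img[0])
    dx = user_ref[0] - finger_ref[0]  # shift in x, Eq. (1)
    dy = user_ref[1] - finger_ref[1]  # shift in y, Eq. (1)
    shifted = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                shifted[ny][nx] = img[y][x]
    return shifted[:out_h]  # e.g. crop 320x240 down to 320x160
```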

(d) Resized and aligned output image
(e) Output of Gaussian low pass filter
(f) Local dynamic thresholding output image
(g) Output of binary median filter

Figure 6. Determination of (fingerRefX, fingerRefY)

5) Gaussian Low Pass Filter: A spatial low pass filter with the mask shown in Fig. 7 is applied to the image to smooth out sharp transitions in gray level and remove high-frequency noise in the image.

(h) Output after thinning process

Figure 5. Output image for each sub-block in the image pre-processing module

1/273 *
 1  4  7  4  1
 4 16 26 16  4
 7 26 41 26  7
 4 16 26 16  4
 1  4  7  4  1

Figure 7. Low pass filter convolution mask
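As a concrete illustration, the 5x5 Gaussian smoothing with the 1/273-scaled mask of Fig. 7 might be sketched as follows (plain Python on an integer image; this is our own sketch, not the authors' firmware, and borders are left unchanged):

```python
# 5x5 Gaussian mask from Fig. 7; the weights sum to 273, hence the 1/273 scale.
MASK = [
    [1,  4,  7,  4, 1],
    [4, 16, 26, 16, 4],
    [7, 26, 41, 26, 7],
    [4, 16, 26, 16, 4],
    [1,  4,  7,  4, 1],
]

def gaussian_5x5(img, h, w):
    """Convolve img with MASK/273; border pixels are left unchanged."""
    out = [row[:] for row in img]
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            acc = 0
            for dy in range(-2, 3):
                for dx in range(-2, 3):
                    acc += MASK[dy + 2][dx + 2] * img[y + dy][x + dx]
            out[y][x] = acc // 273  # integer division suits a fixed-point CPU
    return out
```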

Spatial low pass filtering is performed by convolving the input image with the convolution mask, as shown in Fig. 8. (Convolution is an operation in which the final pixel value is the weighted sum of the neighboring pixels.) Equation (2) is applied to each pixel in the image, where Z denotes the input image pixels, Z12 is the pixel on which the convolution is performed, Z0 to Z24 are the surrounding pixels, and W is the selected convolution mask shown in Fig. 7. The resulting image after Gaussian low pass filtering is shown in Fig. 5(e).

Z0  Z1  Z2  Z3  Z4       W0  W1  W2  W3  W4
Z5  Z6  Z7  Z8  Z9       W5  W6  W7  W8  W9
Z10 Z11 Z12 Z13 Z14  *   W10 W11 W12 W13 W14
Z15 Z16 Z17 Z18 Z19      W15 W16 W17 W18 W19
Z20 Z21 Z22 Z23 Z24      W20 W21 W22 W23 W24

Figure 8. Convolution of the image window Z with the mask W

Z12 = Σ_{i=0}^{24} W_i * Z_i .    (2)

6) Local Dynamic Thresholding (Binarization): Binarization converts the grayscale image into a bi-level representation: black pixels with value 0 and white pixels with value 255. Binarization extracts the vein pattern from the vein image. In this paper, a simple but effective local dynamic thresholding method is adopted. Let g(x,y) ∈ [0,255] be the intensity of the pixel at location (x,y) in a grayscale image. In local dynamic thresholding techniques, the aim is to calculate a threshold t(x,y) for each pixel such that the output pixel is set according to (3).

out(x,y) = 0 if g(x,y) ≤ t(x,y), 255 otherwise.    (3)

The work in [11] proposed a method to calculate the threshold using Equation (4), where s(x,y) is defined in (5).

t(x,y) = m(x,y) + k * s(x,y) .    (4)

s(x,y) = sqrt( (1/(w*w)) Σ_{i=x−w/2}^{x+w/2} Σ_{j=y−w/2}^{y+w/2} (m(x,y) − g(i,j))^2 ) .    (5)

For every pixel at location (x,y), t(x,y) is its local threshold, m(x,y) and s(x,y) are the mean and standard deviation of the pixel intensities in a w*w window centered on the pixel (x,y), and k is a correction coefficient. After thresholding, each pixel (x,y) is replaced with out(x,y). Clearly, local threshold determination using (4) requires high computational power, as it involves a number of arithmetic operations, including multiplication, division, power and square root, in the calculation of s(x,y) as defined in (5). To reduce this compute-intensive operation, we propose to determine t(x,y) using the simple equation given in (6).

t(x,y) = m(x,y) .    (6)

The window size w chosen in our system is 19. Fig. 5(f) shows the output image after binarization.

7) Binary Median Filter: The vein image after thresholding contains noise. This unwanted noise can be eliminated by applying a binary median filtering process to the image. In this filtering process, the center pixel of a symmetrical area (normally a square window) is replaced with the median of all the pixel values in that area. The median is found by first sorting all pixel values within the window centered at the pixel being considered. For example, for the 5*5 window illustrated in Fig. 9, the median value, which is the 13th element after sorting, replaces the center point P12. Thus, for any w*w window, the median value is located at the ((w*w + 1)/2)-th element after sorting.

P0  P1  P2  P3  P4
P5  P6  P7  P8  P9
P10 P11 P12 P13 P14
P15 P16 P17 P18 P19
P20 P21 P22 P23 P24

Figure 9. 5*5 processing window for median filter

The conventional method of median filtering, which involves sorting all pixel values in the w*w window, is slow. Since we are operating on a binary image, which consists only of pixel values 0 and 255, we can replace the sorting algorithm with a simple count operation. The algorithm examines all pixels in the w*w window and counts those equal to 0. If the total count is greater than (w*w − 1)/2, the output is assigned 0; otherwise it is assigned 255. In effect, the median filter for a binary image replaces the center pixel with the dominant pixel value in the w*w window. As a result, the time taken by the median filtering process is reduced by about 80% compared to the conventional method. The window size used in our system is 5x5 with 3 iterations. Fig. 5(g) shows the resultant image after application of the median filtering.

8) Thinning: To extract the skeleton image of the vein texture, which consists of only single-pixel-wide lines, the fast parallel algorithm for thinning digital patterns proposed by [12] is used in this work. A 3*3 window, as illustrated in Fig. 10, is used to check the surrounding pixels (P1 to P8) and decide whether to remove or keep pixel P0.
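The two pre-processing simplifications described above, the mean-only threshold of Equation (6) and the counting-based binary median, can be sketched together in plain Python. This is an illustrative sketch with naive windowed sums (a real implementation would keep running sums); function names are our own:

```python
def local_mean_threshold(img, h, w, win=19):
    """Binarize with t(x,y) = m(x,y), Eq. (6): 0 if g <= local mean, else 255."""
    r = win // 2
    out = [[255] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = count = 0
            for j in range(max(0, y - r), min(h, y + r + 1)):
                for i in range(max(0, x - r), min(w, x + r + 1)):
                    total += img[j][i]
                    count += 1
            if img[y][x] * count <= total:  # g(x,y) <= mean, without division
                out[y][x] = 0
    return out

def binary_median(img, h, w, win=5):
    """Counting-based binary median: center becomes the dominant 0/255 value."""
    r = win // 2
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            zeros = sum(1 for j in range(-r, r + 1) for i in range(-r, r + 1)
                        if img[y + j][x + i] == 0)
            out[y][x] = 0 if zeros > (win * win - 1) // 2 else 255
    return out
```

The comparison `img[y][x] * count <= total` rewrites g ≤ total/count without a division, matching the fixed-point constraint mentioned in the paper.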

The template stores the type as well as the x and y coordinates of each minutiae point. Fig. 11 illustrates the minutiae points extracted from the vein pattern.

P8 P1 P2
P7 P0 P3
P6 P5 P4

Figure 10. Labeling of the 9 pixels in a 3*3 window

Pixel P0 is removed if it meets the following thinning conditions. (B(P0) denotes the number of non-zero neighbors of P0, and A(P0) the number of 0-to-1 transitions in the ordered sequence P1, P2, ..., P8, P1 [12].)

Thinning conditions, first iteration:
i.   2 ≤ B(P0) ≤ 6
ii.  A(P0) = 1
iii. P1 * P3 * P5 = 0
iv.  P3 * P5 * P7 = 0

Thinning conditions, second iteration:
i.   2 ≤ B(P0) ≤ 6
ii.  A(P0) = 1
iii. P1 * P3 * P7 = 0
iv.  P1 * P5 * P7 = 0

Figure 11. Minutiae extracted from the vein pattern (cross for bifurcation point; dot for ending point)

Two iterations are involved to preserve the connectivity of the skeleton. The first two thinning conditions are the same for both iterations. Fig. 5(h) shows the output image after the thinning process.

C. Feature Extraction Module
The feature extraction module utilizes the minutiae features extracted from the vein patterns for recognition, as proposed by [13]. The minutiae points include bifurcation points and ending points. Similar to fingerprints, these feature points are used as a geometric representation of the shape of vein patterns. The most widely used method for minutiae feature extraction in fingerprint biometric systems is the Cross Number (CN) concept [14, 15]. The cross number is defined as the number of transitions from 0 to 1 (and vice versa) among the pixels surrounding P0, i.e. P1 to P8 in the 3*3 window illustrated in Fig. 10. Mathematically, the cross number can be expressed by the following equation:

D. Matching Module
In other biometric systems, such as face recognition, the Hausdorff Distance (HD) is a well-known technique for measuring the dissimilarity of two images [16]. Given two sets of points A = {a1, ..., am} and B = {b1, ..., bn}, the Hausdorff Distance is defined as

H(A,B) = max( h(A,B), h(B,A) ) .    (8)

where

h(A,B) = max_{a ∈ A} min_{b ∈ B} || a − b || .    (9)

CN(P0) = Σ_{i=1}^{8} | P_i − P_{i+1} | , with P_9 = P_1 .    (7)

TABLE I
PROPERTIES OF CROSS NUMBER

CN      Property
0, 1    Isolated point
2, 3    Ridge ending point
4, 5    Connecting point
6, 7    Bifurcation point
8       Crossing point
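The cross-number test of Equation (7) on the thinned (0/1) skeleton can be sketched as follows (Python; P1 to P8 are taken in a closed clockwise loop per Fig. 10, and the classification follows Table I; this is our own sketch, not the paper's firmware):

```python
def crossing_number(img, y, x):
    """Sum of |P_i - P_{i+1}| transitions around pixel P0, per Eq. (7)."""
    # Neighbors P1..P8 as a closed clockwise loop around (y, x), as in Fig. 10.
    loop = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]
    p = [img[y + dy][x + dx] for dy, dx in loop]
    return sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8))

def classify_minutiae(img, h, w):
    """Return (y, x, kind) for skeleton pixels classified per Table I."""
    points = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if img[y][x] != 1:
                continue
            cn = crossing_number(img, y, x)
            if cn in (2, 3):
                points.append((y, x, "ending"))
            elif cn in (6, 7):
                points.append((y, x, "bifurcation"))
    return points
```

For a binary skeleton, an endpoint has a single neighbor and therefore CN = 2, while a bifurcation with three branches gives CN = 6, consistent with Table I.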

h(A,B) is called the directed Hausdorff Distance from A to B. It measures the distance from each point in A to its nearest neighbor in B and identifies the point that is farthest from any point of B. Thus, the Hausdorff Distance H(A,B) measures the degree of mismatch between two sets of points, as it reflects the point of A that is farthest from any point of B and vice versa. A smaller value of H(A,B) indicates better similarity between the two point sets. The directed Hausdorff Distance in (9) is very sensitive to outlier points. The Modified Hausdorff Distance (MHD) introduced by [17] overcomes this problem. In MHD, the directed Hausdorff Distance is defined as

h(A,B) = (1/m) Σ_{a ∈ A} min_{b ∈ B} || a − b || .    (10)
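The modified directed distance of Equation (10), combined with the symmetric score of Equation (8), can be sketched as (Python, Euclidean distance on minutiae coordinates; a sketch of the technique, not the authors' implementation):

```python
import math

def directed_mhd(A, B):
    """Modified directed Hausdorff distance, Eq. (10):
    mean nearest-neighbor distance from points of A to set B."""
    return sum(min(math.dist(a, b) for b in B) for a in A) / len(A)

def mhd(A, B):
    """Symmetric matching score, Eq. (8) with the modified directed distance."""
    return max(directed_mhd(A, B), directed_mhd(B, A))
```

A smaller score indicates a better match; verification then reduces to comparing the score against a decision threshold.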

By taking the average of all distances from points in A to their nearest neighbors in B, rather than taking the farthest point as in (9), MHD decreases the impact of outlier points.

III. EXPERIMENTAL RESULTS

Fig. 5 illustrates the output image of each sub-module in the image pre-processing module, while Fig. 11 illustrates the minutiae points extracted from the finger vein patterns.

By using the properties of the cross number of a pixel shown in Table I, pixels can be classified as ridge ending points or bifurcation points, from which a template is then generated.


The vein patterns have been extracted clearly, as shown in Fig. 5(h). Experiments were carried out to evaluate the performance of the system in terms of speed and accuracy. A preliminary experiment on our finger vein database, consisting of 100 finger vein images from 20 different fingers (5 samples per finger), shows a promising result with an Equal Error Rate (EER) of 1.0037%. Table II gives the execution time in seconds (s), clock cycles (cc), and percentage for each image processing block running on the RTOS Nios2-Linux with a 100 MHz FPGA prototyping board. These timing results are averages over the processing of all finger vein images in our database.
TABLE II
EXECUTION TIME IN SECONDS AND CLOCK CYCLES FOR EACH IMAGE PROCESSING BLOCK IN NIOS2-LINUX

Process                                Second (s)   Clock Cycle (10^6 cc)   Percentage (%)
Color to grayscale conversion          0.51         50.96                   2.61
Grayscale median filter (7x7)          3.35         334.86                  17.14
Finger region extraction               8.20         820.32                  41.99
Resize and alignment                   0.68         68.45                   3.50
Gaussian low pass filter (5x5)         0.91         90.94                   4.65
Local dynamic thresholding (19x19)     1.67         166.81                  8.54
Binary median filter (5x5, 3)          1.46         145.62                  7.45
Thinning                               2.54         253.93                  13.00
Minutiae extraction                    0.22         21.78                   1.11
TOTAL                                  19.54        1953.67                 100.00

IV. CONCLUSIONS

Based on the results, it can be concluded that a finger vein authentication system targeted for embedded systems, implemented on an Altera Nios II FPGA prototyping board running the Nios2-Linux RTOS, has been successfully developed. The accuracy of the proposed system is encouraging, with an EER of 1.0037% at a threshold of 9.97. Based on the experimental results, the performance bottleneck and the opportunities for optimizing the system have been identified. The system could achieve higher accuracy by adding a noise removal step after the binary median filter, or by smoothing the thinned image after the thinning process. However, adding these noise removal and smoothing blocks would increase the computational cost and could lead to unacceptable processing speed. Compute-intensive algorithms such as the grayscale median filter, the Canny edge detection in finger region extraction, and the thinning process are time consuming when run on the embedded system, which is limited by the capabilities of the 100 MHz fixed-point processor. This is one of the main challenges of implementing the entire image pre-processing module, as well as the minutiae feature extraction and matching modules, in the embedded RTOS Nios2-Linux. These are candidate operations that need to be accelerated in hardware, and this is the scope of the next version of our design. In future work, the grayscale median filter, finger region detection and thinning process, which consume most of the processing time, will be accelerated in hardware.

ACKNOWLEDGMENT

We thank our teammates Jasmine Hau, Rabia Bakhteri and Vishnu P. Nambiar for their helpful comments and advice on this research.

REFERENCES

[1] A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, pp. 4-20, 2004.
[2] U. Uludag, S. Pankanti, S. Prabhakar, and A. K. Jain, "Biometric cryptosystems: issues and challenges," Proceedings of the IEEE, vol. 92, pp. 948-960, 2004.
[3] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, pp. 71-86, 1991.
[4] A. K. Jain, L. Hong, and R. Bolle, "On-line fingerprint verification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 302-314, 1997.
[5] S. Lim, K. Lee, O. Byeon, and T. Kim, "Efficient iris recognition through improvement of feature vector and classifier," ETRI Journal, vol. 23, no. 2, pp. 61-70, 2001.
[6] J. A. Markowitz, "Voice biometrics," Communications of the ACM, vol. 43, pp. 66-73, 2000.
[7] S. L. Yang, K. Sakiyama, and I. M. Verbauwhede, "A compact and efficient fingerprint verification system for secure embedded devices," in Conference Record of the Thirty-Seventh Asilomar Conference on Signals, Systems and Computers, 2003, pp. 2058-2062.
[8] N. Aaraj, S. Ravi, S. Raghunathan, and N. K. Jha, "Architectures for efficient face authentication in embedded systems," in Proceedings of Design, Automation and Test in Europe, 2006, p. 6.
[9] D. Mulyono and H. S. Jinn, "A study of finger vein biometric for personal identification," in International Symposium on Biometrics and Security Technologies, 2008, pp. 1-8.
[10] L. Y. Wang and G. Leedham, "Near- and far-infrared imaging for vein pattern biometrics," in IEEE International Conference on Video and Signal Based Surveillance, 2006, pp. 52-52.
[11] Y. H. Ding, D. Y. Zhuang, and K. J. Wang, "A study of hand vein recognition method," in IEEE International Conference on Mechatronics and Automation, 2005, pp. 2106-2110.
[12] T. Y. Zhang and C. Y. Suen, "A fast parallel algorithm for thinning digital patterns," Communications of the ACM, vol. 27, pp. 236-239, 1984.
[13] L. Y. Wang, G. Leedham, and D. S. Y. Cho, "Minutiae feature analysis for infrared hand vein pattern biometrics," Pattern Recognition, vol. 41, pp. 920-929, 2008.
[14] R. Thai, "Fingerprint image enhancement and minutiae extraction," School of Computer Science and Software Engineering, The University of Western Australia, 2009.
[15] X. Sun and Z. M. Ai, "Automatic feature extraction and recognition of fingerprint images," in Proceedings of ICSP, 1996.
[16] O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz, "Robust face detection using the Hausdorff distance," in Proceedings of the Third International Conference on Audio- and Video-Based Biometric Person Authentication, Lecture Notes in Computer Science, vol. 2091, Springer-Verlag, Halmstad, Sweden, June 2001, pp. 90-95.
[17] M. P. Dubuisson and A. K. Jain, "A modified Hausdorff distance for object matching," in Proceedings of the 12th IAPR International Conference on Pattern Recognition, vol. 1, Jerusalem, Israel, October 1994, pp. 566-568.

