
Fingerprint identification and recognition are considered a popular technique in many security and law-enforcement applications. The aim of this paper is to present a proposed authentication system based on the fingerprint as a biometric type, capable of recognizing persons with a high level of confidence and a minimum error rate. The designed system is implemented using Matlab 2015b and tested on a set of fingerprint images gathered from 90 different persons, with 8 samples each, using Futronic's FS80 USB2.0 Fingerprint Scanner and the ftrScanApiEx.exe program. An efficient image-enhancement algorithm is used to improve the clarity (contrast) of the ridge structures in a fingerprint. After that, the core point and candidate core points are extracted from each fingerprint image, and a feature vector is extracted for each point using the filterbank_based algorithm. For matching, the KNN (K-nearest neighbor) technique is used; its matching results were calculated and compared with other papers using some performance-evaluation factors. A threshold has been proposed and used to reject fingerprint images that do not belong to the database, and the experimental results show that the KNN technique achieves a recognition rate of 93.9683% at a threshold of 70%.

Establishing the identity of a person is a critical task in any identity management system. Surrogate representations of identity such as passwords and ID cards are not sufficient for reliable identity determination because they can be easily misplaced, shared, or stolen. Biometric recognition is the science of establishing the identity of a person using his/her anatomical and behavioral traits. Commonly used biometric traits include fingerprint, face, iris, hand geometry, voice, palm print, and handwritten signatures [

In the literature, various approaches have been proposed by researchers to provide the best recognition rate. For example, Jain, in 2001 [

The proposed system flowchart provides a definition of the system modules and sub-modules.

The task of this module is to enroll the fingerprint of the user into the system database using Futronic's FS80 USB2.0 Fingerprint Scanner and the ftrScanApiEx.exe program. In this work, fingerprint images have been collected from 90 persons, with 8 samples per person, so in total 720 fingerprint images have been captured. The FP images have been captured from persons of different ages, such as teenagers, college students, middle-aged and old persons (see

The task of this module is to prepare the FP image for the feature-extraction module and to enhance the FP quality in order to remove any noise, so that it is compatible with the system performance. This module includes enhancement using Fourier-domain analysis filtering, and segmentation.

1) Fourier domain analysis filtering: we enhance the FP image as in [

In the Fourier-domain analysis, a local region of the FP image can be modeled as a surface wave according to Equation (1):

$$ i(x, y) = A \cos\{ 2\pi f (x \cos\phi + y \sin\phi) \} \tag{1} $$

This oriented wave can be characterized completely by its orientation $\phi$ and frequency $f$. The Fourier spectrum and its inverse are obtained using Equations (2) and (3):

$$ F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, e^{-j 2\pi (ux/M + vy/N)} \tag{2} $$

$$ f(x, y) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u, v)\, e^{j 2\pi (ux/M + vy/N)} \tag{3} $$

Directional field estimation is the process used to find the orientation (angles) of the fingerprint ridges. Let the Fourier coefficients be represented in polar coordinates as $F(r, \phi)$. We define a probability density function $f(r, \phi)$ and the marginal density functions $f(\phi)$ and $f(r)$:

$$ f(r, \phi) = \frac{|F(r, \phi)|^2}{\iint |F(r, \phi)|^2 \, d\phi \, dr} \tag{4} $$

$$ f(r) = \int f(r, \phi) \, d\phi \tag{5} $$

$$ f(\phi) = \int f(r, \phi) \, dr \tag{6} $$

The orientation $\phi$ is assumed to be a random variable with probability density function $f(\phi)$. The expected value of the orientation may then be obtained by Equation (7):

$$ E\{\phi\} = \int \phi \cdot f(\phi) \, d\phi \tag{7} $$

For the ridge frequency estimation, the image frequency represents the local frequency of the ridges in a fingerprint [

$$ E\{r\} = \int r \cdot f(r) \, dr \tag{8} $$

As for the energy map, the presence of ridges contributes to the energy content of the Fourier spectrum. The energy content of a block may be obtained through Equation (9). We define an energy image E(x, y) in which each value indicates the energy content of the corresponding block. The fingerprint region may be differentiated from the background by thresholding the energy image, so the FP may easily be segmented based on the energy map. We take the logarithm of the energy values to obtain a linear scale; the same technique is used to visually represent a frequency spectrum [

$$ E = \sum_{u} \sum_{v} |F(u, v)|^2 \tag{9} $$

Finally, in the enhancement, the image is divided into 12 × 12 overlapping blocks with a 6-pixel overlap between adjacent blocks. Each block is multiplied by a raised-cosine window in order to eliminate artifacts due to rectangular windowing, and is then filtered in the frequency domain by multiplying it with an orientation- and frequency-selective filter whose parameters are based on the estimated local ridge frequency and orientation. Block-wise approaches have problems around the singularities, where the direction of the ridges cannot be approximated by a single value; the bandwidth of the directional filter has to be increased around these regions. This is achieved in [

The filter H is separable in angle and frequency and is obtained by multiplying separate frequency and angular band-pass filters of order n. The filters are defined in [

$$ H(r, \phi) = H_r(r) \cdot H_\phi(\phi) \tag{10} $$

$$ H_r(r) = \frac{(r\, r_{BW})^{2n}}{(r\, r_{BW})^{2n} + (r^2 - r_{BW}^2)^{2n}} \tag{11} $$

$$ H_\phi(\phi) = \begin{cases} \cos^2\!\left( \dfrac{\pi (\phi - \phi_c)}{2 \phi_{BW}} \right) & \text{if } |\phi - \phi_c| \le \phi_{BW} \\ 0 & \text{otherwise} \end{cases} \tag{12} $$

2) Segmentation: in this operation the image is segmented and the background is separated from the fingerprint. This can be performed using a simple block-wise variance approach, since the background is usually characterized by a small variance. The image is first binary-closed (Matlab command imclose) and then eroded (Matlab command imerode), in order to avoid holes in the fingerprint region as well as undesired effects at the boundary between fingerprint and background.
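The block-wise variance rule can be sketched in a few lines (a Python sketch of the idea only; the paper's implementation uses Matlab, and the block size and variance threshold here are illustrative assumptions):

```python
import numpy as np

def segment_fingerprint(img, block=12, var_thresh=0.01):
    """Block-wise variance segmentation: mark a block as foreground
    (fingerprint) when its grey-level variance exceeds a threshold,
    since the background is usually characterized by a small variance."""
    h, w = img.shape
    img = img.astype(float) / 255.0          # normalize grey levels to [0, 1]
    mask = np.zeros((h, w), dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            if img[i:i+block, j:j+block].var() > var_thresh:
                mask[i:i+block, j:j+block] = True
    return mask
```

In the paper the resulting mask is additionally closed and eroded (imclose, imerode) to remove holes and boundary effects; that morphological post-processing is omitted in this sketch.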

For each point from the core and candidate core points, the filterbank_based algorithm will be implemented [

a) Core Point and Candidate Core Points Detection: The core point is detected through a number of steps:

1) Estimate the orientation from the enhanced FP image as described above.

2) The orientation field is used to obtain a logical matrix in which pixel (i, j) is set to 1 if the angle of the orientation is ≤ π/2.

3) After this computation, the complex filtering output [

where g is a Gaussian defined as $g(x, y) = \exp\left\{ -\dfrac{x^2 + y^2}{2\sigma^2} \right\}$ [

used as the window because the Gaussian is the only function that is orientation-isotropic (in polar coordinates, it is a function of the radius only) and separable [

In this work, a symmetry complex filter is used to detect the core point:

$$ h(x, y) = (x + iy)\, g(x, y) = r e^{i\varphi} g(x, y) \tag{13} $$

We identify the candidate core points by their special symmetry properties. Therefore, in order to detect the candidate core points, a complex filter designed for rotational-symmetry extraction is applied. After calculating the complex filtering output of the enhanced fingerprint image, the maximum value of the complex filtering output over the pixels where the logical image is set to one is found.
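To make this step concrete, here is a Python sketch (not the authors' Matlab code) of applying the first-order symmetry filter $h(x, y) = (x + iy)g(x, y)$ to the doubled-angle orientation image $z = e^{i2\theta}$; the σ value and filter radius are illustrative:

```python
import numpy as np

def core_point_response(theta, sigma=3.0):
    """Magnitude of the complex filtering output of h(x,y) = (x+iy)g(x,y)
    applied to z = exp(i*2*theta); a core-point candidate sits where the
    magnitude is maximal."""
    r = int(3 * sigma)
    yy, xx = np.mgrid[-r:r+1, -r:r+1]
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))    # Gaussian window
    h = (xx + 1j * yy) * g                           # symmetry filter
    z = np.exp(2j * theta)                           # doubled-angle image
    H, W = theta.shape
    resp = np.zeros((H, W))
    for i in range(r, H - r):                        # naive valid convolution
        for j in range(r, W - r):
            resp[i, j] = abs(np.sum(z[i-r:i+r+1, j-r:j+r+1] * np.conj(h)))
    return resp
```

On a synthetic core-like orientation field, the response peaks at the core, which is how the maximum-search in the text locates the candidate points.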

4) Repeat steps 2 - 3 for a wide set of angles (…, π/2 − 3α, π/2 − 2α, π/2 − α, π/2, π/2 + α, π/2 + 2α, π/2 + 3α, …, where α is an arbitrary angle step). Each time, a point is determined (note: each of them will be a candidate for fingerprint matching).

5) Subdivide all the points found in step 4 into N subsets of points that are close to each other. Each subset contains a certain number of candidates, and only subsets with at least 3 candidates are considered. Among these, the subset with the greatest average x-coordinate is taken, and within it the core point with the greatest x-coordinate is selected. This is a good approximation for standard fingerprint images. The number of core and candidate core points extracted in this way differs from one FP image to another.

b) Cropping

A square region of a certain size around the calculated point is extracted in this step. This square contains the part that will be the input of the Gabor filter bank. The input image is padded with a proper border of zeros in order to avoid any size error.

c) Sectorization

The cropped fingerprint image is sectorized into 4 concentric bands centered on the pseudo-center point. Each band is 20 pixels wide, with a center hole of radius 12 pixels. Each band is divided into 16 sectors; the center band is ignored as it has a very small area. Each sector thus formed will capture information corresponding to each Gabor filter. See
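A possible way to assign pixels to sectors, given the geometry stated above (12-pixel center hole, four 20-pixel bands, 16 sectors per band), is the following Python sketch; the function name and return convention are my own:

```python
import math

def sector_index(x, y, cx, cy, hole=12, band_width=20, n_bands=4, n_sectors=16):
    """Map pixel (x, y) to one of n_bands*n_sectors sectors centered on
    (cx, cy); returns None inside the center hole or outside the bands."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r < hole or r >= hole + n_bands * band_width:
        return None
    band = int((r - hole) // band_width)             # 0..3, innermost first
    angle = math.atan2(dy, dx) % (2 * math.pi)       # 0..2*pi
    sector = int(angle / (2 * math.pi / n_sectors))  # 0..15 within the band
    return band * n_sectors + sector
```

This yields 64 sector indices (16 × 4), matching the sector count used for feature extraction later in the paper.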

d) Normalization

Each sector is individually normalized to a constant mean and variance to eliminate variations across fingerprint patterns. Normalization of each sector is done as in [

$$ N(i, j) = \begin{cases} M_0 + \sqrt{\dfrac{V_0 \left( I(i, j) - M \right)^2}{V}} & \text{if } I(i, j) > M \\[2ex] M_0 - \sqrt{\dfrac{V_0 \left( I(i, j) - M \right)^2}{V}} & \text{otherwise} \end{cases} \tag{14} $$

where $I(i, j)$ is the grey level at pixel (i, j), $N(i, j)$ is the normalized grey level at pixel (i, j), M and V are the estimated mean and variance of the sector, respectively, and $M_0$ and $V_0$ are the desired mean and variance, respectively.
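Equation (14) can be sketched in Python as follows (the desired mean and variance defaults of 100 are illustrative, a common choice in the fingerprint literature, not necessarily the paper's values):

```python
import numpy as np

def normalize_sector(sector, m0=100.0, v0=100.0):
    """Per-sector normalization to desired mean m0 and variance v0 (Eq. (14))."""
    m, v = sector.mean(), sector.var()
    if v == 0:                                   # flat sector: map to m0
        return np.full(sector.shape, m0)
    dev = np.sqrt(v0 * (sector - m) ** 2 / v)
    return np.where(sector > m, m0 + dev, m0 - dev)
```

After this mapping the sector has mean m0 and variance v0 exactly, which removes grey-level offset and contrast variation between impressions.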

FP Number | Number of core and candidate core points
---|---
1 | 10
2 | 5
3 | 8
4 | 12
5 | 12
6 | 10
7 | 12
8 | 9
9 | 16
10 | 11

e) Gabor Filters Bank

The Gabor filter captures both local orientation and frequency information from a fingerprint image. This filter is well suited for extracting texture information from images because, by tuning a Gabor filter to a specific frequency and direction, the local frequency and orientation information can be obtained.

The definition of the GF in the spatial domain is given as follows [

$$ G(x, y; f, \theta) = \exp\left\{ -\frac{1}{2} \left[ \frac{x'^2}{\sigma_x^2} + \frac{y'^2}{\sigma_y^2} \right] \right\} \cos(2\pi f x') \tag{15} $$

$$ x' = x \cos\theta + y \sin\theta $$

$$ y' = y \cos\theta - x \sin\theta $$

where θ is the orientation of the GF, f is the frequency of the cosine wave, $\sigma_x$ and $\sigma_y$ are the standard deviations of the Gaussian envelope along the x and y axes, respectively, and $x'$ and $y'$ define the x and y axes of the filter coordinate frame, respectively.

The normalized image is passed through eight Gabor filters. Each filter is a 33 × 33 kernel tuned to one of eight angles (0, π/8, π/4, 3π/8, π/2, 5π/8, 3π/4 and 7π/8, i.e., θ = {0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°}) and is convolved with the fingerprint image.

4 concentric bands are considered around the detected reference point for feature extraction. Each band is 20 pixels wide and segmented into 16 sectors; thus, we have a total of 16 × 4 = 64 sectors. Each sector image is input into the eight-filter Gabor bank, so in total 512 (64 × 8) filtered sector images will be extracted for each core point in the FP image.
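The eight-filter bank of Equation (15) can be generated as in this Python sketch (the frequency f and standard deviations are illustrative parameters, not the paper's tuned values):

```python
import numpy as np

def gabor_kernel(theta, f=0.1, sigma_x=4.0, sigma_y=4.0, size=33):
    """Even-symmetric Gabor kernel of Eq. (15): Gaussian envelope along the
    rotated axes times a cosine wave of frequency f along x'."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # x' axis
    yr = y * np.cos(theta) - x * np.sin(theta)   # y' axis
    env = np.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2))
    return env * np.cos(2 * np.pi * f * xr)

# the bank: one 33x33 kernel per angle 0, pi/8, ..., 7*pi/8
bank = [gabor_kernel(k * np.pi / 8) for k in range(8)]
```

Each kernel in the bank is convolved with the cropped, normalized image to produce one of the eight filtered images.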

f) Variance Calculation

The variance of all pixel values in each sector is calculated after obtaining the 512 filtered images; it gives the concentration of fingerprint ridges going in each direction in that part of the fingerprint. $V_{i\theta}$ is the deviation of the sector's pixel values from their mean, calculated using the equation [

$$ V_{i\theta} = \sum_{K_i} \left( F_{i\theta}(x, y) - P_{i\theta} \right)^2 \tag{16} $$

$F_{i\theta}$ are the pixel values in the i-th sector after a Gabor filter with angle θ has been applied, $P_{i\theta}$ is the mean of those pixel values, and $K_i$ is the number of pixels in the i-th sector. A high variance in a sector means that the ridges in that region were going in the same direction as the Gabor filter; a low variance indicates that they were not, so the filtering smoothed them out. The resulting 512 variance values (8 × 64) form the feature vector of the fingerprint scan; therefore, each feature vector will have 512 values.
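Given one filtered image and a per-pixel map of sector indices, Equation (16) can be computed per sector as in this Python sketch (function name is mine):

```python
import numpy as np

def sector_features(filtered, sector_ids, n_sectors=64):
    """Eq. (16): sum of squared deviations from the sector mean, computed
    for every sector of one Gabor-filtered image."""
    feats = np.zeros(n_sectors)
    for s in range(n_sectors):
        vals = filtered[sector_ids == s]
        if vals.size:
            feats[s] = np.sum((vals - vals.mean()) ** 2)
    return feats
```

Concatenating the 64 values over the 8 filtered images yields the 512-dimensional feature vector described in the text.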

K-nearest neighbor classification is the simplest technique in machine learning: if you have a labeled data set {x_i} and you want to classify some new item y, find the k elements in your data set that are closest to y, and then somehow average their labels to get the label of y [

| 317.755472110611 | 85.0608871268424 |
| 347.876625885873 | 144.168111602665 |
| 208.831596658908 | 198.447152449966 |
| 119.702662365649 | 267.354621591008 |
| 67.6820626488398 | 285.626540499418 |
| 58.8557991705111 | 270.602200430766 |
| 32.0457471125907 | 118.844510888489 |
| 15.3282659340089 | 266.967014183922 |

The k-NN prediction is computed using the features assembled in the matrices, in a two-step process. In the first step, the distances between the features in the new data set (test set) and the features in the previous data set (training set) are calculated. In the second step, the k nearest neighbors, i.e., those with the k smallest distances in the distance set, are chosen [

To find the k-NN based on the Euclidean distance, the following equation is used [

$$ d(x, y) = \sqrt{ \sum_{j=1}^{N} W_j^2 (x_j - y_j)^2 } \tag{17} $$

where $d(x, y)$ is the distance between two scenarios x and y, each composed of N features, $x = \{x_1, x_2, \cdots, x_N\}$ and $y = \{y_1, y_2, \cdots, y_N\}$; N is the length of the data; $W_j$ is the weight value of the dependent-variable members of the k-NN (kernel function); and j is the order of the nearest neighbors by their distance from the current condition, with the nearest having the lowest order ($j = 1, \cdots, K$) [
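Equation (17) and the nearest-neighbor search can be sketched as follows (a Python sketch; uniform weights $W_j = 1$ are assumed when none are given, and the function names are mine):

```python
import numpy as np

def weighted_euclidean(x, y, w=None):
    """Weighted Euclidean distance of Eq. (17); uniform weights by default."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.ones_like(x) if w is None else np.asarray(w, float)
    return float(np.sqrt(np.sum(w**2 * (x - y)**2)))

def nearest_person(query, train_vecs, train_labels):
    """Return the label of the stored feature vector nearest to the query."""
    dists = [weighted_euclidean(query, t) for t in train_vecs]
    return train_labels[int(np.argmin(dists))]
```

In the system described below, this 1-NN lookup is performed once per input feature vector of the test fingerprint.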

The training and testing of the FP images using the KNN neural network will be explained in detail:

a) Training Phase

In the training phase, the core point and candidate core points are extracted from the FP image. For each point a feature vector is extracted, so an FP image in the training phase will have a number of feature vectors depending on the number of points that the FP image has. Therefore, each FP image will have a different number of feature vectors.

One image is trained for each person; this process is repeated for all 90 persons stored in the database, and the whole feature data set is stored outside the KNN neural network.

b) Testing Phase

The same process performed for the training FP images is done in the testing phase for the test FP image: the core point and candidate core points are extracted from the test FP image, and for each point a feature vector is extracted, so the FP image in the testing phase will have a number of feature vectors depending on the number of points it has. Therefore, each FP image will have a different number of feature vectors.

The second step is the calculation of the (Euclidean) distance between each of the N input vectors and all the vectors of all the FP images in the database; the minimum distance is found and the person it belongs to is stored. This step is repeated for all N input vectors, yielding N suggested persons; the most frequently suggested person is the final identification result.

There are 8 images for each person; one image has been used for training and 7 images for testing, and this is repeated for all 90 persons.

A threshold-selection process has been proposed. After a person is suggested for each input vector (repeated for all N input vectors), we calculate a percentage, called the matching score, by dividing the number of votes for the most frequently suggested person by the total number of suggestions (which equals the number N of input vectors of the FP image) and multiplying by 100%:

$$ \text{Score} = \frac{\text{number of votes for the most repeated person}}{\text{total number of suggested persons}} \times 100\% $$

If the test (unknown) input image has a score larger than a specified threshold, the image is accepted; otherwise, it is rejected.
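The voting and score/threshold rule described above can be sketched as (a Python sketch; the function name and return convention are mine):

```python
from collections import Counter

def identify(suggested_persons, threshold=0.70):
    """Majority vote over the per-vector suggestions; reject (return None)
    when the matching score falls below the threshold."""
    votes = Counter(suggested_persons)
    person, count = votes.most_common(1)[0]
    score = count / len(suggested_persons)      # the matching score, 0..1
    return (person if score >= threshold else None), score
```

For example, four suggestions of which three name the same person give a score of 0.75 and an accept at the 70% threshold, while four different suggestions give 0.25 and a reject.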

For example, the threshold can be chosen so high that no impostor score exceeds the limit; as a result, no patterns are falsely accepted by the system, but client patterns with scores lower than the highest impostor score are falsely rejected. Conversely, the threshold can be chosen so low that no client patterns are falsely rejected, but then some impostor patterns are falsely accepted. If the threshold is chosen somewhere between those two points, both false rejections and false acceptances occur [

The recognition rate (RR) has been extracted for a range of threshold values:

$$ \text{RR} = \frac{\text{number of rightly accepted FP images}}{\text{total number of FP images}} \times 100 $$

The fraction of the number of rejected client patterns divided by the total number of client patterns is called the False Rejection Rate (FRR) [

$$ \text{FAR} = \frac{\text{number of falsely accepted FP images}}{\text{total number of FP images}} \times 100 $$

$$ \text{FRR} = \frac{\text{number of falsely rejected FP images}}{\text{total number of FP images}} \times 100 $$
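As a consistency check of the three rates (a Python sketch; the counts 592, 8 and 30 are inferred from the table percentages at the 70% threshold, assuming 630 test images, i.e., 90 persons × 7 test samples):

```python
def rates(n_right_accept, n_false_accept, n_false_reject, n_total):
    """RR, FAR and FRR as percentages of all FP images."""
    return (100.0 * n_right_accept / n_total,
            100.0 * n_false_accept / n_total,
            100.0 * n_false_reject / n_total)

# hypothetical counts reconstructed from the 70%-threshold row of the table
rr, far, frr = rates(592, 8, 30, 630)
```

With these counts the three percentages reproduce the reported 93.9683%, 1.2698% and 4.7619%, and the counts sum to the 630 test images.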

Also

Threshold % | Recognition Rate % | FAR % | FRR %
---|---|---|---
0 | 98.0952 | 1.9047 | 0
5 | 98.0952 | 1.9047 | 0
10 | 98.0952 | 1.9047 | 0
15 | 98.0952 | 1.9047 | 0
20 | 98.0952 | 1.9047 | 0
25 | 98.0952 | 1.9047 | 0
30 | 98.0952 | 1.7460 | 0
35 | 97.9365 | 1.7460 | 0.3175
40 | 97.9365 | 1.7460 | 0.3175
45 | 97.7778 | 1.7460 | 0.4762
50 | 97.6190 | 1.7460 | 0.6350
55 | 96.6667 | 1.5873 | 1.7460
60 | 96.3492 | 1.4285 | 2.2223
65 | 95.0794 | 1.2698 | 3.6508
70 | 93.9683 | 1.2698 | 4.7619
75 | 90.4762 | 0.9523 | 8.5715
80 | 85.5556 | 0.7936 | 13.6508
85 | 76.0317 | 0.7936 | 23.1747
90 | 67.1429 | 0.4761 | 32.3810
95 | 55.2381 | 0.1587 | 44.6032
100 | 55.2381 | 0.1587 | 44.6032

This paper has presented the design and implementation of a fingerprint recognition system using the filterbank_based algorithm over a number of core and candidate core points in the feature-extraction step and the KNN matching technique in the matching step, together with a proposed threshold-selection technique. During the implementation of the case studies, a number of conclusions have been drawn based on the practical results obtained from the implemented system; the most important ones are the following:

1) Taking 8 images for each of 90 real persons of different ages, and rotating the fingerprint images as much as possible, means that the final results are more realistic and applicable.

2) Including image enhancement in the fingerprint identification system improves the quality of the input fingerprint image, reduces the extraction of false feature vectors and minimizes matching errors.

3) The core point and candidate core points extraction algorithm is a good algorithm and an appropriate base for the feature-extraction algorithm.

4) The feature-extraction algorithm based on the filterbank_based algorithm produces feature vectors that discriminate well among the fingerprints of different persons.

5) The KNN neural network provides appropriate matching results, and the 70% threshold value of the threshold technique provides good results for FP images belonging to the database (90 persons, 8 samples each): a 93.9683% recognition rate, 1.2698% FAR and 4.7619% FRR.

Dakhil, I.G. and Ibrahim, A.A. (2018) Design and Implementation of Fingerprint Identification System Based on KNN Neural Network. Journal of Computer and Communications, 6, 1-18. https://doi.org/10.4236/jcc.2018.63001