Scale Invariant Feature Transform Based Fingerprint Corepoint Detection

The accurate and reliable detection of singular points (core and delta) is very important for the classification and matching of fingerprints. This paper presents a new approach for core point detection based on the scale invariant feature transform (SIFT). Firstly, SIFT points are extracted; then reliability and ridge frequency criteria are applied to reduce the candidate points required to make a decision on the core point. Finally, a suitable mask is applied to detect an accurate core point. Experiments on the FVC2002 and FVC2004 databases show that our approach locates a unique reference point with high accuracy. Results of our approach are compared with those of existing methods in terms of the accuracy of core point detection.


Keywords: Biometrics, fingerprint, core point, scale invariant feature transform

Biometric recognition refers to the use of distinctive physiological and behavioral characteristics, called biometric identifiers, for the authentication of individuals1. Fingerprint recognition has been widely used in both forensic and civilian applications. Compared with other biometric features, fingerprint-based biometrics is the most proven technique. Fingerprints are the oldest and most widely used biometric trait because of their universality and distinctiveness. With the increase in the number of commercial systems based on fingerprints, new features and algorithms are being developed.


The pattern of valleys and ridges constitutes the fingerprint image. Analyzing this pattern at different levels reveals different types of global and local features. The important singularities are the core and delta, which are global features. While the core is usually defined as a point on the innermost ridge, the delta is known as the centre point where three different ridge flows meet. The core and delta are landmark points whose locations are consistent across different impressions of the same user; therefore, their positions can be used as references to align the prints. The singular points provide important information used for fingerprint alignment, matching, and classification. Consistent extraction of these features is crucial for fingerprint recognition. The region around the core point contains the maximum unique information in a fingerprint, adding to its importance.

Many approaches have been investigated for accurately determining the location of singular points. A practical method based on the Poincare index was proposed by Kawagoe and Tojo2. It depends upon the observation that the singularities termed loop, whorl, and delta correspond to Poincare index values of 180°, 360°, and -180°, respectively. Another Poincare-based method for locating singular points was proposed by Bazen and Gerez3. However, in noisy and low quality images, the Poincare method detects false singularities. Karu and Jain4 iteratively smooth the ridge orientation through averaging until a valid number of singularities is detected by the Poincare index.


A majority of the existing techniques try to locate the core point by making use of the ridge orientation of a fingerprint. Srinivasan and Murthy5 have used the local histogram of the orientation image to extract the singularities; their method is able to discriminate between the loop and delta singularities. Koo and Kot6 employ a multi-resolution approach to determine the singularities with single-pixel accuracy. The approach of Nilsson and Bigun7 is based on complex filtering: singular points are extracted from the complex ridge orientation field estimated from the global structure of a fingerprint, and complex filters, applied to the orientation field at multiple resolution scales, are used to detect the symmetry and the type of symmetry. In Ohtsuka8, et al., the direction of curvature is used for coarse core point detection, and the geometry of region (GR) technique is used for fine detection through candidate analysis with an extended relational graph; both the local and global features of the ridge orientation field are extracted to achieve reliable extraction of the core and delta. Zhou9, et al. have made use of the difference of orientation values along a circle (DORIC) feature to remove spurious singular points after the initial detection using the Poincare index; an optimal combination of singular points is used to minimize the difference between the original orientation field and the model-based orientation field reconstructed from the singular points. Khalil10, et al. have developed an algorithm for singular point detection based on the fingerprint orientation field. A two-stage algorithm for core point detection in fingerprint images is presented by Joshi11, et al.; in this algorithm, the first stage determines the presence of a core point based on ridge component identification, followed by unwanted component elimination and core segment detection. A method to detect the exact (single) point from the approximate core and delta region using fuzzy reasoning is proposed by Kundu12, et al.

The scale invariant feature transform (SIFT) was originally developed for general-purpose object recognition. SIFT detects stable feature points in an image and performs matching based on a descriptor representing each feature point. The features are selected to be invariant to scale and rotation, and they provide robust matching across a substantial range of affine distortion, addition of noise, and partial change in illumination13. The steps in the generation of SIFT features are now discussed.

3.1 Detection of Scale-space Extrema

The scale space of an image is defined as a function, L(x,y,σ), which arises from the convolution of a variable-scale Gaussian, G(x,y,σ), with an input image, I(x,y):


L(x,y,σ) = G(x,y,σ)* I(x,y) (1)

where * is the convolution operation in x and y, and


G(x,y,σ) = (1/(2πσ²)) e^(-(x²+y²)/(2σ²))      (2)

To find stable keypoint locations, Lowe13 uses the scale-space extrema of the difference-of-Gaussian function convolved with the image, D(x,y,σ), which can be computed from the difference of two nearby scales separated by a constant multiplicative factor k, where L(x,y,σ) is the smoothed image. That is,



D(x,y,σ) = {G(x,y,kσ) - G(x,y,σ)} * I(x,y)      (3)

D(x,y,σ) = L(x,y,kσ) - L(x,y,σ)      (4)


Construction of D(x,y,σ) is shown in Fig. 1(a). The initial image is incrementally convolved with the Gaussian to produce images separated by a constant factor k in scale space, shown stacked in the left column. Adjacent image scales are subtracted to produce the difference-of-Gaussian images, as shown in Fig. 1(a). Once a complete octave has been processed, the Gaussian image that has twice the initial value of σ is resampled by taking every second pixel in each row and column.
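A minimal Python sketch of this octave construction follows, assuming SciPy is available and the input is a grayscale image stored as a float NumPy array; the starting scale sigma0, the number of scales per octave, and the function names are illustrative choices, not values fixed by Lowe13.

from scipy.ndimage import gaussian_filter

def difference_of_gaussian_octave(image, sigma0=1.6, num_scales=3):
    """Build one octave of Gaussian-blurred images and their differences."""
    k = 2.0 ** (1.0 / num_scales)              # constant multiplicative factor k
    sigmas = [sigma0 * k ** i for i in range(num_scales + 1)]
    blurred = [gaussian_filter(image, s) for s in sigmas]
    # Adjacent scales are subtracted to give the difference-of-Gaussian images.
    dogs = [blurred[i + 1] - blurred[i] for i in range(num_scales)]
    return blurred, dogs

def next_octave_input(blurred):
    """The most-blurred image has 2*sigma0; resample it by taking every second pixel."""
    return blurred[-1][::2, ::2]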


Figure 1. (a) Scale space construction for SIFT operation, (b) Maxima and minima of the difference-of-Gaussian images13.


3.2 Local Extrema Detection

To detect the local maxima and minima of D(x,y,σ), each sample point is compared to its eight neighbours in the current image and nine neighbours in each of the scales above and below, as shown in Fig. 1(b). It is selected only if it is larger than or smaller than all of the neighbours considered.
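A minimal sketch of this 26-neighbour test follows, assuming dogs is a list of same-sized difference-of-Gaussian images at adjacent scales (e.g. from the octave sketch above); the function name is illustrative.

import numpy as np

def is_local_extremum(dogs, s, y, x):
    """True if dogs[s][y, x] is larger or smaller than all 26 neighbours."""
    # Gather the 3x3x3 cube around the sample across scale and space.
    cube = np.stack([d[y - 1:y + 2, x - 1:x + 2] for d in dogs[s - 1:s + 2]])
    value = dogs[s][y, x]
    neighbours = np.delete(cube.ravel(), cube.size // 2)   # drop the centre sample
    return value > neighbours.max() or value < neighbours.min()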

3.3 Accurate Keypoint Localization

After a keypoint candidate has been found by comparing a pixel to its neighbours, the next step is to process the nearby data for location, scale, and ratio of principal curvatures. This information allows points having low contrast or poorly localized along an edge to be rejected.

3.4 Orientation Assignment

By assigning a consistent orientation to each keypoint based on local image properties, the keypoint descriptor can be represented relative to this orientation, thereby achieving invariance to image rotation.

3.5 Local Image Descriptor

An image location, scale, and orientation are assigned to each keypoint. These parameters define a repeatable local 2D coordinate system in which to describe the local image region.

4.1 Preprocessing

The fingerprint image can be electronically scanned at a range of resolutions; however, the generally accepted resolution is 500 dpi. The quality of the acquired images may vary from one location to another, and the clarity of the image itself may also vary. The uncertainty due to the first factor can be remedied by fixing the finger position while scanning the fingers. In the second case, the image quality is highly dependent on the finger condition. The enhancement process, therefore, tries to bring the image to a state where it can be processed with a high degree of success. Discontinuous ridges, abrupt ridge endings, and noise due to scars are corrected using adaptive interpolation and extrapolation21. Enhancement based on short-time Fourier analysis is performed as in Chikkerur & Govindaraju14.

4.2 SIFT Point Extraction

Scale space is constructed with three samples per scale13. The size of the Gaussian filter is taken as 3 for finding the difference-of-Gaussian images. The threshold on the standard deviation of the Gaussian is set between 0.003 and 0.005, depending upon the contrast of the images in the different databases. The absolute values of the two difference-of-Gaussian images are taken. To find the local maxima and minima, each sample point is compared with its eight neighbours in the current image and nine neighbours in each of the scales above and below among the Gaussian-blurred images. This step extracts the SIFT feature points shown in Fig. 2(b).
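The sketch below ties the two earlier helpers together, reading the 0.003-0.005 range quoted above as a contrast threshold applied to the absolute difference-of-Gaussian response; that reading, the default value, and the border handling are assumptions for illustration.

def extract_sift_points(image, contrast_thresh=0.004):
    """Collect (row, col) extrema of the DoG stack above the contrast threshold."""
    blurred, dogs = difference_of_gaussian_octave(image, num_scales=3)
    h, w = image.shape
    points = []
    for s in range(1, len(dogs) - 1):          # scales with a neighbour above and below
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if (abs(dogs[s][y, x]) > contrast_thresh
                        and is_local_extremum(dogs, s, y, x)):
                    points.append((y, x))
    return points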

4.3 Keypoint Localization

In the image shown in Fig. 2(a), some new ridges having the shape of a closed loop, similar to a core point, are formed. Based on the reliability and frequency of the ridges, a threshold is used to remove noisy and spurious keypoints. Keypoints with reliability greater than 0.4 and ridge frequency greater than 0.5 are selected.

In Fig. 2(d) it can be seen that the reliability in the upper region is low (blackish region), while at the centre it is quite high. Fig. 2(e) shows that the background frequency is zero; therefore the threshold for frequency is set to 0.5. After this step the number of keypoints is reduced, as shown in Fig. 2(f). It is clearly seen that the ridges extrapolated over a white background have a much lower reliability compared to the minute discontinuities that exist in the centre.
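A minimal sketch of this pruning step follows, assuming reliability and frequency are per-pixel maps produced during ridge enhancement (e.g. with the orientation and frequency estimation tools of ref. 21) and keypoints is the list of (row, col) SIFT locations; the thresholds are those quoted above.

def prune_keypoints(keypoints, reliability, frequency,
                    rel_thresh=0.4, freq_thresh=0.5):
    """Keep only keypoints lying in reliable, ridge-bearing regions (Fig. 2(f))."""
    return [(r, c) for (r, c) in keypoints
            if reliability[r, c] > rel_thresh and frequency[r, c] > freq_thresh]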


Figure 2. (a) Original image, (b) SIFT points extracted, (c) Ridge-enhanced image, (d) Reliability image, (e) Ridge frequency image, (f) Reduced SIFT points.


4.4 Orientation Assignment

The least squares estimation method is used here to compute the orientation image, as in Hong15, et al. The steps for calculating the orientation at pixel (i,j) are as follows:

1. Divide the input fingerprint image into non-overlapping blocks of size W x W.

2. For each pixel in the block, compute ∂x(i,j) and ∂y(i,j), which are the gradient magnitudes in the x and y directions, respectively.

3. Estimate the local orientation at pixel (i,j) using
Vx(i,j) = Σ(u=i-w/2 to i+w/2) Σ(v=j-w/2 to j+w/2) 2 ∂x(u,v) ∂y(u,v)      (5)

Vy(i,j) = Σ(u=i-w/2 to i+w/2) Σ(v=j-w/2 to j+w/2) {∂x²(u,v) - ∂y²(u,v)}      (6)

θ(i,j) = (1/2) tan⁻¹{Vy(i,j)/Vx(i,j)}      (7)

where θ(i,j) is the least squares estimate of the local orientation at the block centered at pixel (i,j).

4. Smooth the orientation field in a local neighbourhood using a Gaussian filter. The orientation image is first converted into a continuous vector field, defined as:


φx(i,j) = cos{2θ(i,j)}      (8)

φy(i,j) = sin{2θ(i,j)}      (9)

where φx and φy are the x and y components of the vector field, respectively.

5. Perform the Gaussian smoothing as follows:


φ′x(i,j) = Σ(u=-wφ/2 to wφ/2) Σ(v=-wφ/2 to wφ/2) G(u,v) φx(i-uw, j-vw)      (10)

φ′y(i,j) = Σ(u=-wφ/2 to wφ/2) Σ(v=-wφ/2 to wφ/2) G(u,v) φy(i-uw, j-vw)      (11)

where G is a Gaussian low-pass filter of size wφ x wφ.

6. The final smoothed orientation field O at pixel (i,j) is defined as:


O(i,j) = (1/2) tan⁻¹{φ′y(i,j)/φ′x(i,j)}      (12)
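A minimal sketch of Eqns (5)-(12) follows, assuming a float grayscale image; the block size w, the smoothing sigma, and the use of Sobel operators for the gradients are illustrative choices. The quadrant-aware arctan2 is used in place of tan⁻¹, following the equations exactly as written above.

import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def orientation_field(image, w=16, smooth_sigma=1.0):
    """Block-wise least squares ridge orientation with doubled-angle smoothing."""
    gx = sobel(image, axis=1)                      # gradient in x (∂x)
    gy = sobel(image, axis=0)                      # gradient in y (∂y)
    rows, cols = image.shape[0] // w, image.shape[1] // w
    vx = np.zeros((rows, cols))
    vy = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            bx = gx[i * w:(i + 1) * w, j * w:(j + 1) * w]
            by = gy[i * w:(i + 1) * w, j * w:(j + 1) * w]
            vx[i, j] = np.sum(2.0 * bx * by)       # Eqn (5)
            vy[i, j] = np.sum(bx ** 2 - by ** 2)   # Eqn (6)
    theta = 0.5 * np.arctan2(vy, vx)               # Eqn (7)
    # Eqns (8)-(11): smooth the doubled-angle vector field with a Gaussian.
    phi_x = gaussian_filter(np.cos(2.0 * theta), smooth_sigma)
    phi_y = gaussian_filter(np.sin(2.0 * theta), smooth_sigma)
    return 0.5 * np.arctan2(phi_y, phi_x)          # Eqn (12)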

4.5 Core Point Localization

The sine component of the orientation image O(i,j) is
ε(i,j) = sin{O(i,j)}      (13)

The sine component of the orientation field is multiplied by a semicircular mask consisting of segments RI and RII, as shown in Fig. 3(a), centred at every SIFT point calculated above, with (i,j) denoting the SIFT point. Region I consists of 1s and Region II consists of -1s; a matrix of the required dimension is built by setting the remaining points to zero. Empirically, a matrix of size 15 x 15 was found most suitable for locating the core point. All of the elements obtained after the filtering operation are summed, and the SIFT point yielding the maximum sum over all these operations is taken as the reference point. Fig. 3(b) shows the core point located in the fingerprint image.
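A minimal sketch of this search follows, assuming sine_field is the sine component ε of Eqn (13) at the same resolution as the candidate coordinates, and assuming the +1 region RI and -1 region RII are the two quadrants of the lower semicircle; that geometry is an illustrative reading of Fig. 3(a), not the paper's exact mask.

import numpy as np

def semicircular_mask(size=15):
    """Build the +1/-1 semicircular mask of Fig. 3(a); remaining entries are zero."""
    mask = np.zeros((size, size))
    c = size // 2
    for y in range(size):
        for x in range(size):
            if (y - c) ** 2 + (x - c) ** 2 <= c ** 2 and y >= c:
                mask[y, x] = 1.0 if x < c else -1.0    # RI / RII
    return mask

def locate_core(sine_field, candidates, size=15):
    """Score each SIFT candidate with the masked sum; return the best-scoring point."""
    mask, half = semicircular_mask(size), size // 2
    best, best_score = None, -np.inf
    for (r, c) in candidates:
        patch = sine_field[r - half:r + half + 1, c - half:c + half + 1]
        if patch.shape != mask.shape:       # candidate too close to the border
            continue
        score = np.sum(mask * patch)
        if score > best_score:
            best, best_score = (r, c), score
    return best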


Figure 3. (a) Semicircular filter and (b) Core point located in fingerprint image.


4.6 Cropping

After locating the core point, the region of interest (ROI) around it is cropped. If we use a fixed window size to extract the ROI, then fingerprint images having core points at the edges will include background region which does not carry any useful information, as shown in Fig. 4.

To overcome this problem, the window size for cropping is varied according to the extent of the background. This is done using the SIFT points extracted earlier. To set the boundary of the window, we find the maximum and minimum of the x and y coordinates of the SIFT points. Different conditions are set to deal with all possible cases, so that the core point is included in the region of interest while the background area is minimized. Cropped images are shown in Fig. 4(c) and Fig. 4(d).
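A minimal sketch of this adaptive cropping follows, assuming sift_points holds the retained (row, col) keypoints; the single margin parameter stands in for the paper's case-by-case boundary conditions and is an illustrative simplification.

def crop_roi(image, sift_points, margin=10):
    """Bound the ROI by the extent of the SIFT points, clipped to the image."""
    rows = [r for r, _ in sift_points]
    cols = [c for _, c in sift_points]
    top = max(min(rows) - margin, 0)
    bottom = min(max(rows) + margin, image.shape[0])
    left = max(min(cols) - margin, 0)
    right = min(max(cols) + margin, image.shape[1])
    return image[top:bottom, left:right]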


Figure 4. (a) and (b) original image; (c) and (d) cropped image.


The benchmark databases FVC2002 and FVC2004, in which the finger impressions are acquired using capacitive and optical sensors, represent the state-of-the-art benchmarks in fingerprint recognition. Both databases contain images of 100 different persons with 8 impressions per finger. The images in FVC2002 DB1A are captured with an optical sensor at 500 dpi, and each image is 388 x 374 pixels. The images in FVC2004 DB1A and DB2A, captured with optical sensors at 500 dpi, are 640 x 480 and 328 x 364 pixels, respectively. The images in FVC2004 DB3A are captured using a thermal sweeping sensor at 512 dpi and are of size 300 x 480 pixels. The images in FVC2004 DB4A, obtained with a synthetic generator, are 288 x 384 pixels at about 500 dpi.


The images of FVC2004 DB4A shown in Fig. 5(a) and Fig. 5(b) have their core points correctly located, and the failure cases are shown in Fig. 5(c) and Fig. 5(d). The total number of images tested is 800. Images with no core point or of poor quality are shown in Figs. 5(e) and 5(f). The core points detected on tented arch images are shown in Fig. 5(g), but plain arch type images have no core point, as observed by Maio1, et al., and shown in Fig. 5(h).


Figure 5. FVC2004 (a) and (b) correct corepoint detection; (c) and (d) failure cases; (e) and (f) missing core points; (g) and (h) tented arch and plain arch.


Typical images of FVC2002 in which core points are accurately detected are shown in Fig. 6(a) and Fig. 6(b), and the failure cases where the core points have not been detected are shown in Fig. 6(c) and Fig. 6(d). The algorithm is tested on 800 images from FVC2002 DB1A. Fig. 6(e) and Fig. 6(f) show images where the core points are missing, the quality of the fingerprints is poor, or the singularity is absent as in the case of the plain arch. Fig. 6(g) and Fig. 6(h) show the detected core points on tented arch images. The images in the database were processed manually to locate the core points, in order to check the correctness of core point detection by the proposed method.


Figure 6. FVC2002 (a) and (b) correct corepoint detection; (c) and (d) failure cases; (e) and (f) missing core points; (g) and (h) tented arch images.


The FVC2004 database is a very difficult benchmark with many intra-class variations accompanied by large distortion, which is a well-known problem in fingerprints, as discussed by Lumini & Nanni17. The results of the core point detection algorithm on the FVC2004 database are shown in Table 1. All the images in the database are considered for testing, though some of them are of very poor quality with core points missing. In the literature, poor quality images are often discarded while calculating the accuracy of core point detection, whereas here all the images are considered. The proposed method is tested on FVC2002 DB1A and compared with the Poincare index method, the extended relational graph method, and the singular candidate method8, as shown in Table 2. It has been observed that many failure cases are due to missing core points and poor quality images; if these cases are not considered, the accuracy of the proposed system increases.


Table 1. Core point detection accuracy on FVC2004




Table 2. Comparison of core point detection accuracy with other methods



An efficient algorithm has been developed to consistently locate the core point in fingerprints despite several challenges. The proposed method uses the SIFT points detected on the fingerprint image as the candidates for determining the core point. The SIFT-based method eliminates noisy and spurious points, thus minimizing the possibility of false core point detection. It has been observed that even in the extreme case of core points located at the edges of the fingerprint, the proposed method is able to detect the core point. Extracting different features from the cropped image around the core point and developing a fingerprint verification system will be considered in future work.

1.     Maio, D.; Maltoni, D.; Jain, A.K. & Prabhakar, S. Handbook of fingerprint recognition. Springer Verlag, 2003.

2.     Kawagoe, M. & Tojo, A. Fingerprint pattern classification. Pattern Recognition, 1984, 17(3), 295-303. [Full text via CrossRef]

3.     Bazen, A. M. & Gerez, S.H. Systematic methods for the computation of the directional fields and singular points of fingerprints. IEEE Trans. Pattern Anal. Machine Intelligence, 2002, 24(7), 905-919. [Full text via CrossRef]

4.     Karu, K. & Jain, A.K. Fingerprint classification. Pattern Recognition, 1996, 29(3), 389-404. [Full text via CrossRef]

5.     Srinivasan, V. & Murthy, N. Detection of singular points in fingerprint images. Pattern Recognition, 1992, 25(2), 139-153. [Full text via CrossRef]

6.     Koo, W.M. & Kot, A.C. Curvature-based singular points detection. In Proceedings of the 3rd International Conference on Audio- and Video-Based Biometric Person Authentication, London, UK,  Springer-Verlag, 2001, pp. 229-234. [Full text via CrossRef]

7.     Nilsson, K. & Bigun, J. Localization of corresponding points in fingerprints by complex filtering. Pattern Recognition Letters, 2003, 24 (13), 2135-2144. [Full text via CrossRef]

8.     Ohtsuka, T.; Watanabe, D.; Tomizawa, D.; Hasegawa, Y. & Aoki, H. Reliable detection of core and delta in fingerprints by using singular candidate method. In  IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshop, 2008, pp. 1-6. [Full text via CrossRef]

9.     Zhou, J.; Chen, F. & Gu, J. A novel algorithm for detecting singular points from fingerprint images. IEEE Trans. Pattern Anal. Machine Intelligence, 2009, 31(7), 1239-1249. [Full text via CrossRef]

10.   Khalil, S.M.; Muhammad, D.; Khan, K.K. & Alghathbar, K. Singular points detection using fingerprint orientation field reliability. Int. J. Physical Sci., 2010, 5(4), 352-357.

11.   Joshi, T.; Dey, S. & Samanta, D. Two-stage algorithm for core point detection in fingerprint images. In IEEE Region 10 Conference TENCON 2009, Singapore, 2009, pp. 1-6. [Full text via CrossRef]

12.   Kundu, M.K. & Maiti, A.K. Accurate localizations of reference points in a fingerprint image. In Fourth International Conference on Pattern Recognition and Machine Intelligence, LNCS 6744, Springer-Verlag, Berlin Heidelberg, Russia, 2011, pp. 293-298. [Full text via CrossRef]

13.   Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the International Conference on Computer Vision, 1999, pp. 1150-1157. [Full text via CrossRef]

14.   Chikkerur, S. & Govindaraju, V. Fingerprint image enhancement using STFT analysis. In International Workshop on Pattern Recognition for Crime Prevention, Security and Surveillance (ICAPR 05), 2005, pp. 20-29. [Full text via CrossRef]

15.   Hong, L.; Wan, Y. & Jain, A.K. Fingerprint image enhancement: Algorithm and performance evaluation. IEEE Trans. Pattern Anal. Machine Intelligence, 1998, 20(8), 777-789. [Full text via CrossRef]

16.   Julasayvake, A. & Choomchuay, S. An algorithm for fingerprint core point detection. In 9th International Symposium on Signal Processing and its Applications, 2007, pp. 1-4. [Full text via CrossRef]

17.   Lumini, A. & Nanni, L. When fingerprints are combined with iris - A case study: FVC2004 and CASIA. Int. J. Network Security, 2007, 4(1), 27-34.

18.   http://www.cubs.buffalo.edu (Accessed on 24 March 2012)

19.   Fingerprint verification competition. (Accessed on 3 February 2012)

20.   Fingerprint verification competition. (Accessed on 3 February 2012)

21.   Kovesi, P.D. Matlab functions for computer vision and image analysis. (Accessed on 24 March 2012)

Dr M. Hanmandlu received his MTech (Power Systems) from REC Warangal, Jawaharlal Nehru Technological University, in 1976, and PhD (Control Systems) from the Indian Institute of Technology (IIT) Delhi, in 1981. He is presently working as a Professor in the Department of Electrical Engineering, IIT Delhi. He has authored a book on computer graphics and published more than 220 papers in conferences and journals. He has guided 15 PhD and 100 MTech students. His current research interests include fuzzy modeling for dynamic systems and applications of fuzzy logic to image processing, document processing, medical imaging, multimodal biometrics, surveillance, and intelligent control.

Dr A.Q. Ansari received his BTech, MTech, and PhD from AMU Aligarh, IIT Delhi, and JMI New Delhi, respectively. He is a Professor and Head of the Department of Electrical Engineering, Jamia Millia Islamia, New Delhi. His research areas include computer networks, networks-on-chip, fuzzy logic, and image processing.

Ms Jaspreet Kour received her BTech and MTech from REC (NIT) Bhopal and UPTU Lucknow, respectively. She is currently pursuing her PhD in the Department of Electrical Engineering at Jamia Millia Islamia, New Delhi, and is serving as Faculty in the Department of Electronics and Instrumentation Engineering, GCET, Greater Noida, India. Her research areas include pattern recognition, image processing, and biometrics.

Mr Kunal Goyal is an undergraduate student pursuing Electrical Engineering at the Indian Institute of Technology Ropar, Punjab, India. His research areas include signals and systems, processor design, and biometrics.

Mr Rutvik Malekar is an undergraduate student pursuing Electrical Engineering at the Indian Institute of Technology Ropar, Punjab, India. His research areas include computer architecture, image processing, and biometrics.