Curvelet and Ridgelet-based Multimodal Biometric Recognition System using Weighted Similarity Approach

Biometric security artifacts for establishing the identity of a person with high confidence have evoked enormous interest in security and access control applications for the past few years. Biometric systems based solely on unimodal biometrics often suffer from problems such as noise, intra-class variations and spoof attacks. This paper presents a novel multimodal biometric recognition system that integrates three biometric traits, namely iris, fingerprint and face, using a weighted similarity approach. In this work, multi-resolution features are extracted independently from query images using the curvelet and ridgelet transforms, and are then compared to the enrolled templates stored in the database containing features of each biometric trait. The final decision is made by normalizing the feature vectors, assigning different weights to the modalities and fusing the computed scores using score combination techniques. The system is tested with public unimodal databases, namely CASIA-Iris-V3-Interval, FVC2004 and ORL, and with self-built multimodal databases. Experimental results show that the designed system achieves an excellent recognition rate of 98.75 per cent and 100 per cent for the public and self-built databases respectively, and provides much higher security than unimodal biometric systems.


Keywords:    Multimodal, multi-resolution, curvelet transform, ridgelet transform, score combination, weighted similarity


Biometrics literally means life measurement, but the term refers to automated methods of recognizing a person based on physiological or behavioural characteristics. Different modalities of an individual, such as fingerprint, iris, face, palmprint, gait, and voice, are used for personal identification. These modalities have definite advantages over non-biometric methods such as personal identification numbers (PINs) and identification (ID) cards. The applications most people associate with biometrics are surveillance systems, national security systems, border security and many more. Nowadays, due to the increase in transaction fraud and security breaches, there is a need for highly secure systems. Some of the problems faced by unimodal biometric authentication systems are enrolment problems due to non-universality, vulnerability to some level of spoofing, and insufficient accuracy during data acquisition.

To overcome these problems, a multimodal biometric system is preferred. Multimodal biometrics refers to the combination of two or more biometric modalities in a single identification system. A multimodal system can operate in serial, parallel or hierarchical mode. In the serial mode, the overall recognition time may be reduced due to early decision-making, as there is no need to acquire information from multiple traits simultaneously. In the parallel mode, information from multiple modalities has to be acquired simultaneously in order to perform recognition. The hierarchical mode is suitable when the number of classifiers is large; a tree-like structure is formed by combining individual classifiers in order to perform recognition. The universality problem is overcome by enabling a user who does not possess a particular biometric identifier to still enroll and authenticate using other biometric traits. The spoofing problem is reduced because of the presence of multiple pieces of evidence, making it difficult for an intruder to spoof multiple biometric traits of a legitimate user simultaneously. The most compelling reason to combine different modalities is to improve the accuracy of decision making and to reduce false alarms.


A large number of techniques exist for fusing the scores obtained from different biometric traits. Rukhin and Malioutov2 proposed a fusion technique based on a minimum distance method by aggregating rankings obtained from four face-recognition algorithms.

Nandakumar3, et al. proposed a generalized likelihood ratio-based fusion (GLRF) scheme by deriving the overall quality of the biometric samples. Yang and Ma4 fused fingerprint, hand-geometry and palm-print, where the biometric features were taken from the same image, and performed identification by combining feature fusion and match score fusion together. Besbes5, et al. proposed a multimodal biometric approach based on fingerprint and iris recognition, and the final decision was taken by using an AND operator.

A common theoretical framework for combining classifiers using the sum, median, max and min rules was analyzed by Alkoot and Kittler6 under the most restrictive assumptions, who observed that the sum rule outperforms the other classifier combination schemes. Ross and Jain7 presented experimental results combining three biometric modalities (face, fingerprint, and hand geometry) and stated that the sum rule performed better than the decision tree and linear discriminant classifiers.

Monwar and Gavrilova8 developed a multimodal biometric system using principal components analysis (PCA) and Fisher's linear discriminant analysis (FLDA) methods for individual matching. They consolidated the ranked outputs of three matchers (face, ear and signature) using the highest rank, Borda count and logistic regression methods, and compared their results against previous works with and without rank-level fusion.

Conti9, et al. conducted experiments on their multimodal biometric system (iris and fingerprint) by performing fusion at the template level, and compared the results against those obtained from the corresponding unimodal systems. They generated a homogeneous template from the extracted normalized ROIs through a frequency-based approach, and the Hamming distance measure was used to find the similarity degree for matching. Raghavendra10, et al. first estimated the statistics (such as mean and covariance) of the match score distribution using a Gaussian mixture model (GMM) and then sampled the match scores estimated by the GMM using the Monte Carlo method. Using statistical hypothesis testing on the sampled scores, they decided whether a user was genuine or an imposter. Al-Hijaili and AbdulAziz11 developed a multimodal biometric fusion system by normalizing the scores obtained from the iris and face modalities and then performing fusion at the matching score level using weighted scores.

Candes and Donoho12 and Candes13, et al. developed the curvelet transform, a pyramid of windowed ridgelets. The curvelet transform output is obtained by first filtering and then applying a windowed ridgelet transform to each bandpass image. In Fig. 1(a), a grid of squares of side 2^{-j} × 2^{-j} is shown, in which of the order of 2^{j} squares intersect the curve; at the jth level of the 2-D wavelet pyramid, each wavelet is localized near a corresponding square of side 2^{-j} × 2^{-j}. In Fig. 1(b), at each length scale, a multi-scale pyramid is formed with many directions and positions, and needle-shaped elements at fine scales. As curvelets have both variable length and width, they exhibit highly anisotropic behaviour. Two different digital implementations are proposed in the second generation curvelet transform: curvelets via wedge-wrapping and curvelets via unequally spaced fast Fourier transform (USFFT). Compared with the first generation curvelet transform, second generation discrete curvelet transforms are simpler, faster and less redundant, and are hence generally called fast discrete curvelet transforms (FDCT). Curvelets are considered superior to wavelets in the optimally sparse representation of (i) objects with edges and (ii) wave propagators. They are also superior for optimal image reconstruction in severely ill-posed problems.




Figure 1. Representation of curved singularities using (a) wavelets and (b) curvelets14.


In this work, the second implementation technique of the FDCT, i.e., curvelets via wedge-wrapping, is used, which is based on a series of translations and wrappings of specially selected Fourier samples. The properties of the curvelet transform, such as parabolic scaling, oscillatory behaviour, the tight frame property and vanishing moments, provide an optimally sparse representation with very high directional sensitivity.

A ridgelet is a function ρ_{a,b,θ}(x) = ψ((x cos θ + y sin θ − b)/a) / a^{3/2}, where ψ(t) is a wavelet function, a and b are the scaling and translation parameters respectively, and θ is the direction parameter15. The continuous ridgelet transform (Rf) of s ∈ L²(ℝ²) is defined as

Rf(a, b, θ) = ∫_{ℝ²} ψ_{a,b,θ}(x) s(x) dx        (1)

with x = (x₁, x₂) ∈ ℝ² and ψ_{a,b,θ}(x), the ridgelet function defined from a 1-D wavelet function ψ as


ψ_{a,b,θ}(x) = a^{−1/2} ψ((x₁ cos θ + x₂ sin θ − b)/a)        (2)

where a ∈ ℝ is the scaling parameter, b ∈ ℝ is the translation parameter and θ ∈ [0, 2π] is the direction parameter. This transform obeys a Parseval relation and an exact reconstruction formula. The ridgelet transform is obtained by applying a 1-D wavelet transform to the slices of the Radon transform; similarly, applying a 1-D Fourier transform to the slices of the Radon transform leads to the 2-D Fourier domain. A minimal sketch of this Radon-then-wavelet construction is given after the property list below.
Some of the properties of the digital ridgelet transform are:
(i) it is geometrically faithful and avoids wrap-around artifacts,
(ii) an iterative algorithm gives exact reconstruction from the ridgelet transform,
(iii) it executes in O(N log N) time on an n × n grid, where N = n² is the total number of data points, and
(iv) it takes an n × n array and expands it by a factor of 4 in creating the coefficient array.
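As a concrete illustration of the Radon-then-wavelet construction described above, the following sketch (in Python, using the scikit-image and PyWavelets libraries) applies a 1-D wavelet transform along each angle slice of the Radon transform. It is a minimal illustrative approximation, not the exact finite ridgelet transform16; the parameter values are assumptions.

import numpy as np
import pywt
from skimage.transform import radon

def ridgelet_sketch(img, angles=64, wavelet='db4', level=2):
    # Radon transform: each column of the sinogram is one angle slice
    theta = np.linspace(0.0, 180.0, angles, endpoint=False)
    sinogram = radon(img, theta=theta)
    coeffs = []
    for k in range(sinogram.shape[1]):
        # 1-D wavelet transform of each slice gives ridgelet-style coefficients
        coeffs.append(pywt.wavedec(sinogram[:, k], wavelet, level=level))
    return coeffs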

In this paper, three biometric traits, namely iris, face and fingerprint, are used. Iris, one of the most accurate and efficient biometrics, is chosen for its genetic independence and stability. Fingerprint is chosen second, as it holds a major share of the security market and remains more competitive than other traits. Finally, face is chosen as the third biometric trait, being the most natural, user-friendly and easy to acquire. Each biometric modality is processed separately with a different recognition system, and the generated match scores are normalized and fused together through various fusion strategies. Based on an adaptive threshold, the decision is taken whether the identity is genuine or imposter. In the iris recognition system, texture features derived from ridgelet transformed subbands are used, as proposed by Arivazhagan17, et al., while for fingerprint recognition, the algorithm reported in Mandal and Wu18 is used. A modified version of the algorithm proposed by Sekar19, et al., our earlier implementation, is used for face recognition.


The proposed method is described in the later sections and makes the following research contributions:
(a) Analyze the recognition rate of each unimodal biometric system by extracting multi-resolution features from the curvelet and ridgelet transformed outputs.
(b) Perform multimodal biometric recognition using four new score-level fusion strategies, namely weighted-min, weighted-max, weighted-median and weighted-exponential, and study their performances.
(c) Make the final decision, whether to accept or reject a user, by using the weighted similarity approach and an adaptive thresholding technique.

The block diagram of the proposed multimodal biometric recognition system is shown in Fig. 2. The data obtained from each modality is preprocessed, transformed using multi-resolution transforms such as curvelet and ridgelet, and then features are extracted. These extracted features are compared against the stored templates to generate match scores. Genuine acceptance rate (GAR) is defined as the ratio of the number of genuine test samples correctly matched by the system to the total number of test samples.
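As a simple illustration of this definition (the variable names are ours, not the paper's):

def genuine_acceptance_rate(n_correctly_matched, n_test_samples):
    # GAR as a percentage of test samples correctly matched by the system
    return 100.0 * n_correctly_matched / n_test_samples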

All three individual biometric traits are processed with separate recognition systems by transforming them into the multi-resolution domain and extracting statistical and co-occurrence multi-resolution features. The match scores produced by each system are then normalized and combined with various score-level fusion strategies. Finally, the decision is made by thresholding the fused score value so as to reduce false alarms.


Figure 2. Block diagram of proposed method.


3.1 Iris Recognition System

  

The approach described by Arivazhagan17, et al. is utilized here for iris recognition. The iris recognition system consists of four phases namely preprocessing, multi-resolution transform, feature extraction and matching. Preprocessing in iris recognition is generally needed in order to extract the iris region by detecting the outer boundaries of the pupil and iris in the photo of an eye.

Preprocessing steps include iris localization, eyelid localization and iris normalization. The iris image is first localized by finding the approximate pupil center and any three points on the circumference of the outer pupil boundary. This process helps to localize the iris region by obtaining the outer boundaries of both the pupil and the iris. If eyelashes and eyelids interfere within the iris boundary, they are tackled by using a horizontal 1-D rank filter and parabolic curve fitting. Preprocessing results for a sample iris image taken from the self-built multimodal iris database are shown in Fig. 3. The annular region lying between the pupil and iris boundaries is transformed to a rectangular image using the Daugman rubber sheet model, as shown in Fig. 3(e).
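A minimal sketch of this rubber-sheet remapping is given below, assuming circular pupil and iris boundaries with a common centre (cx, cy) and radii already localized; real implementations also handle non-concentric boundaries and occlusion masks, and all names here are illustrative.

import numpy as np
from scipy.ndimage import map_coordinates

def rubber_sheet(eye, cx, cy, r_pupil, r_iris, radial=64, angular=384):
    # sample along rays from the pupil boundary (r = 0) to the iris boundary (r = 1)
    r = np.linspace(0.0, 1.0, radial)
    th = np.linspace(0.0, 2.0 * np.pi, angular, endpoint=False)
    R, TH = np.meshgrid(r, th, indexing='ij')
    radius = r_pupil + R * (r_iris - r_pupil)
    ys = cy + radius * np.sin(TH)
    xs = cx + radius * np.cos(TH)
    # bilinear interpolation yields the rectangular (radial x angular) strip
    return map_coordinates(eye.astype(float), [ys, xs], order=1)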


Figure 3. Iris image preprocessing (a) Input iris image, (b) ROI, (c) Marked pupil boundary, (d) Marked iris boundary, (e) Daugman rubber sheet output, and (f) Ridgelet transformed output.



The obtained Daugman rubber sheet output is partitioned into six regions of size 64 x 64. Ridgelet transform is applied to all the six regions and the obtained output for a single region is shown in Fig. 3 (f). For each subband of the ridgelet transformed output, both statistical features such as mean and co-occurrence features such as contrast, correlation, dissimilarity and homogeneity are computed and stored as feature vectors. The algorithm defined below describes the various steps involved in the iris recognition system.


Algorithm irisRecogn( I )
// Input: I - Iris image
// Output: fvc, fvr - Feature vectors obtained from curvelet and ridgelet coefficients
begin
p ← Θ( I ) || markPupilBoundary( I ) // Θ - pupil detection operator
// Choose region of interest 'roi' containing iris boundary
roi ← detectIrisBoundary( I ) || pickROI( I )
// ϑ - eyelash removal operator; c2p - Cartesian-to-polar conversion operator
re ← ϑ( roi ) || c2p( roi )
r6 ← slice( re, 6 )
for each ri in r6
tci ← ri . ζ // ζ - curvelet transform operator
tri ← ri . Γ // Γ - ridgelet transform operator
end
// s, c - statistical and co-occurrence features; f - feature extractor function
fvc ← ∀i=1..2 f_si( tci ) ⊎ ∀i=1..4 f_ci( tci ) // ⊎ - feature fusing operator
fvr ← ∀i=1..2 f_si( tri ) ⊎ ∀i=1..4 f_ci( tri )
end

This algorithm involves steps such as detection of the pupil and iris boundaries, picking the region of interest for eyelash removal, and subsequently obtaining the features by computing statistical and co-occurrence values on the curvelet and ridgelet coefficients, following our earlier work by Arivazhagan17, et al.
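A minimal sketch of the per-subband feature computation is given below, using the grey-level co-occurrence matrix routines of scikit-image; the 8-bit quantization and the GLCM distance/angle settings are assumptions, as the paper does not specify them.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def subband_features(subband):
    # quantize real-valued transform coefficients to 8-bit grey levels
    q = np.uint8(255 * (subband - subband.min()) / (np.ptp(subband) + 1e-12))
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    feats = [subband.mean()]                          # statistical feature
    for prop in ('contrast', 'correlation', 'dissimilarity', 'homogeneity'):
        feats.append(graycoprops(glcm, prop)[0, 0])   # co-occurrence features
    return np.array(feats)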

Features thus generated are then matched using the Manhattan distance measure (dM). Using this recognition method, considering 1000 images of 100 subjects (10 samples per subject) with 600 training samples and 400 test samples, the GAR achieved for the CASIA-Iris-V3 Interval database20 and the self-built database is 94.25 per cent and 84 per cent respectively.
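The matching step can be sketched as below; the array layout (one enrolled template per row) is an assumption.

import numpy as np

def match_manhattan(query, templates):
    # Manhattan (L1) distance from the query features to every enrolled template
    d = np.abs(templates - query).sum(axis=1)
    return int(np.argmin(d)), float(d.min())   # best matching identity, match score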


3.2   Fingerprint Recognition System

The fingerprint recognition system is divided into four main parts, namely pre-processing, curvelet transform, feature extraction and matching. Pre-processing is first carried out to enhance the quality of the input fingerprint image. The steps involved in pre-processing are image normalization, orientation image estimation, frequency image estimation, Gabor filtering and binarization. Normalization is performed to remove the influence of sensor noise and gray-level deformation due to finger pressure differences, by means of a predefined constant mean and variance. To remove noise while preserving the structure of true ridges and valleys, a Gabor filter is applied as a bandpass filter on the fingerprint image. Binarization is then performed by adaptive thresholding based on the local intensity mean. Preprocessing results for a fingerprint image taken from the proposed database are shown in Fig. 4.
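The normalization step can be sketched as below, following the classical predefined mean/variance formulation that the description matches; the constants M0 and V0 are assumptions, as the paper does not state its values.

import numpy as np

def normalize_fingerprint(img, M0=100.0, V0=100.0):
    # shift and stretch grey levels so the image has mean M0 and variance V0
    img = img.astype(np.float64)
    m, v = img.mean(), img.var()
    dev = np.sqrt(V0 * (img - m) ** 2 / (v + 1e-12))
    return np.where(img > m, M0 + dev, M0 - dev)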


Figure 4. Fingerprint image preprocessing (a) input image, (b) normalized image, (c) enhanced image, and (d) binarized image.



Each block (w × w) of the binarized image (w = 64) is transformed using the curvelet transform with 3 scales and 16 orientations. The standard deviation is calculated from each subband, thus forming the feature vector, and a Euclidean distance (dE) classifier is used for recognition.
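A sketch of the per-block feature computation is shown below; the fdct2 argument stands in for a real fast discrete curvelet transform implementation (e.g., a CurveLab wrapper) configured for 3 scales and 16 orientations, and is an assumption.

import numpy as np

def block_curvelet_features(block, fdct2):
    # fdct2(block) is assumed to return the list of 2-D subband coefficient arrays
    subbands = fdct2(block)
    return np.array([sb.std() for sb in subbands])   # standard deviation per subband

def match_euclidean(query, templates):
    # Euclidean (dE) distance to every enrolled template; minimum is the match score
    d = np.sqrt(((templates - query) ** 2).sum(axis=1))
    return int(np.argmin(d)), float(d.min())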

Algorithm fingerprintRecogn( P )
// Input: P - Fingerprint image
// Output: fvc - Feature vector obtained from curvelet coefficients
begin
po ← computeOrient( P, µ( P ), σ²( P ) ) // orientation image estimation
pf ← computeFreq( po ) // frequency image estimation
pe ← removeNoise( pf ) // using Gabor filter
rj ← binarize( pe ) || slice( pe, j ), j = 1, …, 35 | 25
for each rji in rj
tci ← rji . ζ // ζ - curvelet transform operator
end
fvc ← ∀i=1..35|25 f_si( tci )
end
In the above algorithm, the various processes involved in the fingerprint recognition system are described. The number of 64 × 64 sub-blocks obtained is 35 for DB1_A and 25 for DB2_A. The minimum distance between the feature vectors of the test images and the training features is taken as the fingerprint match score. Training and testing are carried out for 100 subjects (8 samples per subject) with 400 training and 400 test samples. The recognition rates for the FVC200421 databases DB1_A and DB2_A, and for the self-built database, are 88.25 per cent, 74 per cent and 96.5 per cent, respectively.


3.3   Face Recognition System

The face recognition system is designed, as proposed by Geng and Zhou22, based on global and local discriminative features. In our proposed method, global features are extracted from the whole face image by keeping the low frequency components of the Gabor transformed output. For local feature extraction, local patches for the right eye, left eye, nose and mouth are chosen, and the curvelet transform is applied separately to each patch. Then only the low frequency components of the curvelet transformed output are kept as local features. The patches chosen for experimentation on an input face image of the proposed database are shown in Fig. 5(c).
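The global feature extraction can be sketched as below using the Gabor filtering routine of scikit-image; the filter-bank frequencies and orientations, and the down-sampled size used to retain only low-frequency content, are assumptions.

import numpy as np
from skimage.filters import gabor
from skimage.transform import resize

def global_gabor_features(face, freqs=(0.05, 0.1), n_orient=4, keep=(16, 16)):
    feats = []
    for f in freqs:
        for k in range(n_orient):
            real, _ = gabor(face, frequency=f, theta=k * np.pi / n_orient)
            # downsampling each response keeps only its low-frequency content
            feats.append(resize(real, keep, anti_aliasing=True).ravel())
    return np.concatenate(feats)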



Figure 5. (a) Input face image, (b) ROI, and (c) local patches chosen.



Global and local features are combined together to form the feature vectors. The algorithm for the proposed face recognition system stated below uses the method proposed in our earlier work by Sekar19, et al.
Algorithm faceRecogn( F )
// Input: F - Face image
// Output: fvc - Feature vector obtained from curvelet coefficients
begin
// Extract global features fvg from Gabor transformed output
fvg ← F . Ψ // Ψ - Gabor wavelet operator
// Detect pupil locations
( leftp, rightp ) ← Θ( F )
pj = ( leyep, reyep, nosep, mouthp ) ← extractPatches( F, leftp, rightp )
for each pji in pj
tci ← pji . ζ // ζ - curvelet transform operator
end
// Extract local features
fvl ← ∀i=1..4 f_si( tci ) // i - local patches
fvc ← ( fvg ⊎ fvl )
end
The Euclidean distance (dE) measure is used for classification. The minimum distance between the test image features and the training dataset is taken as the face match score. Training and testing are carried out for all subjects of the ORL23 database and for 40 randomly chosen subjects from our database (10 samples per subject), with 240 training and 160 test samples. The recognition rates achieved for the ORL database and the self-built database are 99.38 per cent and 100 per cent, respectively.


3.4   Score Normalization and Fusion Techniques

Score normalization as proposed by Latha and Thangaswamy24 and Gan25, et al. is done to ensure a meaningful combination of the iris, fingerprint and face match scores. Six score normalization techniques (min-max, max, median-MAD, tanh, double-sigmoid and z-score) and eight fusion techniques on the normalized scores (mean, min, max, sum, product, tanh, median and exponential) are tested here by assigning different weights. Let x be a raw matching score from the set X of all scores for that matcher, and η be the corresponding normalized score.

Min-Max (MM): It is the simplest normalization technique, which maps the numerical range of the scores to [0, 1]. The values min(X) and max(X) specify the end points of the score range:

η = (x − min(X)) / (max(X) − min(X))        (3)

Max (MX): The normalization is done by assigning min(X) equal to zero in eqn. 3 and is given by

η = x / max(X)        (4)

Median-MAD (MAD): The median and median absolute deviation (MAD) give a measure of the variability of a univariate sample of quantitative data; the normalized score is given by

η = (x − median(X)) / (const · median(|x − median(X)|))        (5)

Tanh (TH): Using the mean μ and standard deviation σ of the match scores, the normalized score is computed as

η = (1/2) [tanh(0.01 (x − μ)/σ) + 1]        (6)

Double-sigmoid (DS): It provides a linear and non-linear transformation of the scores in the overlapping and non-overlapping regions respectively. Here t is the reference operating point and r1 and r2 denote the left and right edges of the region in which the function is linear.



η = 1 / (1 + exp(−2 (x − t)/r₁)),  if x < t
η = 1 / (1 + exp(−2 (x − t)/r₂)),  if x ≥ t        (7)

Z-Score (ZS): It is calculated using the mean μ and standard deviation σ of the match scores, and is computed as

η = (x − μ) / σ        (8)
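Minimal sketches of these normalization rules are given below; X is the array of raw scores of one matcher, and the median-MAD constant and the double-sigmoid parameters (t, r1, r2) are matcher-dependent choices, not values taken from the paper.

import numpy as np

def min_max(x, X):                          # Eqn (3)
    return (x - X.min()) / (X.max() - X.min())

def max_norm(x, X):                         # Eqn (4)
    return x / X.max()

def median_mad(x, X, const=1.4826):         # Eqn (5); const is an assumption
    med = np.median(X)
    return (x - med) / (const * np.median(np.abs(X - med)))

def tanh_norm(x, X):                        # Eqn (6)
    return 0.5 * (np.tanh(0.01 * (x - X.mean()) / X.std()) + 1.0)

def double_sigmoid(x, t, r1, r2):           # Eqn (7)
    r = r1 if x < t else r2
    return 1.0 / (1.0 + np.exp(-2.0 * (x - t) / r))

def z_score(x, X):                          # Eqn (8)
    return (x - X.mean()) / X.std()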

Genuine users and imposters can be better separated in a multimodal biometric system by using score-level fusion during matching than by feature-level or rank-level fusion, and match scores are relatively easy to obtain. Consider η_iris, η_fp, and η_face as the normalized scores, with a, b and c as the weightage assigned to the iris, fingerprint and face modalities respectively.

In our implementation, four existing score-level fusion techniques (weighted-sum, weighted-mean, weighted-product, weighted-tanh) are used, and four new score-level fusion techniques (weighted-min, weighted-max, weighted-median, weighted-exponential) are proposed. Let m index the three modalities under consideration and w_m be the corresponding weights.
Weighted-Min: Here, the minimum of the different weighted unimodal scores is chosen, which is given by

η_final = min( w_m η_m )        (9)

Weighted-Max: Here, the maximum of the different weighted unimodal scores is considered and is computed as

η_final = max( w_m η_m )        (10)

Weighted-Median: Here, the median value of the different weighted unimodal scores is computed as

η_final = median( w_m η_m )        (11)

Weighted-Exponential: Here, the sum of the product of the weights with the exponential value of the unimodal scores is computed as

η_final = Σ_{m=1}^{3} ( w_m e^{η_m} )        (12)

A user is considered to be genuine if the match score value after fusion is less than the threshold, and an imposter otherwise. The threshold is chosen adaptively by collecting the minimum match score value among the wrongly classified subjects and applying it in the corresponding fusion technique.
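The four proposed fusion rules and the decision step can be sketched as follows; eta is the vector of normalized unimodal scores (η_iris, η_fp, η_face) and w the vector of corresponding weights (a, b, c). The fused score is treated as a distance, smaller being better, matching the decision rule above.

import numpy as np

def weighted_min(eta, w):                   # Eqn (9)
    return np.min(np.asarray(w) * np.asarray(eta))

def weighted_max(eta, w):                   # Eqn (10)
    return np.max(np.asarray(w) * np.asarray(eta))

def weighted_median(eta, w):                # Eqn (11)
    return np.median(np.asarray(w) * np.asarray(eta))

def weighted_exponential(eta, w):           # Eqn (12)
    return np.sum(np.asarray(w) * np.exp(np.asarray(eta)))

def decide(fused_score, threshold):
    # genuine if the fused (distance-based) score falls below the adaptive threshold
    return 'genuine' if fused_score < threshold else 'imposter'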

Experiments are conducted for the iris recognition system using the CASIA-Iris-V3-Interval public database. For the fingerprint recognition system, DB1_A and DB2_A of the FVC2004 public database are used. The face recognition system is evaluated using the ORL public database. Our self-built database currently consists of 125 subjects with 10 iris samples, 8 fingerprint samples and 31 face samples per subject. These three modalities are acquired from faculty, staff members and their family members, and students, of both genders.

These subjects belong to the age group of 10 years to 55 years. Iris images are acquired using a Nikon D3100 Single Central Lighting camera model (MEC-6-SCL) with chinrest model CRCS-TTB-AF for autofocus imaging. Fingerprint images are captured using a 1000 dpi Hamster IV FP scanner; images of the left and right thumbs are captured with 4 samples per finger. Face images are acquired using a Nikon D3000 camera. 31 images per subject are captured with varying postures, namely 1 sample under normal lighting, 10 samples under various illuminations, 10 samples based on rotations, 2 samples with left and right head tilting, 7 samples with different expressions and 1 sample wearing glasses.

To perform the recognition test on our self-built multimodal database, we have considered 1000 iris images, 800 fingerprint images and 3100 face images from 100 subjects of varying age groups (10 images covering both left and right iris; 8 fingerprints covering both left and right thumbs; and 31 face images per subject). For each subject, six iris and face images are used as training samples and four images as test samples. The same scheme is applied to the fingerprint database, with 400 training samples and 400 test samples. Sample images of our database are shown in Fig. 6.

The proposed multimodal biometric recognition system is tested with the public databases and our self-built database. During the implementation of the weighted fusion techniques, two different test cases are proposed, with the most weightage assigned to iris, less to fingerprint and the least to the face modality. The weights are chosen based on the possibility of spoofing attacks: since our system aims to reduce vulnerability to spoofing, and falsifying the iris trait is harder than falsifying a fingerprint, which in turn is harder than falsifying the face, the weights are ordered accordingly.





Figure 6. Self-built database image samples for (a) iris, (b) fingerprints, and (c) face.


Hence, the iris modality is assigned the highest weight, which leads to a better multimodal recognition rate. For test case 0, no weights are used; for test case 1, the weights assumed are 0.4, 0.3 and 0.3; and for test case 2, the weights are 0.5, 0.3 and 0.2 for the iris, fingerprint and face modalities respectively. Table 1 shows the multimodal biometric recognition rate using different normalization and fusion techniques for both the public and self-built databases.

A recognition rate of 98.75 per cent is achieved for product fusion, weighted-min, weighted-product and weighted-tanh fusion in the case of the public databases. For the proposed data set, a recognition rate of 100 per cent is achieved for many normalization and fusion techniques. The weightage of 0.5 for iris, 0.3 for fingerprint and 0.2 for face results in better overall performance for all fusion and normalization techniques, while the weighted-min fusion technique outperforms the other fusion techniques with a 100 per cent recognition rate for all normalization techniques, as evident from Table 1.

For comparison, Gan25, et al. proposed a multimodal biometric fusion of face and iris biometrics by using the two-dimensional discrete cosine transform (2DDCT) for feature compression and fusing the two features using kernel Fisher discriminant analysis (KFDA). They reported recognition rates of 96.67 per cent and 97.5 per cent at the feature level and score level respectively. Al-Hijaili and AbdulAziz11 performed the fusion of two features at the matching score level by assigning weights to the scores, and produced a recognition rate of 98.75 per cent with a genuine acceptance rate (GAR) of 98 per cent at the score level. Our proposed method provides a recognition rate of 97.5 per cent and 100 per cent at the feature level and score level respectively, with a genuine acceptance rate (GAR) of 98 per cent at the score level.


Table 1. Recognition rate using different normalization and fusion techniques for public and self-built (#) databases: (a) test case 0, (b) test case 1, and (c) test case 2



Liu26, et al. used three biometrics, namely iris, fingerprint and face, with 40 subjects and obtained a recognition rate of 95.6 per cent using score-level fusion by employing the Gabor wavelet and the posterior union decision-based neural network (PUDBNN), while the proposed work achieves a recognition rate of 99.38 per cent, as shown in Table 2. The GAR obtained for the self-built multimodal database is 99.5 per cent. The improvement in performance is due to the features extracted by deploying multi-resolution techniques such as the curvelet and ridgelet transforms. Also, score-level fusion with high weightage assigned to the iris modality leads to a better recognition rate.


Table 2. Comparison of iris, fingerprint and face recognition results against score (S) level fusion



In this paper, we propose a framework for a multimodal recognition system using the weighted similarity approach, extracting multi-resolution features from the curvelet and ridgelet transformed outputs. Unlike existing multimodal biometric approaches, we examine the performance of the multimodal biometric system with different score normalization techniques and fusion methods on public and self-built multimodal databases. The proposed weighted-min fusion technique performs best across all the normalization techniques. The results show that combining the iris, fingerprint and face modalities in the proposed multimodal biometric recognition system gives higher performance than each unimodal system separately.

The authors express their sincere thanks to the College Management, Principal and Head of the Department of Computer Science and Engineering of Mepco Schlenk Engineering College, Sivakasi, for providing all the facilities and support to carry out this research work. The authors would also like to thank the Chinese Academy of Sciences' Institute of Automation (CASIA) and AT&T Laboratories Cambridge for providing the iris and face database images respectively.

1.     Jain, A.K.; Ross, A. & Prabhakar, S. An introduction to biometric recognition. IEEE Trans. Circuits and Systems for Video Technology, 2004, 14(1), 4-20.[Full text via CrossRef]

2.     Rukhin, A.L. & Malioutov, I. Fusion of biometric algorithms in the recognition problem. Pattern Recognition Letters, 2005, 26, 679-84.[Full text via CrossRef]

3.     Nandakumar, K.; Chen, Y.; Jain, A.K. & Dass, S.C. Quality-based score level fusion in multibiometric systems. In the Proceedings of 18th International Conference on Pattern Recognition, 2006, 4, pp. 473-76.[Full text via CrossRef]

4.     Yang, F. & Ma, B. A new mixed-mode biometrics information fusion based-on fingerprint, hand-geometry and palm-print. In the Proceedings of 4th International Conference on Image and Graphics, 2007, pp. 689–93.[Full text via CrossRef]

5.   Besbes, F.; Trichili, H. & Solaiman, B. Multimodal biometric system based on fingerprint identification and iris recognition. In the Proceedings of 3rd International Conference on Information Communication Technologies, 2008, pp. 1-5.[Full text via CrossRef]

6.     Alkoot, F.M. & Kittler, J. Experimental evaluation of expert fusion strategies. Pattern Recog. Letter, 1999, 20(11), 1361-69.[Full text via CrossRef]

7.   Ross, A. & Jain, A. Information fusion in biometrics. Pattern Recog. Letter, 2003, 24, 2115-25.[Full text via CrossRef]

8.     Monwar, M.M. & Gavrilova, M.L. Multimodal biometric system using rank-level fusion approach. IEEE Trans. Syst. Man Cybernetics, 2009, 39(4), 867-78.[Full text via CrossRef]

9.     Conti, V.; Militello, C.; Sorbello, F. & Vitabile, S. A frequency-based approach for features fusion in fingerprint and iris multimodal biometric identification systems. IEEE Trans. Syst., Man, Cybernetics, 2010, 40(4), 384-95.[Full text via CrossRef]

10.   Raghavendra, R.; Ashok, R. & Kumar, G.H. Multimodal biometric score fusion using gaussian mixture model and Monte Carlo method. J. Comput. Sci. Technol., 2010, 25 (4), 771-82.[Full text via CrossRef]

11.   Al-Hijaili, S.J. & AbdulAziz, M. Biometrics in health care security system, iris-face fusion system. Int. J. Academic Res., 2011, 3(1), 11-19.

12.   Candes, E. & Donoho, D.L. Curvelets – a surprisingly effective nonadaptive representation for objects with edges. Saint-Malo, Vanderbilt University Press, 2000, 1–10.

13.   Candes, E.; Demanet, L.; Donoho, D. & Ying, L. Fast discrete curvelet transforms. SIAM Multiscale Modeling Simulation, 2006, 5(3), 861–99.[Full text via CrossRef]

14.   Starck, J.L. Image processing by the curvelet transform. DSM/DAPNIA/SEDI-SAP, France, Technical Report No. DAPNIA-02-138, Nov. 2002.

15.   Carre, P. & Andres, E. Discrete analytical ridgelet transform. Signal Processing, 2004,84 (11), 2165-73.[Full text via CrossRef]

16.   Do, M.N. & Vetterli, M. The finite ridgelet transform for image representation. IEEE Trans. Image Proc., 2003, 12(1), 16-28.[Full text via CrossRef]

17.   Arivazhagan, S.; Priyadharshini, S.S. & Sekar, J.R. Iris recognition using ridgelet transform. In the Proceedings of IEEE International Conference on Recent Advancements in Electrical, Electronics and Control Engineering, 2011, pp. 286-90.[Full text via CrossRef]

18.   Mandal, T. & Wu, Q.M.J. A small scale fingerprint matching scheme using digital curvelet transform. In the Proceedings of IEEE International Conference on Systems, Man and Cybernetics, 2008, pp. 1534-38.[Full text via CrossRef]

19.   Sekar, J.R.; Arivazhagan, S. & Ananthi, G. Extracting facial global and local features for recognition using multi-resolution transforms. In the Proceedings of International Conference on Intelligent Design and Analysis of Engineering Products, Systems and Computation (IDAPSC-10) 2010, pp. 65.

20.   Institute of Automation, Chinese Academy of Sciences, CASIA-IrisV3, (2008) www.cbsr.ia.ac.cn/IrisDatabase.htm. (Accessed on 10 Sept. 2009)

21.   Fingerprint Verification Competition, 2004,  bias.csr.unibo.it/fvc2004/download.asp. (Accessed on 10 Sept. 2009)

22.   Geng, X. & Zhou, Z.H. Image region selection and ensemble for face recognition. J. Comput. Sci. Technol., 2006, 21(1), 116-25.[Full text via CrossRef]

23.   Olivetti Research Laboratory (ORL) database of faces, 2002. www.cl.cam.ac.uk. (Accessed on 10 Sept. 2009)

24.   Latha, L. & Thangaswamy, S. Efficient approach to normalization of multimodal biometric scores. Int. J. Comp. Applications, 2011, 32(10), 57-64.

25.   Gan, J.Y.; Gao, J.H. & Liu, J.F. Research on face and iris feature recognition based on 2DDCT and kernel fisher discriminant analysis. In the Proceedings of IEEE International Conference on Wavelet Analysis and Pattern Recognition, 2008, pp. 401-5.[Full text via CrossRef]

26.   Liu, L.; Gu, X.F.; Li, J.P.; Lin, J.; Shi, J.X. & Huang, Y.Y. Research on data fusion of multiple biometric features.  In the Proceedings of IEEE International Conference  on Apperceiving Computing and Intelligence Analysis, 2009, pp. 112-15.[Full text via CrossRef]

Dr S. Arivazhagan obtained his PhD (Image Processing) from the Manonmaniam Sundaranar University, Tirunelveli in 2005. He is presently the Principal of Mepco Schlenk Engineering College, Sivakasi. He has published more than 150 research papers in refereed journals and conference proceedings in the areas of pattern recognition, image processing and computer vision. His current research interests are in the areas of biometrics, image and video understanding and computer communication. He is a Fellow of IETE and Life Member of ISTE.

Mr J. Raja Sekar obtained his ME (Computer Science and Engineering) from College of Engineering, Guindy, Anna University, Chennai in 2001. He is currently working as Assistant Professor in the Department of Computer Science and Engineering, Mepco Schlenk Engineering College, Sivakasi. He has published nearly 20 technical papers in International/National Journals/Conferences. His current research interests include biometrics, pattern recognition, and image processing. He is a Life Member in Indian Society for Technical Education (ISTE).

Ms S. Shobana Priyadharshini obtained her BE (Electronics and Communication Engineering) from Kamaraj College of Engineering and Technology, Virudhunagar in 2010 and ME (Communication Systems) from Mepco Schlenk Engineering College, Sivakasi in 2012. Her areas of interest include image processing and pattern recognition.