# Binary Biometrics: An Analytic Framework to Estimate the Performance Curves Under Gaussian Assumption

**ABSTRACT** In recent years, the protection of biometric data has gained increased interest from the scientific community. Methods such as the fuzzy commitment scheme, helper-data system, fuzzy extractors, fuzzy vault, and cancelable biometrics have been proposed for protecting biometric data. Most of these methods use cryptographic primitives or error-correcting codes (ECCs) and use a binary representation of the real-valued biometric data. Hence, the difference between two biometric samples is given by the Hamming distance (HD) or bit errors between the binary vectors obtained from the enrollment and verification phases, respectively. If the HD is smaller (larger) than the decision threshold, then the subject is accepted (rejected) as genuine. Because of the use of ECCs, this decision threshold is limited to the maximum error-correcting capacity of the code, consequently limiting the false rejection rate (FRR) and false acceptance rate tradeoff. A method to improve the FRR consists of using multiple biometric samples in either the enrollment or verification phase. The noise is suppressed, hence reducing the number of bit errors and decreasing the HD. In practice, the number of samples is empirically chosen without fully considering its fundamental impact. In this paper, we present a Gaussian analytical framework for estimating the performance of a binary biometric system given the number of samples being used in the enrollment and the verification phase. The error-detection tradeoff curve that combines the false acceptance and false rejection rates is estimated to assess the system performance. The analytic expressions are validated using the Face Recognition Grand Challenge v2 and Fingerprint Verification Competition 2000 biometric databases.



IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A: SYSTEMS AND HUMANS, VOL. 40, NO. 3, MAY 2010 555


Emile J. C. Kelkboom, Gary Garcia Molina, Jeroen Breebaart, Raymond N. J. Veldhuis, Tom A. M. Kevenaar, and Willem Jonker


Index Terms—Binary biometrics, binary template matching, performance estimation, template protection.

I. INTRODUCTION

WITH THE increased popularity of biometrics and its application in society, privacy concerns are being raised by privacy protection watchdogs. This has stimulated research into methods for protecting the biometric data in order to mitigate these privacy concerns. Numerous methods such as the fuzzy commitment scheme [1], helper-data system [2]–[4], fuzzy extractors [5], [6], fuzzy vault [7], [8], and cancelable biometrics [9] have been proposed for transforming the biometric data in such a way that the privacy is safeguarded. Several of these privacy or template-protection techniques use cryptographic primitives (e.g., hash functions) or error-correcting codes (ECCs). Therefore, they use a binary representation of the biometric data, referred to as the binary vector. The transition from a real-valued to a binary representation of the biometric allows the difference between two biometric samples to be quantified by the Hamming distance (HD), i.e., the number of differing bits (bit errors) between two binary vectors.

Manuscript received November 30, 2008. First published March 25, 2010; current version published April 14, 2010. This paper was recommended by Guest Editor K. W. Bowyer. E. J. C. Kelkboom, G. Garcia Molina, and J. Breebaart are with the Philips Research Laboratories, 5656 Eindhoven, The Netherlands (e-mail: Emile.Kelkboom@philips.com; Gary.Garcia@philips.com; Jeroen.Breebaart@philips.com). R. N. J. Veldhuis is with the University of Twente, 7500 AE Enschede, The Netherlands (e-mail: R.N.J.Veldhuis@utwente.nl). T. A. M. Kevenaar is with priv-ID, 5656 Eindhoven, The Netherlands (e-mail: Tom.Kevenaar@priv-id.com). W. Jonker is with the Philips Research Laboratories, 5656 Eindhoven, The Netherlands, and also with the University of Twente, 7500 AE Enschede, The Netherlands (e-mail: Willem.Jonker@philips.com; jonker@cs.utwente.nl). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TSMCA.2010.2041657

Eventually, the biometric system has to verify the claimed identity of a subject. If verified, this identity is considered genuine. The decision to either reject or accept the subject as genuine depends on whether the HD is larger than a predetermined decision threshold ($T$). In template-protection systems that use an ECC, $T$ is usually determined by its error-correcting capacity. Hence, the false rejection rate (FRR) depends on the number of genuine matches that produce an HD larger than the decision threshold.

Attackers may attempt to gain access by impersonating a genuine user. The associated comparisons are referred to as impostor comparisons and will be accepted if the HD is smaller than or equal to $T$, thus leading to a false accept. The success rate of impersonation attacks is quantified by the false acceptance rate (FAR).

Therefore, the performance of a biometric system can be expressed by its FAR and FRR, which depend on the genuine ($\phi_{ge}$) and impostor ($\phi_{im}$) HD probability mass functions (pmfs) and the decision threshold $T$. A graphical representation is shown in Fig. 1.

One of the problems with template-protection systems based on ECCs is that the FRR is lower bounded (LB) by the error-correcting capacity of the ECC. A large FRR makes the biometric system inconvenient because many genuine subjects will be wrongly rejected. In some practical cases [2], [3], high FRR values were obtained because it was impossible to further increase the decision boundary, since the ECC in use was unable to correct more bits. The method used there to improve the FRR consists in using multiple biometric samples in order to suppress the noise and thus reduce the number of bit errors, resulting in a smaller HD.

Fig. 1. FRR and FAR from the genuine and impostor HD pmfs, $\phi_{ge}$ and $\phi_{im}$, respectively.

The main objective of this paper is to analytically estimate, under the Gaussian assumption, the performance of a biometric system based on binary vectors under HD comparison and considering the use of multiple biometric samples. We present a framework for analytically estimating both the genuine and impostor HD pmfs from the analytically estimated bit-error probability presented in [10], under the assumption that both the within-class and between-class distributions of the real-valued features are Gaussian. First, due to the central-limit theorem, we can assume that the real-valued features will tend to approximate a Gaussian distribution when they result from a linear combination of many components, e.g., feature-extraction techniques based on principal component analysis (PCA) or linear discriminant analysis (LDA). PCA and LDA are often used to perform dimension reduction in order to prevent overfitting or to simplify the classifier [11], and in the field of template protection, PCA is also used to decorrelate the features in order to guarantee uniformly distributed keys extracted from the biometric sample [5]. Second, the Gaussian assumption makes it possible to obtain an analytical closed-form expression for the HD pmf.

This paper is organized as follows. In Section II, we present a general description of a biometric system with template protection and model each processing component. We present the Gaussian model assumption describing the probability density function (pdf) of the real-valued biometric features extracted from the biometric sample, the binarization method under consideration, and the interpretation of the template-protection block. Then, we present the analytic expressions for estimating the genuine and impostor HD pmfs and the FRR and FAR curves in Section III. In Section IV, we validate these analytic expressions with two different real biometric databases, namely, the Face Recognition Grand Challenge (FRGC) v2 3-D face images [12] and the Fingerprint Verification Competition (FVC) 2000 fingerprint images [13]. We further extend the framework in Sections V and VI in order to relax the assumptions made in Section II. Furthermore, some practical considerations are discussed in Section VII. Section VIII concludes this paper and outlines future work.

II. MODELING OF A BIOMETRIC SYSTEM WITH TEMPLATE PROTECTION

A general scheme of a biometric system with template protection based on helper data is shown in Fig. 2. In the enrollment phase, a biometric sample, for example, a 3-D shape image of the face of the subject, is obtained by the acquisition system and presented to the Feature-Extraction module. The biometric sample is preprocessed (enhancement, alignment, etc.), and a real-valued feature vector $f^e_R \in \mathbb{R}^{N_F}$ is extracted, where $N_F$ is the number of feature components or the dimension of the feature vector. In the Bit-Extraction module, a binary vector $f^e_B \in \{0,1\}^{N_B}$ is extracted from the real-valued feature vector, where $N_B$ is the number of bits and, in general, does not need to be equal to $N_F$. Quantization schemes range from simple, extracting a single bit out of each feature component [2], [3], to more complex, extracting multiple bits per feature component [14], [15]. Hereafter, the binary vector is protected within the Bit-Protection module. The Bit-Protection module safeguards the privacy of the users of the biometric system by enabling accurate comparisons without the need to store the original biometric data $f^e_R$ or $f^e_B$. We focus on the helper-data system that is based on ECCs and cryptographic primitives, for example, hash functions. A unique but renewable key is generated for each user and kept secret by using a hash function. Robustness to measurement noise and biometric variability is achieved by effectively using ECCs. The output is a pseudoidentity (PI), represented as a binary vector, accompanied by some auxiliary data, also known as helper data (AD) [16]. Finally, PI and AD have to be stored for use in the verification phase.

In the verification phase, another live biometric measurement is acquired, from which its real-valued feature vector $f^v_R$ is extracted, followed by the quantization process, which produces the binary vector $f^v_B$. In the Bit-Protection module, a candidate pseudoidentity PI* is created using AD and the binary vector $f^v_B$. There is an exact match between PI and PI* when the same AD is presented together with a biometric sample with similar characteristics as the one presented in the enrollment phase. In a classical biometric system, the comparator bases its decision on the similarity or distance between the feature vectors $f^e_R$ and $f^v_R$. For a binary biometric system, the decision is based on the difference between $f^e_B$ and $f^v_B$, which can be quantified using the HD. For a template-protection system, there is an acceptance only when PI and PI* are identical.

In summary, the biometric system incorporating template protection can be divided into three blocks: 1) the Acquisition and Feature-Extraction modules, where the input is the subject's biometrics and the output is a real-valued feature vector $f_R \in \mathbb{R}^{N_F}$; 2) the Bit-Extraction module, which extracts a binary vector $f_B$ out of $f_R$; and 3) the Bit-Protection and Bit-Matching modules, which protect the binary vector and perform the matching and decision making based on PI and PI*. To build an analytical framework, we have to model each block. In this section, we present a simple model for each block. However, the simple model of the Acquisition and Feature-Extraction block is built under strong assumptions, which will be relaxed later in this paper.


Fig. 2. General scheme of a biometric system with template protection based on helper data.

Fig. 3. PGC for both the enrollment and verification phases.

A. Acquisition and Feature-Extraction Block

The input of the Acquisition and Feature-Extraction block is a captured biometric sample of the subject, and the output is a real-valued feature vector $f_R = [f_R[1], f_R[2], \ldots, f_R[N_F]]^{\top}$ of dimension $N_F$, where $\top$ denotes the transpose operator. The feature vector $f_R$ is likely to be different between two measurements, even if they are acquired immediately after each other. Causes for this difference include sensor noise, environmental conditions (e.g., illumination), and biometric variabilities (e.g., pose or expression).

To model these variabilities, we consider parallel Gaussian channels (PGCs), as shown in Fig. 3. We assume an ideal Acquisition and Feature-Extraction module that always produces the same feature vector $\mu_i$ for subject $i$. Such an ideal module is thus robust against all the aforementioned variabilities. However, the variability of component $j$ is modeled as additive zero-mean Gaussian noise $w[j]$ with pdf $p_{w,i}[j] \sim \mathcal{N}(0, \sigma^2_{w,i}[j])$. Adding the noise $w[j]$ to the mean $\mu_i[j]$ results in the noisy feature component $f_R[j]$; in vector notation, $f_R = \mu_i + w$.

The observed variability within one subject is characterized by the variance of the within-class pdf and is referred to as the within-class variability. We assume that each subject has the same within-class variance, i.e., homogeneous within-class variance $\sigma^2_{w,i}[j] = \sigma^2_w[j]\ \forall i$. For each component, the within-class variance can be different, and we assume the noise to be independent.

On the other hand, each subject should have a unique mean in order to be distinguishable. Across the population, we assume $\mu_i[j]$ to be another Gaussian random variable with density $p_b[j] \sim \mathcal{N}(\mu_b[j], \sigma^2_b[j])$. The variability of $\mu_i[j]$ across the population is referred to as the between-class variability. Fig. 4 shows an example of the within-class and between-class pdfs for a specific component and a given subject. The total pdf describes the observed real-valued feature value $f_R[j]$ across the whole population and is also Gaussian, with $p_t[j] \sim \mathcal{N}(\mu_t[j], \sigma^2_t[j])$, where $\mu_t[j] = \mu_b[j]$ and $\sigma^2_t[j] = \sigma^2_w[j] + \sigma^2_b[j]$. For simplicity, but without loss of generality, we consider $\mu_t[j] = \mu_b[j] = 0$.

As shown in Fig. 3, in both the enrollment and verification phases, the PGC adds random noise $w^e$ and $w^v$ with the same probability density to $\mu_i$, resulting in $f^e_R$ and $f^v_R$, respectively. Thus, $\mu_i$ is sent twice over the same Gaussian channel.

B. Bit-Extraction Block

The function of the Bit-Extraction block is to extract a binary representation from the real-valued representation of the biometric sample. As the bit-extraction method, we use the thresholding version used in [2] and [3], where a single bit is extracted from each feature component. Hence, the obtained binary vector $f_B \in \{0,1\}^{N_F}$ has the same dimension as $f_R$. Furthermore, the binarization threshold for each component $\delta[j]$ is set equal to the mean of the between-class pdf $\mu_b[j]$; if the value of $f_R[j]$ is smaller than $\delta[j]$, then the bit is set to "0"; otherwise, it is set to "1" (see Fig. 4). More complex binarization schemes could be used [14], [15], but the simple binarization is used more frequently. Therefore, we only focus on the single-bit binarization method. Note that the binarization method is the same in both the enrollment and verification phases. In the case where multiple biometric samples are used in either the enrollment ($N_e$) or verification ($N_v$) phase, the average of all the corresponding $f_R$ is taken prior to the binarization process.

Fig. 4. Modeling of a single feature component of the real-valued biometric.

Fig. 5. Fuzzy commitment scheme.
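The averaging-and-thresholding behavior of this block can be sketched in a few lines. This is an illustrative sketch, not code from the paper; the feature values and the threshold $\delta[j] = 0$ are made-up examples.

```python
import numpy as np

def binarize(samples, delta):
    """Average N feature vectors, then threshold each component:
    bit j is 1 if the averaged value is >= delta[j], else 0."""
    f_mean = np.mean(samples, axis=0)      # average over the N samples
    return (f_mean >= delta).astype(int)   # single bit per component

# Toy example: N_e = 3 enrollment samples, N_F = 4 components,
# binarization threshold delta[j] = mu_b[j] = 0 for every component.
enrollment = np.array([[ 0.9, -0.4,  0.2, -1.1],
                       [ 1.1, -0.6, -0.1, -0.9],
                       [ 0.7, -0.5,  0.3, -1.3]])
f_B = binarize(enrollment, delta=np.zeros(4))
print(f_B)  # -> [1 0 1 0]
```

Averaging before binarization is what reduces the within-class noise; the binarization itself is unchanged whether one or several samples are used.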

C. Bit-Protection and Bit-Comparator Block

Many bit-protection or template-protection schemes are based on the capability of generating a robust binary vector or key out of different biometric measurements of the same subject. However, the binary input vector $f_B$ itself cannot be used as the key because it is most likely not exactly the same in both the enrollment and verification phases ($f^e_B \neq f^v_B$) due to measurement noise and biometric variability, which lead to bit errors. The number of bit errors is also referred to as the HD $d_H(f^e_B, f^v_B)$. Therefore, ECCs are used to deal with these bit errors. A possible way of integrating an ECC is shown in Fig. 5, which is also known as the fuzzy commitment scheme [1].

In the enrollment phase, a binary secret or message vector $s$ is randomly generated by the Random-Number-Generator module. The security level of the system is higher at larger secret lengths. A codeword $c$ of an ECC is obtained by encoding $s$ in the ECC-Encoder module. The codeword is XORed with $f^e_B$ in order to obtain the auxiliary data AD. Furthermore, the hash of $s$ is taken in order to obtain the pseudoidentity PI. For the sake of coherence, we use the terminology proposed in [16] and [17].

TABLE I. Some examples of the BCH code given by the codeword ($n_c$) and message ($k_c$) lengths, the corresponding number of correctable bits ($t_c$), and the bit-error rate $t_c/n_c$.

In the verification phase, the possibly corrupted codeword $c^*$ is created by XORing $f^v_B$ with AD. The candidate secret $s^*$ is obtained by decoding $c^*$ in the ECC-Decoder module. We compute the candidate pseudoidentity PI* by hashing $s^*$. The decision in the Bit-Comparator block is based on whether PI and PI* are bitwise identical.

In order to illustrate our framework with practical parameter values, we choose the linear block-type Bose–Ray-Chaudhuri–Hocquenghem (BCH) encoder/decoder as an example ECC. While more sophisticated ECCs could be used, the BCH code accommodates our framework due to its HD-classifier property. For example, if we were to consider the binary-symbol-based Reed–Solomon code, the number of bits it can correct depends on the error pattern. Hence, its probabilistic decoding behavior would also need to be modeled, which is out of the scope of the framework described in this paper. The ECC is specified by the codeword length ($n_c$), the message length ($k_c$), and the corresponding number of bits that can be corrected ($t_c$); in short, $[n_c, k_c, t_c]$. Because the BCH ECC can correct random bit errors, the Bit-Protection module yields equivalent PI and PI* when the number of bit errors between the binary vectors $f^e_B$ and $f^v_B$ is smaller than or equal to the error-correcting capability $t_c$. Thus, there is a match when the HD is smaller than or equal to $t_c$, i.e., $d_H(f^e_B, f^v_B) = \|f^e_B \oplus f^v_B\|_1 \leq t_c$, and the Bit-Protection module can be modeled as an HD classifier with threshold $t_c$. Some $[n_c, k_c, t_c]$ settings of the BCH code are given in Table I. Note that the maximum number of bits that can be corrected lies between 20% and 25% of the binary vector length.
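The fuzzy commitment construction described above can be sketched as follows. This is a toy illustration, not the authors' implementation: since a BCH codec is not available in the Python standard library, a 3× repetition code (which corrects $t_c = 1$ bit error per 3-bit block) stands in for the ECC, and all function names are illustrative.

```python
import hashlib
import secrets

# Toy fuzzy commitment: a 3x repetition code stands in for the BCH code.

def ecc_encode(s):                # repeat each secret bit three times
    return [b for b in s for _ in range(3)]

def ecc_decode(c):                # majority vote per 3-bit block
    return [int(sum(c[i:i + 3]) >= 2) for i in range(0, len(c), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def enroll(f_e):
    s = [secrets.randbelow(2) for _ in range(len(f_e) // 3)]  # random secret
    AD = xor(ecc_encode(s), f_e)                       # auxiliary/helper data
    PI = hashlib.sha256(bytes(s)).hexdigest()          # pseudoidentity
    return PI, AD

def verify(f_v, AD, PI):
    s_star = ecc_decode(xor(f_v, AD))                  # candidate secret
    return hashlib.sha256(bytes(s_star)).hexdigest() == PI

f_e = [1, 0, 1, 1, 0, 1, 0, 0, 1]      # enrollment binary vector
PI, AD = enroll(f_e)
f_v = f_e.copy(); f_v[4] ^= 1          # verification vector, one bit error
print(verify(f_v, AD, PI))             # -> True (error within t_c per block)
```

Only PI and AD are stored; the XOR structure means $f^v_B \oplus AD = c \oplus (f^e_B \oplus f^v_B)$, so the decoder sees the codeword corrupted by exactly the bit-error pattern between the two biometric vectors.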

D. Modeling Summary

The following is a summary of the modeling choices and assumptions that we have made.

- Acquisition and Feature-Extraction Block ($f_R$)
  - Modeled as a PGC, where each feature component is defined by:
    - Within-class pdf $\sim \mathcal{N}(0, \sigma^2_w[j])$
      - Describes the genuine biometric variability and measurement noise;
      - Homogeneous variance across subjects: $\sigma^2_{w,i}[j] = \sigma^2_w[j]\ \forall i$;
      - Noise is independent across channels, measurements, and subjects.
    - Between-class pdf $\sim \mathcal{N}(0, \sigma^2_b[j])$
      - Characterizes the $\mu_i[j]$ variability across the population;
      - Feature components are independent.
    - Total pdf $\sim \mathcal{N}(0, \sigma^2_t[j])$
      - Defines $f_R[j]$ across the population.
- Bit-Extraction Block ($f_B$)
  - Single-bit extraction method, with binarization threshold $\delta[j] = \mu_b[j]$.
- Bit-Protection and Bit-Comparator Block
  - HD classifier, with the ECC settings defining its decision boundary.

III. ANALYTICAL ESTIMATION OF BIT-ERROR PROBABILITIES, FRR, AND FAR

The goal of this paper is to analytically estimate the performance of the presented general template-protection system. In Section II, we have presented a comprehensive description of such a system, including the modeling approach and properties of each block, which forms the basis of our analytic framework. In the case of an HD classifier, the goal is to analytically estimate the expected genuine and impostor HD pmfs, $\phi_{ge}$ and $\phi_{im}$, respectively (see Fig. 1). With these pmfs, we can compute the FRR $\beta$ and the FAR $\alpha$, where $\beta$ is the probability that a genuine subject is incorrectly rejected and $\alpha$ is the probability that an impostor is incorrectly accepted by the biometric system.

The HD between two binary vectors is the number of bit errors between them. Knowing the bit-error probability $P_e[j]$ for each bit, the expected HD $\bar{d}_H$ between $f^e_B$ and $f^v_B$ is

$$\bar{d}_H(f^e_B, f^v_B) = \sum_{j=1}^{N_F} P_e[j]. \quad (1)$$

Further, we define the pmf of the number of bit errors of component $j$ as $P_j = [1 - P_e[j],\ P_e[j]]$, where $P_j(0)$ is the probability of no bit error ($d_H = 0$) and $P_j(1)$ is the probability of a single bit error ($d_H = 1$). Under the assumption that the bit-error probabilities are independent, the pmf of $d_H(f^e_B, f^v_B)$ is defined as

$$\phi(k) \overset{\text{def}}{=} P\left\{d_H(f^e_B, f^v_B) = k\right\} = (P_1 * P_2 * \cdots * P_{N_F})(k) \quad (2)$$

where the convolution is taken over the pmfs of the number of bit errors per component. A toy example is shown in Fig. 6. For the two extreme cases of (2), we have

$$\phi(0) = \prod_{j=1}^{N_F} P_j(0) = \prod_{j=1}^{N_F} \left(1 - P_e[j]\right) \quad (3)$$

$$\phi(N_F) = \prod_{j=1}^{N_F} P_j(1) = \prod_{j=1}^{N_F} P_e[j] \quad (4)$$

which are the probabilities of having zero or $N_F$ errors, respectively. The FRR corresponding to an HD threshold $T$, $\beta(T)$, is the probability that the HD for a genuine comparison is greater than $T$; therefore,

$$\beta(T) = P\left\{d_H\left(f^e_{B,i}, f^v_{B,i}\right) > T\right\} = \sum_{k=T+1}^{N_F} \phi_{ge}(k). \quad (5)$$

Furthermore, $\alpha(T)$ is the probability that the HD for an impostor comparison is smaller than or equal to the threshold $T$; hence, we have

$$\alpha(T) = P\left\{d_H\left(f^e_{B,i}, f^v_{B,j}\right) \leq T\ \forall i \neq j\right\} = \sum_{k=0}^{T} \phi_{im}(k). \quad (6)$$

Fig. 6. Toy example of the convolution method given by (2).
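Equations (2), (5), and (6) can be evaluated directly by convolving the per-component pmfs. A minimal sketch (the per-component bit-error probabilities are made-up values):

```python
def hd_pmf(p_errs):
    """phi(k): pmf of the Hamming distance, obtained by convolving the
    per-component bit-error pmfs P_j = [1 - Pe[j], Pe[j]], as in eq. (2)."""
    phi = [1.0]
    for pe in p_errs:
        nxt = [0.0] * (len(phi) + 1)
        for k, v in enumerate(phi):
            nxt[k]     += v * (1 - pe)   # no bit error in component j
            nxt[k + 1] += v * pe         # one bit error in component j
        phi = nxt
    return phi

def frr(phi_ge, T):   # beta(T): genuine HD above threshold, eq. (5)
    return sum(phi_ge[T + 1:])

def far(phi_im, T):   # alpha(T): impostor HD at or below threshold, eq. (6)
    return sum(phi_im[:T + 1])

# Hypothetical per-component genuine bit-error probabilities:
phi_ge = hd_pmf([0.05, 0.10, 0.20, 0.15])
phi_im = hd_pmf([0.5] * 4)          # impostor: Pe[j] = 1/2 for all j
print(round(frr(phi_ge, 1), 4))
print(round(far(phi_im, 1), 4))     # -> 0.3125 = 2^-4 * (C(4,0) + C(4,1))
```

With all $P_e[j] = 1/2$, the convolution collapses to the binomial pmf of (9), which the impostor case below exploits.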

In other words, if we want to estimate $\beta(T)$ and $\alpha(T)$ analytically, we have to obtain an analytic closed-form expression of the average bit-error probability $P_e[j]$ across the population for both the genuine and impostor cases, $P^{ge}_e[j]$ and $P^{im}_e[j]$, respectively. Because of the PGC modeling approach, $P^{ge}_e[j]$ will depend on the within-class and between-class variances $\sigma^2_w[j]$ and $\sigma^2_b[j]$, respectively. Furthermore, we also want to find the relationship between $P^{ge}_e[j]$ and the number of enrollment ($N_e$) and verification ($N_v$) samples. As mentioned in Section II-B, in the case of multiple samples, the average of the extracted $f_R$ of each sample is taken prior to the binarization process.

A. $P_e$ Estimation for the Impostor Case: $P^{im}_e$

For the impostor case, we consider the comparison between binary vectors of two different subjects, $d_H(f^e_{B,i}, f^v_{B,j})\ \forall i \neq j$. As mentioned in Section II-B, we focus on the binarization method based on thresholding with $\delta = \mu_b = \mu_t$ (see Fig. 4). Because the total pdf is assumed to be Gaussian with mean $\mu_t$, we have equiprobable bit values. This implies that the bit-error probability of randomly guessing a bit is $1/2$, i.e., $P^{im}_e[j] = 1/2\ \forall j$. Thus, under the assumption that the feature components are independent, impostor comparisons are similar to matching $f_B$ with a random binary vector. Since $P^{im}_e[j] = 1/2\ \forall j$, we can simplify $\phi_{im}(k)$ as the binomial pmf

$$\phi_{im}(k) = (P_1 * P_2 * \cdots * P_{N_F})(k) \quad (7)$$

$$= \binom{N_F}{k} \left(P^{im}_e[j]\right)^k \left(1 - P^{im}_e[j]\right)^{N_F - k} \quad (8)$$

$$= \binom{N_F}{k}\, 2^{-N_F}. \quad (9)$$

where the simplification step from (7) to (8) holds because $P^{im}_e[i] = P^{im}_e[j]\ \forall i \neq j$. Furthermore, $\alpha(T)$ turns into

$$\alpha(T) = \sum_{k=0}^{T} \phi_{im}(k) = 2^{-N_F} \sum_{k=0}^{T} \binom{N_F}{k} \quad (10)$$

which corresponds to what is used in [18].

Fig. 7. Measurement error $P_a$.
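Equation (10) is straightforward to evaluate; a small sketch (the $[63, 16, 11]$ BCH setting used here, with $t_c = 11$, is one standard configuration chosen for illustration):

```python
from math import comb

def far_binomial(N_F, T):
    """alpha(T) = 2^{-N_F} * sum_{k=0}^{T} C(N_F, k), as in eq. (10)."""
    return sum(comb(N_F, k) for k in range(T + 1)) / 2 ** N_F

# Example: with a 63-bit vector and an ECC correcting t_c = 11 bits,
# an impostor is falsely accepted when at most 11 of 63 random bits differ.
print(far_binomial(63, 11))
```

Because the FAR depends only on $N_F$ and $T = t_c$, the ECC setting directly fixes the achievable FAR in this model.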

B. $P_e$ Estimation for the Genuine Case: $P^{ge}_e$

We focus on estimating the bit-error probability for each component, $P^{ge}_e[j]$, and for convenience, we omit the component index $j$. Using the Gaussian model approach as defined in Section II and shown in Fig. 7, the expected bit-error probability $P^{ge}_e$ over the whole population is defined by

$$P^{ge}_e = E\left[P^{ge}_e(\mu)\right] = \int_{-\infty}^{\infty} p_b(\mu)\, P^{ge}_e(\mu)\, d\mu \quad (11)$$

where $P^{ge}_e(\mu)$ is the bit-error probability given $\mu$ and $p_b$ is the between-class pdf. With the binarization threshold $\delta = \mu_b = 0$, this problem becomes symmetric with respect to $\delta$. Consequently, (11) becomes

$$P^{ge}_e = 2\int_{-\infty}^{0} p_b(\mu)\, P^{ge}_e(\mu)\, d\mu = 2\int_{-\infty}^{0} \frac{1}{\sqrt{2\pi}\,\sigma_b}\, e^{-\left(\frac{\mu}{\sqrt{2}\,\sigma_b}\right)^2} P^{ge}_e(\mu)\, d\mu = \frac{2\lambda}{\sqrt{\pi}} \int_{-\infty}^{0} e^{-(\lambda\mu)^2}\, P^{ge}_e(\mu)\, d\mu \quad (12)$$

where $\lambda = 1/(\sqrt{2}\,\sigma_b)$.

We define the measurement or acquisition-error probability $P_a$, shown by the shaded area in Fig. 7, as the probability that the measured bit is different from the bit defined by the mean $\mu$ of the feature value. $P_a$ becomes smaller at either a larger distance between $\mu$ and the binarization threshold $\delta$ or a smaller within-class variance. Since multiple enrollment ($N_e$) and verification ($N_v$) samples are considered, $P_a$ also depends on the number of samples $N$, given as

$$P_a(\mu; N) = \int_{0}^{\infty} \frac{\sqrt{N}}{\sqrt{2\pi}\,\sigma_w}\, e^{-\left(\frac{\sqrt{N}(x-\mu)}{\sqrt{2}\,\sigma_w}\right)^2} dx \quad (13)$$

where we used the fact that, when averaging $N$ samples, the within-class variance decreases as

$$\sigma^2_{w,N} = \frac{\sigma^2_w}{N} \;\Rightarrow\; \sigma_{w,N} = \frac{\sigma_w}{\sqrt{N}}. \quad (14)$$

With the use of the error function

erf(z) =

2

√π

z

?

0

e−t2dt

(15)

and by defining η=(√N/√2σw), Pa(μ;N) can be rewritten as

?

0

1

√π

−ημ

1

√π

0

1

√π

22

=1

2[1 − erf(−ημ)]

where we used the well-known result?∞

ment error at either the enrollment or the verification phase. If

there is a measurement error in both phases, then the measured

bits still have the same bit value thus, no bit error. Hence, Pe(μ)

of (12) becomes

Pa(μ;N) =

η

√π

∞

e−(η(x−μ))2dx

=

∞

?

⎡

?√π

e−z2dz,

with z = η(x − μ)

⎤

?

=

⎣

∞

?

e−z2dz −

√π

−ημ

?

0

e−z2dz

⎦,

for μ ≤ 0

=

−

erf(−ημ)

(16)

0λe−(λμ)2dμ =√π/2.

There is a bit-error probability only when there is a measure-

Pge

e(μ;Ne,Nv) = (1 − Pa(μ;Ne))Pa(μ;Nv)

+ Pa(μ;Ne)(1 − Pa(μ;Nv))

=1

4[(1 + erf(−ηeμ))(1 − erf(−ηvμ))

+(1 − erf(−ηeμ))(1 + erf(−ηvμ))]

=1

2[1 − erf(−ηeμ)erf(−ηvμ)]

where ηe=√Ne/√2σwand ηv=√Nv/√2σw. By substitut-

ing (17) into (12), we obtain

(17)

Pge

e(Ne,Nv) =

λ

√π

0

?

∞

?

0

−∞

e−(λμ)2[1 − erf(−ηeμ)erf(−ηvμ)]dμ

=

λ

√π

e−(λμ)2[1 − erf(ηeμ)erf(ηvμ)] dμ

=1

2−

λ

√π

∞

?

0

e−λ2μ2erf(ηeμ)erf(ηvμ)dμ.

(18)
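The intermediate quantities above lend themselves to a quick numerical sanity check. The following sketch (our own illustration, not part of the paper) implements P_a of (16) and the per-feature bit-error probability of (17) with Python's standard-library erf; by the symmetry around \delta = 0, positive means are handled via |\mu|:

```python
import math

def p_a(mu, n, sigma_w=1.0):
    # Eq. (16): probability that the average of n samples flips the bit
    # defined by the mean mu (binarization threshold delta = 0).
    eta = math.sqrt(n) / (math.sqrt(2.0) * sigma_w)
    return 0.5 * (1.0 - math.erf(eta * abs(mu)))

def p_e_mu(mu, n_e, n_v, sigma_w=1.0):
    # Eq. (17): a bit error occurs when there is an acquisition error in
    # exactly one of the two phases.
    pe, pv = p_a(mu, n_e, sigma_w), p_a(mu, n_v, sigma_w)
    return (1 - pe) * pv + pe * (1 - pv)

pa1 = p_a(0.5, 1)   # single sample
pa4 = p_a(0.5, 4)   # averaging four samples suppresses the noise
```

As expected from (14), averaging more samples shrinks the effective within-class spread, so pa4 < pa1.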

Authorized licensed use limited to: UNIVERSITEIT TWENTE. Downloaded on April 20,2010 at 10:23:38 UTC from IEEE Xplore. Restrictions apply.


KELKBOOM et al.: BINARY BIOMETRICS

The integral of the erf function can be solved using the general solution of erf integrals [19] given as

\int_{0}^{\infty} e^{-\gamma x^2}\operatorname{erf}(ax)\operatorname{erf}(bx)\,\mathrm{d}x = \frac{1}{\sqrt{\gamma\pi}}\arctan\left(\frac{ab}{\sqrt{\gamma(a^2+b^2+\gamma)}}\right). \tag{19}

Thus, (18) can be solved by using (19) with \gamma = \lambda^2, a = \eta_e, and b = \eta_v as

P_e^{ge}(N_e,N_v,\sigma_w,\sigma_b) = \frac{1}{2} - \frac{\lambda}{\sqrt{\pi}}\cdot\frac{1}{\lambda\sqrt{\pi}}\arctan\left(\frac{\eta_e\eta_v}{\sqrt{\lambda^2(\eta_e^2+\eta_v^2+\lambda^2)}}\right)
= \frac{1}{2} - \frac{1}{\pi}\arctan\left(\frac{\frac{\eta}{\lambda}\sqrt{N_eN_v}}{\sqrt{N_e+N_v+\left(\frac{\lambda}{\eta}\right)^2}}\right)
= \frac{1}{2} - \frac{1}{\pi}\arctan\left(\frac{\frac{\sigma_b}{\sigma_w}\sqrt{N_eN_v}}{\sqrt{N_e+N_v+\left(\frac{\sigma_b}{\sigma_w}\right)^{-2}}}\right) \tag{20}

where \eta = 1/(\sqrt{2}\sigma_w) and where we also included \sigma_w and \sigma_b as arguments of the estimation function. As can be observed, P_e^{ge} is dependent on the \sigma_b/\sigma_w ratio, N_e, and N_v.
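As a numerical sketch (ours, not the paper's code), the closed form (20) can be cross-checked against the integral form (18) by simple midpoint quadrature:

```python
import math

def pe_closed_form(n_e, n_v, sigma_w, sigma_b):
    # Eq. (20): genuine bit-error probability under the Gaussian model.
    r = sigma_b / sigma_w
    return 0.5 - math.atan(r * math.sqrt(n_e * n_v)
                           / math.sqrt(n_e + n_v + r ** -2)) / math.pi

def pe_numeric(n_e, n_v, sigma_w, sigma_b, steps=100000, upper=60.0):
    # Eq. (18): 1/2 - (lambda/sqrt(pi)) * int_0^inf e^{-lambda^2 mu^2}
    # * erf(eta_e mu) * erf(eta_v mu) dmu, by the midpoint rule.
    lam = 1.0 / (math.sqrt(2.0) * sigma_b)
    eta_e = math.sqrt(n_e) / (math.sqrt(2.0) * sigma_w)
    eta_v = math.sqrt(n_v) / (math.sqrt(2.0) * sigma_w)
    h = upper / steps
    acc = sum(math.exp(-(lam * (i + 0.5) * h) ** 2)
              * math.erf(eta_e * (i + 0.5) * h)
              * math.erf(eta_v * (i + 0.5) * h) for i in range(steps))
    return 0.5 - lam / math.sqrt(math.pi) * acc * h

pe11 = pe_closed_form(1, 1, 1.0, 2.0)   # N_e = N_v = 1
pe44 = pe_closed_form(4, 4, 1.0, 2.0)   # N_e = N_v = 4: fewer bit errors
```

The check also illustrates the qualitative behavior stated above: increasing N_e and N_v, or the \sigma_b/\sigma_w ratio, lowers P_e^{ge}.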

C. Summary

We have presented the analytic expressions of the genuine (\varphi_{ge}) and impostor (\varphi_{im}) HD pmfs and the corresponding FRR (\beta(T)) and FAR (\alpha(T)) curves. Because of the choice of the binarization scheme, the impostor bit-error probability P_e^{im}[j] does not need to be estimated and can be assumed to be equal to 1/2 for each feature component. However, the genuine bit-error probability P_e^{ge}[j] has to be estimated using the analytic expression in (20). Therefore, in the remainder of this paper, we only need to estimate P_e^{ge}[j], and for convenience reasons, we frequently omit the ge superscript.

IV. EXPERIMENTAL EVALUATION WITH BIOMETRIC DATABASES

In this section, the analytic expressions and the effect of the Gaussian assumption are validated using two real biometric databases, which are discussed in Section IV-A. To estimate P_e[j] using (20), we need to estimate the within- and between-class variances \sigma_w^2[j] and \sigma_b^2[j], respectively. In Section IV-B, we show that the within-class variance influences the between-class variance estimation, and we present a corrected estimator. Due to the limited size of the databases, estimation errors do occur when estimating P_e[j], even in the case when the underlying model is correct. We account for these errors by estimating the 95 percentile boundaries in Section IV-C. We then present the results of estimating P_e[j] in Section IV-D and the effect of using PCA as a means to generate uncorrelated features in Section IV-E. We conclude by portraying the experimental \varphi_{ge}(k), \varphi_{im}(k), \beta(T), \alpha(T), and detection-error tradeoff (DET) curves in Section IV-F.

TABLE II
OVERVIEW OF THE BIOMETRIC DATABASES

Fig. 8. EER of the training set after applying PCA for different reduced number of features N_F.

A. Biometric Databases and Feature Extraction

The first database (db1) consists of 3-D face images from the FRGC v2 data set [12], where we used the shape-based 3-D face recognizer of [20] to extract feature vectors of dimension N_{orig} = 696. Subjects with at least eight samples were selected, resulting in N_s = 230 subjects with a total of N_t = 3147 samples. The number of samples per subject varies between 8 and 22, with an approximate average of \bar{N}_i = 14 samples per subject. The second database (db2) consists of fingerprint images from database 2 of FVC2000 [13] and uses a feature-extraction algorithm based on Gabor filters and directional fields [21], resulting in 1536 features (N_{orig} = 1536). There are N_s = 110 subjects with N_i = 8 samples each. An overview is given in Table II.

The components of the original feature vectors are dependent. Therefore, we applied the PCA technique to decorrelate the features and reduce the dimension of the feature space if necessary. Furthermore, we partitioned both databases into a training and a testing set containing 25% and 75% of the number of subjects, respectively. The size of the test set is a very important factor in this analytic framework; thus, we traded off the size of the training set and limited it to 25% of the number of subjects. We applied PCA on the training set, reduced the dimensionality (N_F) of the feature vectors to the codeword lengths presented in Table I, and computed the equal error rate (EER) (see Fig. 8), which is defined as the point where FAR equals FRR. The optimal performance is computed using the bit-extraction method in Section II-B and an HD classifier. The optimal numbers of features for both db1 and db2 are in the range of 15, 31, and 63. Note that the best EER of 12.7% for db1 and 15.2% for db2 is higher than the reported performance of template-protection systems based on these databases in the literature (≈8% for db1 in [2]



IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A: SYSTEMS AND HUMANS, VOL. 40, NO. 3, MAY 2010

TABLE III

VARIANCE ESTIMATION TABLE AS DEFINED IN [23]

and ≈5% for db2 in [22]).¹ However, our proposed analytic framework is not focused on optimizing the performance but on analytically estimating the performance. The effect of the PCA transformation on the feature value distribution and the error-probability estimation is discussed in Section IV-E. Unless stated otherwise, the remainder of this analysis is based on the PCA-transformed test set using the PCA matrix obtained from the training set. For convenience, the remainder of this work is mainly focused on the optimal setting of N_F = 31.

B. Variance Estimation of \sigma_w^2 and \sigma_b^2

The analytic expression P_e^{ge}(N_e,N_v,\sigma_w,\sigma_b) in (20) requires the standard deviations \sigma_w and \sigma_b. The estimated values \hat{\sigma}_w and \hat{\sigma}_b are obtained from the test set of the database under consideration. The variances \hat{\sigma}_w^2 and \hat{\sigma}_b^2 are estimated according to the variance estimation table given in Table III from [23], where f_{i,j} is the jth real-valued feature vector of subject i, N_s is the number of subjects, N_i is the number of samples or feature vectors of subject i, and N_t is the total number of samples, N_t = \sum_{i=1}^{N_s} N_i. This table is also used in analysis-of-variance models and describes the method for computing the sum of squares of the source of the within-class (SSW), between-class (SSB), and the total (SST) variation. Two important facts derived from this table are that: 1) the total sum of squares is equal to the sum of the within-class and between-class sums of squares, SST = SSW + SSB, and 2) the total number of degrees of freedom (d.f.) is equal to the sum of the between-class and the within-class d.f. The details are in [23]. With the use of the table, the variance estimation is given as the sum of squares divided by the d.f., thus

\hat{\sigma}_w^2 = \frac{1}{N_t - N_s}\sum_{i=1}^{N_s}\sum_{j=1}^{N_i}\left(f_{i,j} - \hat{\mu}_i\right)^2 \tag{21}

\hat{\sigma}_b^2 = \frac{1}{\bar{N}_i(N_s - 1)}\sum_{i=1}^{N_s} N_i\left(\hat{\mu}_i - \hat{\mu}\right)^2, \quad \text{with } \bar{N}_i = \frac{N_t}{N_s} \tag{22}

\hat{\sigma}_t^2 = \frac{1}{N_t - 1}\sum_{i=1}^{N_s}\sum_{j=1}^{N_i}\left(f_{i,j} - \hat{\mu}\right)^2 \tag{23}

with the exception of \hat{\sigma}_b^2, which is also divided by the average number of samples per subject \bar{N}_i. Notice that \hat{\sigma}_w^2 is calculated as the variance of the aggregated zero-mean samples of the subjects, while taking into account that N_s d.f. are lost because of the need to estimate the mean of each subject \hat{\mu}_i. Furthermore, \hat{\sigma}_w^2 is also equal to the weighted average of the variance of each subject because (21) can also be written as

\hat{\sigma}_w^2 = \frac{1}{N_t - N_s}\sum_{i=1}^{N_s}(N_i - 1)\,\hat{\sigma}_{w,i}^2 = \frac{1}{N_s\left(\bar{N}_i - 1\right)}\sum_{i=1}^{N_s}(N_i - 1)\,\hat{\sigma}_{w,i}^2 = \frac{1}{\sum_{i=1}^{N_s}(N_i - 1)}\sum_{i=1}^{N_s}(N_i - 1)\,\hat{\sigma}_{w,i}^2, \quad \text{with } \hat{\sigma}_{w,i}^2 = \frac{1}{N_i - 1}\sum_{j=1}^{N_i}\left(f_{i,j} - \hat{\mu}_i\right)^2 \tag{24}

which turns into \hat{\sigma}_w^2 = (1/N_s)\sum_{i=1}^{N_s}\hat{\sigma}_{w,i}^2 when N_i is equal for each subject.

¹In [2], the most reliable feature components were selected, and in [22], six enrollment samples were used.

Fig. 9. Within-class, between-class, and total variance estimation for different settings of \{\sigma_w^2, \sigma_b^2\}.
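A minimal sketch of the estimators (21)–(23) on a small nested data set (our own illustration; the data values are arbitrary). It also verifies the sum-of-squares identity SST = SSW + SSB stated above:

```python
def anova_variances(samples):
    # samples: list of per-subject lists of feature values f_{i,j}.
    ns = len(samples)                          # number of subjects N_s
    nt = sum(len(s) for s in samples)          # total number of samples N_t
    mu = sum(sum(s) for s in samples) / nt     # grand mean
    mu_i = [sum(s) / len(s) for s in samples]  # subject means mu_i
    ssw = sum((f - m) ** 2 for s, m in zip(samples, mu_i) for f in s)
    ssb = sum(len(s) * (m - mu) ** 2 for s, m in zip(samples, mu_i))
    sst = sum((f - mu) ** 2 for s in samples for f in s)
    n_bar = nt / ns                            # average samples per subject
    var_w = ssw / (nt - ns)                    # Eq. (21)
    var_b = ssb / (n_bar * (ns - 1))           # Eq. (22)
    var_t = sst / (nt - 1)                     # Eq. (23)
    return var_w, var_b, var_t, (ssw, ssb, sst)

data = [[0.9, 1.1, 1.0], [2.1, 1.9, 2.0], [3.0, 3.2, 2.8]]
vw, vb, vt, (ssw, ssb, sst) = anova_variances(data)
```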

The variance estimators are validated using a synthetically generated database of N_s = 1000 subjects with N_i = 4 samples each. The parameters \{\sigma_w^2, \sigma_b^2\} are used during the synthesis, and we estimated \{\hat{\sigma}_w^2, \hat{\sigma}_b^2, \hat{\sigma}_t^2\} using (21), (22), and (23), respectively. The synthesis and estimation processes are performed ten times (tenfold), and the average of the result is taken. Fig. 9 shows the estimation results of \hat{\sigma}_w^2 for different values of \sigma_w^2 with \sigma_b^2 = 2, and both \hat{\sigma}_b^2 and \hat{\sigma}_t^2 for different values of \sigma_b^2 with \sigma_w^2 = 2. We can conclude that the \hat{\sigma}_w^2 and \hat{\sigma}_t^2 estimators give values that closely resemble the underlying model parameters \sigma_w^2 and \sigma_t^2, but we observe a constant estimation error for the \hat{\sigma}_b^2 estimator. This estimation error is examined for different values of \sigma_w^2 and N_i, as shown in Fig. 10(a) and (b), respectively. The figures show that the estimation error increases when \sigma_w increases or when N_i decreases.

The constant estimation error of \hat{\sigma}_b^2 is caused by the estimation error of the sample mean of each subject \hat{\mu}_i. From [23], we know that the variance of the sampling distribution of the sample mean \hat{\mu}_i is given by

\sigma_{\hat{\mu}_i}^2 = \frac{\sigma_{w,i}^2}{N_i}. \tag{25}

If more samples are taken to estimate the sample mean, the estimation variance decreases. This implies that the estimation \hat{\sigma}_b^2 of (22) is in fact

\hat{\sigma}_b^2 = \mathrm{EST}\left\{\sigma_b^2 + \sigma_{\hat{\mu}}^2\right\} = \mathrm{EST}\left\{\sigma_b^2 + \frac{\sigma_w^2}{\bar{N}_i}\right\} \tag{26}

where \mathrm{EST}(\tau) = \hat{\tau} is the estimation of parameter \tau. The corrected version of the between-class estimation \check{\sigma}_b^2 thus becomes

\check{\sigma}_b^2 = \hat{\sigma}_b^2 - \frac{\hat{\sigma}_w^2}{\bar{N}_i}. \tag{27}

Fig. 10. Between-class estimation of (22) at (a) different values of \sigma_w^2 with N_i = 2 and (b) different values of N_i with \sigma_w^2 = 2, with its corrected version (27) in (c) and (d), respectively.

Fig. 10(c) and (d) shows the results of applying this correction on the results of Fig. 10(a) and (b), and the estimation has clearly improved.
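The bias in (26) and the correction (27) can be reproduced on synthetic Gaussian data; the following sketch uses our own illustrative parameters, not the paper's experimental settings:

```python
import random

random.seed(7)
NS, NI = 4000, 4
SIG_W, SIG_B = 2.0, 1.5   # true within/between standard deviations

# Synthesize one feature component for NS subjects with NI samples each.
data = []
for _ in range(NS):
    mu_i = random.gauss(0.0, SIG_B)
    data.append([random.gauss(mu_i, SIG_W) for _ in range(NI)])

nt = NS * NI
mu = sum(sum(s) for s in data) / nt
mu_i = [sum(s) / NI for s in data]
var_w = sum((f - m) ** 2 for s, m in zip(data, mu_i) for f in s) / (nt - NS)
var_b = sum(NI * (m - mu) ** 2 for m in mu_i) / (NI * (NS - 1))   # Eq. (22)
var_b_cor = var_b - var_w / NI                                    # Eq. (27)
```

The uncorrected estimate concentrates around \sigma_b^2 + \sigma_w^2/N_i (here 2.25 + 1.0), while the corrected one concentrates around \sigma_b^2, matching the bias analysis of (26).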

C. Boundaries of Tolerated Estimation Errors

When estimating P_e[j] of a given biometric database, there are always estimation errors because of its random nature. Even if we randomly generate a synthetic database that fully complies with the Gaussian modeling assumption, there are still estimation errors. These estimation errors are caused by the random nature of the problem and should be tolerated. Hence, we compute the upper-bound (UB) and lower-bound (LB) tolerance boundaries for the estimation errors. Such an example is shown in Fig. 11 for a synthetic data set of similar size as db2 (N_s = 110 and N_i = 8) but with N_F = 500 and \sigma_w^2[j] = 1, with \sigma_b^2[j] randomly drawn from the uniform distribution U(0,16) with minimum and maximum values of 0 and 16, respectively. Fig. 11 compares the estimated bit-error probability of the synthetic data set \hat{P}_e^{sy}[j] with the corresponding analytically obtained P_e^{ge}(N_e, N_v, \hat{\sigma}_w[j], \check{\sigma}_b[j]) of (20), where \hat{\sigma}_w[j] and \check{\sigma}_b[j] are estimated using (21) and (27), respectively. \hat{P}_e^{sy}[j] is reported by a circle ("o") at its estimated \hat{\sigma}_b[j]/\hat{\sigma}_w[j] ratio, and its analytic estimation is the value of the solid line at the same \hat{\sigma}_b[j]/\hat{\sigma}_w[j] ratio. A greater vertical distance implies a greater analytical estimation error.

Fig. 11. Random estimation errors due to the random nature and the UB and LB boundaries.

The test protocol for calculating \hat{P}_e^{sy}[j] is as follows: For each feature component, \hat{P}_e^{sy}[j] is calculated as the average across the bit-error probability of each subject \hat{P}_{e,i}^{sy}[j]. The subject bit-error probability \hat{P}_{e,i}^{sy}[j] results from performing 200 matches and determining the relative number of errors. For each match, N_e distinct feature vectors are randomly selected, averaged, and binarized (enrollment phase). The obtained bit is compared with the bit obtained from averaging and binarizing N_v different randomly selected feature vectors of the same subject (verification phase).

We empirically estimate the UB and LB boundaries by clustering the points into equidistant intervals on the x-axis and computing the 95 percentile range of the \hat{P}_e^{sy}[j] values in each interval. The circles (disks) correspond to cases where \hat{P}_e^{sy}[j] is within (outside) the 95 percentile boundaries.
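The matching protocol above can be sketched as follows (our own simulation of a single Gaussian feature; the subject mean, noise level, and match counts are illustrative):

```python
import random

random.seed(1)

def bit_error_rate(mu, sigma_w, n_e, n_v, matches=200):
    # Estimate a subject's bit-error probability by repeated matching:
    # average N_e (resp. N_v) noisy samples, binarize against delta = 0,
    # and count how often the enrollment and verification bits disagree.
    errors = 0
    for _ in range(matches):
        enroll = sum(random.gauss(mu, sigma_w) for _ in range(n_e)) / n_e
        verify = sum(random.gauss(mu, sigma_w) for _ in range(n_v)) / n_v
        errors += (enroll >= 0) != (verify >= 0)
    return errors / matches

p1 = bit_error_rate(0.5, 1.0, 1, 1, matches=5000)
p4 = bit_error_rate(0.5, 1.0, 4, 4, matches=5000)
```

For this setting, (17) predicts roughly 0.43 for N_e = N_v = 1 and 0.27 for N_e = N_v = 4; the simulated rates should fall close to those values.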

D. Validation of the Analytic Expression P_e^{ge}

In this section, we experimentally validate the analytic expression of the bit-error probability P_e^{ge}. In the previous section, we have discussed the use of PCA for decorrelating the feature components and for reducing the dimension to N_F = 31. In order to have more components for the validation, we apply PCA but without reducing the number of features. Hence, we consider the original number of features (696) for database db1. However, for database db2, we only consider 223 components since 25% of the total number of subjects (i.e., 28 subjects) with a total of 224 feature vectors were used to derive the PCA projection. Thus, to avoid singularities, we have reduced the number of features to 223.

To assess the model assumptions, we compared the estimated bit-error probability of the biometric database \hat{P}_e^{db}[j] with the corresponding analytically obtained P_e^{ge}[j]. The same test protocol is used as discussed in Section IV-C. The experimental results for db1 and db2 for different values of N_e and N_v are shown in Figs. 12 and 13, respectively. The circles (disks) correspond to cases where \hat{P}_e^{db}[j] is within (outside) the 95 percentile boundaries. We refer to the number of disks as the estimation error \epsilon_{P_e}. If all the assumptions hold, then we expect the relative \epsilon_{P_e} to be around 5%. Table IV reports the absolute and




Fig. 12. Comparison between P_e^{ge}[j] and \hat{P}_e^{db1}[j] for different settings. (a) N_e = N_v = 1. (b) N_e = N_v = 2. (c) N_e = N_v = 3. (d) N_e = N_v = 4. The circles (disks) correspond to cases where \hat{P}_e^{db1}[j] falls within (outside) the boundaries.

Fig. 13. Comparison between P_e^{ge}[j] and \hat{P}_e^{db2}[j] for different settings. (a) N_e = N_v = 1. (b) N_e = N_v = 2. (c) N_e = N_v = 3. (d) N_e = N_v = 4. The circles (disks) correspond to cases where \hat{P}_e^{db2}[j] falls within (outside) the boundaries.

relative \epsilon_{P_e}. Because \epsilon_{P_e} is noisy due to the random selection of N_e and N_v samples within the test protocol, we repeat the estimation 20 times and report its mean. For db1, \epsilon_{P_e} is 16.7% for N_e = N_v = 1 and decreases to 13% for N_e = N_v = 4. In the case of db2, \epsilon_{P_e} is very large, 27.3%, for N_e = N_v = 1 but decreases significantly when both N_e and N_v are increased, reaching 6.3% when N_e = N_v = 4. Thus, for both databases, there is a clear improvement when increasing the number of samples. We conjecture that the improved bit-error probability estimation performance is due to the fact that the feature value distribution becomes more Gaussian when averaging multiple samples, as stated by the central-limit theorem [24]. In addition, note that many \hat{P}_e^{db1}[j] estimations of db1 are very close to the 95 percentile boundaries; hence, small estimation errors can lead to large variation in \epsilon_{P_e}, which could explain the differences in bit-error probability-estimation performance between db1 and db2 observed in the table.

TABLE IV
NUMBER OF CASES \epsilon_{P_e} WHERE \hat{P}_e^{db}[j] IS OUTSIDE THE 95 PERCENTILE BOUNDARIES PER DATABASE AND \{N_e, N_v\} SETTING

Fig. 14. Normal probability plot of each feature-vector component of db1 and db2 before and after applying PCA. (a) db1 before PCA. (b) db1 after PCA. (c) db2 before PCA. (d) db2 after PCA.

E. Effect of PCA on the Gaussian Assumption

As described in Section II, the analytic framework is based on the Gaussian model assumption. Fig. 14(a) and (c) shows the normal probability plot for each component of the feature vectors of db1 and db2, respectively, before applying the PCA transformation. The normal probability plot is a graphical technique for assessing the degree to which a data set approximates a Gaussian distribution. If the curve of the data closely follows the thick dashed line, then the data can be assumed to be approximately Gaussian distributed. Prior to comparing, we normalized each feature so that it has zero mean and unit variance. For both databases, it is evident that the distributions before applying PCA are not Gaussian because they significantly deviate from the thick dashed line that represents a perfect Gaussian distribution. Fig. 14(b) and (d) shows the normal probability plot for each of the 696 components of db1 and the 223 components of db2, respectively, after applying PCA. For both databases, the figures show that after applying PCA, the features tend to behave more like Gaussians. Yet, the tails deviate the most from being Gaussian, where in most cases the empirical distribution is wider.

Fig. 15 shows the P_e estimations before applying PCA for both databases in two cases: N_e = N_v = 1 and N_e = N_v = 4. Note that before PCA, db1 and db2 have 696 and 1536




components, respectively. For db1, \epsilon_{P_e} is equal to 99.8% for the N_e = N_v = 1 case and 61.2% for the N_e = N_v = 4 case, while for db2, \epsilon_{P_e} is 71% and 18%, respectively. Comparing these results with the \epsilon_{P_e} values when applying PCA (see Table IV), we can also conclude that applying PCA makes the features significantly more Gaussian.

Fig. 15. \hat{P}_e^{dbx}[j] at different settings of N_e and N_v for both db1 and db2 before applying the PCA transform. (a) db1 with N_e = N_v = 1. (b) db1 with N_e = N_v = 4. (c) db2 with N_e = N_v = 1. (d) db2 with N_e = N_v = 4.

Fig. 16. Results for db1 with N_F = 31. (a) and (b) \hat{P}_e^{db1} and the analytical estimation of P_e^{ge}. (c) and (d) \varphi_{ge}(k) and \varphi_{im}(k) pmfs. (e) and (f) The \alpha(T) and \beta(T) curves. The graphs on the left (right) correspond to N_e = N_v = 1 (N_e = N_v = 4).

Fig. 17. Results for db2 with N_F = 31. (a) and (b) \hat{P}_e^{db2} and the analytical estimation of P_e^{ge}. (c) and (d) \varphi_{ge}(k) and \varphi_{im}(k) pmfs. (e) and (f) The \alpha(T) and \beta(T) curves. The graphs on the left (right) correspond to N_e = N_v = 1 (N_e = N_v = 4).

F. Validation of the Analytic Expression of FRR and FAR

For both db1 and db2, we analytically estimate the genuine \varphi_{ge}(k) and impostor \varphi_{im}(k) HD pmfs, and the \beta(T) and \alpha(T) curves. The results are shown in Figs. 16 and 17 for db1 and db2, respectively. The experimentally calculated pmfs are indicated by "Exp," while the ones obtained using the analytical model are indicated by "Mod." The experimental results are obtained using the same protocol as the one discussed in Section IV-C but storing the HD pmfs of each subject instead. We focus on the cases corresponding to N_F = 31, with N_e = N_v = 1 and N_e = N_v = 4.

Both Figs. 16 and 17 indicate that there is a good agreement between \varphi_{im}(k)-Exp and \varphi_{im}(k)-Mod. Large differences are observed between \varphi_{ge}(k)-Exp and \varphi_{ge}(k)-Mod. However, the differences decrease when both N_e and N_v are increased. Averaging multiple independent samples leads to a higher degree of Gaussianity, in accordance with the central-limit theorem. This effect was also observed for the P_e estimation results in the previous section. It is interesting to note the differences between the estimation errors of \varphi_{ge}(k) of db1 and db2. For db1, the centers of gravity of \varphi_{ge}(k)-Exp and \varphi_{ge}(k)-Mod practically coincide. The only difference is the width of the pmfs since the




experimentally obtained pmf is wider than the theoretical one. In the case of db2, we see that there is both an alignment and a width error; \varphi_{ge}(k)-Exp is skewed to the left.

Fig. 18. DET curves for both db1 and db2 for N_F = 31 with different values of N_e and N_v. The values of N_e and N_v are indicated in the legend in the subsequent order. The experimentally obtained curves are denoted by Exp, while the analytical ones by Mod. (a) db1 with N_F = 31. (b) db2 with N_F = 31.

Fig. 19. Approximation of the genuine HD pmf as binomial with \bar{P}_e [(28)] for the N_e = N_v = 4 case with N_F = 31. (a) db1. (b) db2.

Fig. 20. Empirically estimated probability density p_{\kappa_i} using synthetic databases (a) of 2000 subjects with N_F = 31, N_i = 8, and \sigma_b^2[j] = 1, where for case 1, every subject has the same \sigma_{w,i}^2[j] = 1; in case 2, \sigma_{w,i}^2[j] = 1 + \nu_i[j]; and for case 3, \sigma_{w,i}^2[j] = 1 + \nu_i, where \nu_i is drawn from U(-0.4, 0.4) and is redrawn for each feature component separately in case 2. In (b), the comparison between case 1, db1, and db2 is shown.

Eventually, we are interested in estimating the DET curves. Because the DET curves combine both \beta and \alpha, they are thus prone to estimation errors associated with \beta or \alpha. The DET curves for db1 and db2 for N_F = 31 with different values of N_e and N_v are shown in Fig. 18. From this figure, we can conclude that increasing N_e and N_v leads to greater estimation errors of the DET curve, which contradicts the previous finding that increasing N_e and N_v leads to better estimations of P_e and \varphi_{ge}(k). This can be explained by the fact that in the N_e = N_v = 4 case, the area of interest with \beta(T) \in [0.01, 0.1] occurs at smaller values of \alpha(T) because the number of bit errors decreases when N_e and N_v increase, i.e., the performance improves. As shown by the \alpha(T) curves in Figs. 16 and 17, there is a greater estimation error at smaller values of \alpha(T), thus amplifying the estimation error of the DET curve.

A summary of the probable causes for the observed differences, starting from the most probable, is as follows: 1) the nonhomogeneous within-class variance; 2) the dependence between features; and 3) the dependence between bit errors.

db2 clearly seems not to adhere to the homogeneous within-class variance assumption, resulting in a skewed \varphi_{ge}(k) with a large tail. Such a tail is caused by subjects that have, on average, a worse performance than the other subjects. These subjects have many feature components with a larger within-class variance, leading to larger P_e[j] values and thus greater HDs. In the literature, these subjects are referred to as goats [25], [26]. If the features are dependent, then the HD pmf becomes wider while keeping its original mean. This effect is visible for both \varphi_{ge}(k) and \varphi_{im}(k) for both databases. On the other hand, certain disturbances, such as occluded biometric images or strong biometric variabilities, can cause multiple errors to occur simultaneously. Thus, the bit errors are dependent, causing the tails on the right side of the genuine HD pmf. A right tail is slightly visible for db1 but is clearly present for db2, as shown in Fig. 16(c) and (d) and Fig. 17(c) and (d), respectively.

In Section V, we propose a modified model that incorporates the nonhomogeneous within-class variance property, while in Section VI, we further extend the model to include dependences.

V. RELAXING THE HOMOGENEOUS WITHIN-CLASS VARIANCE ASSUMPTION

In this section, we propose a modified model that takes the nonhomogeneous property into account, while still assuming independent feature components. The proposed method makes use of the approximation of the convolution of (2) with the binomial pmf. For the genuine case, this would be

\bar{\varphi}_{ge}(k) = \binom{N_F}{k}\left(\bar{P}_e^{ge}\right)^k\left(1 - \bar{P}_e^{ge}\right)^{N_F - k} \tag{28}

where \bar{P}_e^{ge} is the average bit-error probability across the feature components, \bar{P}_e^{ge} = (1/N_F)\sum_{j=1}^{N_F} P_e^{ge}[j]. The approximate pmfs \bar{\varphi}_{ge}(k) are shown in Fig. 19(a) for db1 and Fig. 19(b) for db2 for the N_e = N_v = 4 case with N_F = 31. For both databases, the approximation is reasonably accurate.

Thus, we can model the nonhomogeneous effect by assuming that \bar{P}_{e,i}^{ge} is not equal for each subject and is distributed according to a probability density p_{\bar{P}_e^{ge}}. The following step consists in determining the pdf p_{\bar{P}_e^{ge}} across the population and computing the average genuine HD pmf defined as

\bar{\Phi}_{ge}(k) = \int_{0}^{1/2} p_{\bar{P}_e^{ge}}(\tau)\,\bar{\varphi}_{ge}(k|\tau)\,\mathrm{d}\tau \tag{29}

where the integral limits are due to the fact that P_e \in [0, 1/2] and \bar{\varphi}_{ge}(k|\tau) is the generic case of (28) as

\bar{\varphi}_{ge}(k|\tau) = \binom{N_F}{k}\tau^k(1-\tau)^{N_F-k}. \tag{30}

We propose a method for estimating p_{\bar{P}_e^{ge}} using only the estimated within-class variance of each subject \hat{\sigma}_{w,i}^2[j]. Because

of the limited number of samples N_i, we know from [23] that the estimation ratio (N_i - 1)\hat{\sigma}_{w,i}^2[j]/\sigma_w^2[j] follows the \chi^2 distribution with N_i - 1 d.f., where \sigma_w^2[j] is the underlying within-class variance that has to be estimated and is assumed to be homogeneous. However, in practice, \sigma_w^2[j] is unknown; therefore, we have to replace it by its estimate \hat{\sigma}_w^2[j]. It is well known that the mean associated with a \chi^2 distribution is equal to its number of d.f.; thus, by omitting the (N_i - 1) multiplication, the ratio has a unit mean.

The next step is to take the average ratio over all feature components as

\kappa_i = \frac{1}{N_F}\sum_{j=1}^{N_F} \hat{\sigma}_{w,i}^2[j]\,/\,\hat{\sigma}_w^2[j]. \tag{31}

Fig. 21. Results of the proposed method incorporating the nonhomogeneous property of db1 and db2 for the cases N_e = N_v = 1 and N_e = N_v = 4 with N_F = 31. (a)–(d) show the HD pmf estimations, while (e)–(h) show the DET curve estimations, where Mod and Mod2 indicate the modeling method without and with the nonhomogeneous property, respectively. In (e) and (f), all the DET curves are plotted using the experimentally obtained \alpha-Exp, while in (g) and (h), we use the \alpha-Exp for the Exp curves and \alpha-Mod for both the Mod and Mod2 curves.




Fig. 22. Results of estimating \vartheta_{opt} from \varphi_{im}-Exp using (35) for the N_e = N_v = 1 case for both databases. The variance-corrected Gaussian approximated curve as described by (34) is shown as \varphi_{im}-Mod-\vartheta. (a) db1, \vartheta_{opt} = 1.11. (b) db2, \vartheta_{opt} = 1.17.

We can model the nonhomogeneous property by assuming that for all components of subject i, the within-class variance is \sigma_{w,i}^2[j] = \kappa_i\sigma_w^2[j]. If the homogeneous assumption holds and the number of features is large, then the pdf of \kappa_i across the whole population becomes Gaussian with unit mean and a variance that decreases when N_F increases. The variance decreases at larger values of N_F because this would be similar to having N_F times more samples and, therefore, a better estimation of its mean. When there are "goatlike" subjects, the homogeneous assumption does not hold, and the variance of the pdf of \kappa_i increases.

Fig. 20(a) shows the empirically estimated pdf of \kappa_i for synthetically generated databases containing 2000 subjects with N_F = 31, N_i = 8, and \sigma_b^2[j] = 1, where for "case 1," every subject has the same \sigma_{w,i}^2[j] = 1; in "case 2," \sigma_{w,i}^2[j] = 1 + \nu_i[j]; and for "case 3," \sigma_{w,i}^2[j] = 1 + \nu_i, where \nu_i is drawn from U(-0.4, 0.4) and is redrawn for each feature component separately in case 2. The results imply that the variance of the \kappa_i pdf increases when \sigma_{w,i}^2[j] is different for each subject (case 2) and increases significantly when there is a positive correlation with the variance offset, for example, when subjects have all their \sigma_{w,i}^2[j] larger or smaller than the average value (case 3). Hence, in case 3, there is a clear existence of goats or doves, where the latter are the subjects that have a small number of bit errors when matched against themselves [27].

Fig. 20(b) compares the \kappa_i pdf of case 1, db1, and db2. The results show that both db1 and db2 do not adhere to the homogeneous property. The \kappa_i pdf found for db1 looks similar to case 3. However, the pdf found for db2 significantly deviates from the synthetic cases, which confirms the existence of goats and doves. This may also explain the significant discrepancy found when estimating the genuine HD pmfs of db2, as shown in Fig. 17.

Now, we can empirically estimate the probability density p_{\bar{P}_e^{ge}} using p_{\kappa_i}. The relationship between \kappa_i and \bar{P}_{e,i}^{ge} is given by

e,iisgiven by

¯Pge

e,i=

1

NF

NF

?

j=1

Pge

e

?

Ne,Nv,

?

κiˆ σ2

w[j], ˆ σb[j]

?

(32)

where we take the average of Pge

using ˆ σb[j] and the modified within-class variance estimation

?κiˆ σ2

Authorized licensed use limited to: UNIVERSITEIT TWENTE. Downloaded on April 20,2010 at 10:23:38 UTC from IEEE Xplore. Restrictions apply.

e[j] across all features, while

w[j]. Because of the nonlinear relationship between

Pge

estimating Pge

In practice, we can rewrite (29) as

e[j] and ˆ σw[j], we take the average over Pge

e, using the average of ˆ σw[j].

e[j] instead of

\bar{\Phi}_{ge}(k) = \frac{1}{N_s}\sum_{i=1}^{N_s}\bar{\varphi}_{ge}\left(k\,\middle|\,\bar{P}_{e,i}^{ge}\right). \tag{33}

We applied this new method for estimating \varphi_{ge}(k) of db1 and db2, and the results are shown in Fig. 21(a)–(d) for the N_e = N_v = 1 and N_e = N_v = 4 cases with N_F = 31, where \varphi_{ge}(k)-Exp is the experimentally obtained pmf, \varphi_{ge}(k)-Mod is obtained using (2), and \bar{\Phi}_{ge}(k)-Mod2 with (33). The results show that \varphi_{ge}-Exp is better approximated when using the new method \bar{\Phi}_{ge}(k)-Mod2. In the case of db1, there is a small improvement, but for db2, there is a significant improvement, and an even better estimation is obtained when N_e = N_v = 4. Furthermore, Fig. 21(e)–(h) shows the DET curve results. In Fig. 21(e) and (f), the same \alpha is used for each DET curve in order to isolate the estimation errors of \varphi_{ge}(k), while in Fig. 21(g) and (h), \alpha-Exp is used for the Exp curves and \alpha-Mod is used for both the Mod and Mod2 curves. With the new method, the DET curve estimation has improved, most significantly for db2. However, the differences between Fig. 21(e) and (f) and Fig. 21(g) and (h) clearly indicate that the remaining estimation errors are caused by the estimation of \alpha. As shown in Fig. 16(c) and (d) and Fig. 17(c) and (d), there is an estimation error of \varphi_{im}, which we consider to be caused by the fact that the feature components are dependent.
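As a sketch of (30) and (33) (ours, with illustrative per-subject error probabilities rather than values estimated from the databases), the population HD pmf is an average of per-subject binomial pmfs:

```python
from math import comb

def binom_pmf(n_f, p):
    # Eq. (30): binomial HD pmf for a given bit-error probability p.
    return [comb(n_f, k) * p ** k * (1 - p) ** (n_f - k)
            for k in range(n_f + 1)]

def mixture_pmf(n_f, p_subjects):
    # Eq. (33): average the per-subject binomial pmfs over the population.
    pmfs = [binom_pmf(n_f, p) for p in p_subjects]
    return [sum(pmf[k] for pmf in pmfs) / len(pmfs)
            for k in range(n_f + 1)]

NF = 31
p_subj = [0.05, 0.10, 0.25]   # illustrative per-subject bit-error rates
phi = mixture_pmf(NF, p_subj)
```

The mixture keeps the mean N_F times the average error rate, while the spread in per-subject rates widens and skews the pmf, consistent with the goat effect described above.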

VI. INCORPORATING FEATURE-COMPONENT DEPENDENCES

In the previous section, we observed that a significant part of the remaining DET estimation errors is related to the estimation errors of the \varphi_{im}-Exp pmf. In this section, we propose a further extension of the analytical framework in order to incorporate dependences between feature components. We propose to estimate the dependence from the \varphi_{im} pmf and apply it to the \varphi_{ge} pmf estimation. Hence, we assume that both pmfs are influenced by the dependence to the same extent.

We estimate the dependence from \varphi_{im}-Exp by fitting it with a Gaussian approximation of the binomial pmf of (9), with the variance as the fitting parameter. For large values of N_F,



the binomial pmf with probability Pe and dimension NF can be approximated by the Gaussian density N(NF Pe, NF Pe(1 − Pe)), with mean NF Pe and variance NF Pe(1 − Pe). For the impostor case, we know that Pe = 1/2, from which its mean and variance become NF/2 and NF/4, respectively. Hence, the Gaussian approximation of the φim-Exp pmf, with the variance parameter ϑ used for fitting, becomes

$$\varphi_{\mathrm{im}}(k)\text{-Mod-}\vartheta = \frac{1}{\sqrt{2\pi\vartheta\sigma^2}}\, e^{-\frac{(k-\mu)^2}{2\vartheta\sigma^2}} = \frac{1}{\sqrt{2\pi\vartheta N_F P_e(1-P_e)}}\, e^{-\frac{(k-N_F P_e)^2}{2\vartheta N_F P_e(1-P_e)}} = \frac{2}{\sqrt{2\pi\vartheta N_F}}\, e^{-\frac{(2k-N_F)^2}{2\vartheta N_F}} \tag{34}$$

where the optimal ϑ is computed by minimizing the mean-square error as

$$\vartheta_{\mathrm{opt}} = \operatorname*{argmin}_{\vartheta} \sum_{k=0}^{N_F} \bigl(\varphi_{\mathrm{im}}(k)\text{-Exp} - \varphi_{\mathrm{im}}(k)\text{-Mod-}\vartheta\bigr)^2. \tag{35}$$
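As an illustration, the fit in (34) and (35) can be carried out numerically. The following is a minimal sketch, assuming the experimental impostor pmf is available as an array over k = 0, …, NF; the function names `phi_im_model` and `fit_theta` are ours, and a simple grid search stands in for the MSE minimizer:

```python
import numpy as np

def phi_im_model(k, n_f, theta, p_e=0.5):
    # Variance-scaled Gaussian approximation of the binomial impostor pmf,
    # as in (34): mean n_f * p_e, variance theta * n_f * p_e * (1 - p_e).
    var = theta * n_f * p_e * (1.0 - p_e)
    return np.exp(-(k - n_f * p_e) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def fit_theta(phi_im_exp, n_f, grid=np.linspace(0.5, 2.0, 1501)):
    # Grid search standing in for the mean-square-error minimization of (35).
    k = np.arange(n_f + 1)
    errors = [np.sum((phi_im_exp - phi_im_model(k, n_f, t)) ** 2) for t in grid]
    return float(grid[int(np.argmin(errors))])
```

A fitted ϑ greater than one indicates that the observed impostor distribution is wider than the independent-bit binomial model predicts, which is how the framework quantifies the feature-component dependence.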

The estimation results of ϑopt for the Ne = Nv = 1 case are shown in Fig. 22 for both databases. The optimal value is ϑopt = 1.11 for db1 and ϑopt = 1.17 for db2. For both databases, ϑopt is very similar, which may indicate that the amount of dependence between the feature components is relatively similar for both databases. Furthermore, the φim-Exp pmf is better estimated when compared with its first estimation disregarding the feature-component dependences, as shown in Fig. 16(c) and Fig. 17(c) for db1 and db2, respectively.

With the Gaussian approximation including the variance correction with ϑopt, we have a better estimation of the φge pmf by rewriting (33) as

$$\varphi_{\mathrm{ge}}(k) = \frac{1}{N_s}\sum_{i=1}^{N_s} \frac{1}{\sqrt{2\pi\sigma^2_{\mathrm{cor}}}}\, e^{-\frac{(k-\bar{P}^{\mathrm{ge}}_{e,i} N_F)^2}{2\sigma^2_{\mathrm{cor}}}} \tag{36}$$

with $\sigma^2_{\mathrm{cor}} = \vartheta_{\mathrm{opt}} N_F \bar{P}^{\mathrm{ge}}_{e,i}(1-\bar{P}^{\mathrm{ge}}_{e,i})$. Because of the Gaussian approximation errors, it does not hold that the sum of the probability mass is equal to one; therefore, we normalize it according to

$$\varphi'_{\mathrm{ge}}(k) = \frac{\varphi_{\mathrm{ge}}(k)}{\sum_{k=0}^{N_F}\varphi_{\mathrm{ge}}(k)}. \tag{37}$$
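The corrected estimate in (36) with the normalization of (37) amounts to averaging one variance-scaled Gaussian per genuine component and renormalizing the result over k = 0, …, NF. A minimal sketch, assuming the per-component error probabilities P̄ge_e,i are supplied as an array (the function name is ours):

```python
import numpy as np

def phi_ge_corrected(p_ge, n_f, theta_opt):
    # Mixture-of-Gaussians estimate of the genuine pmf as in (36): one
    # component per term i, with mean p_ge[i] * n_f and variance
    # sigma^2_cor = theta_opt * n_f * p_ge[i] * (1 - p_ge[i]).
    k = np.arange(n_f + 1)
    p_ge = np.asarray(p_ge, dtype=float)
    var = theta_opt * n_f * p_ge * (1.0 - p_ge)
    comp = np.exp(-(k[None, :] - p_ge[:, None] * n_f) ** 2 / (2.0 * var[:, None]))
    comp /= np.sqrt(2.0 * np.pi * var[:, None])
    phi = comp.mean(axis=0)      # average over the N_s components
    return phi / phi.sum()       # renormalization, as in (37)
```

The explicit renormalization is needed because each truncated, discretized Gaussian carries slightly less or more than unit mass on the integer grid 0, …, NF.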

The estimation results using (37) for the cases of ϑ = 1 and ϑ = ϑopt are shown in Fig. 23. For the ϑ = 1 case, the Gaussian approximation is used without the variance correction. Fig. 23(a)–(d) shows that the φge(k) pmf estimation has slightly improved: the Φ̄′ge-Mod-ϑopt curve is closer to φge(k)-Exp than Φ̄′ge-Mod-ϑ1, across the whole curve for the Ne = Nv = 1 case and mainly in the right tail for the Ne = Nv = 4 case. The same conclusions are also shown by the DET curves of Fig. 23(e)–(f), where each DET curve uses the same α curve, namely, the experimentally obtained α-Exp, in order to isolate the φge(k) pmf estimation errors. The DET curves in Fig. 23(g)–(h) use the actual α curves, thus α-Mod-ϑ1 for the DET-Mod-ϑ1 curves and α-Mod-ϑopt for the DET-Mod-ϑopt curves, respectively. The DET-Mod-ϑopt curve is clearly closer to the DET-Exp curve because α-Mod-ϑopt is a better approximation of α-Exp, as we have shown earlier.

Fig. 23. Results of the proposed method incorporating both the dependence and nonhomogeneous property of db1 and db2 for the cases Ne = Nv = 1 and Ne = Nv = 4, with NF = 31. (a)–(d) show the φge estimations, while (e)–(h) show the DET curve estimation. The label Mod-ϑ1 indicates the new modeling method but with ϑ = 1, hence using only the Gaussian approximation of the binomial pmf, including the nonhomogeneous property. The label Mod-ϑopt indicates the cases where ϑ = ϑopt. In (e) and (f), all the DET curves are plotted using the experimentally obtained α-Exp, while in (g) and (h), we use α-Exp for the Exp curves, α-Mod-ϑ1 for the Mod-ϑ1 curves, and α-Mod-ϑopt for the Mod-ϑopt curves.

VII. PRACTICAL CONSIDERATIONS

In the previous sections, we have presented several analytical models for estimating the DET performance curve. However, as stated previously, because of the use of an ECC, the FRR is lower bounded because of the limited number of bits the ECC can correct. For the setting of NF = 31, which is equal to the codeword length nc, the BCH ECC can correct up to 7 bits, as shown in Table I. The experimentally achieved performance and its analytical estimates at this operating point are given in Table V.
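Given the genuine and impostor pmfs over the Hamming distance, the operating point fixed by an ECC that corrects at most t bit errors follows directly as two tail masses: genuine pairs with more than t errors are falsely rejected, and impostor pairs with at most t errors are falsely accepted. A minimal sketch under that assumption (the function name is ours):

```python
import numpy as np

def operating_point(phi_ge, phi_im, t):
    # phi_ge, phi_im: pmfs over Hamming distances k = 0..N_F.
    # An ECC correcting up to t bits rejects genuine pairs with k > t
    # (contributing to the FRR) and accepts impostor pairs with k <= t
    # (contributing to the FAR).
    frr = float(np.sum(phi_ge[t + 1:]))
    far = float(np.sum(phi_im[:t + 1]))
    return frr, far
```

For the NF = nc = 31 BCH setting discussed above, t = 7 is the only reachable decision threshold, which is why the FRR cannot be traded off below this point.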

The results indicate that at this operating point, there is not a

Authorized licensed use limited to: UNIVERSITEIT TWENTE. Downloaded on April 20,2010 at 10:23:38 UTC from IEEE Xplore. Restrictions apply.
