A Biometric Encryption System for the Self-Exclusion
Scenario of Face Recognition
Karl Martin, Member, IEEE, Haiping Lu, Member, IEEE, Francis Minhthang Bui, Student Member, IEEE,
Konstantinos N.(Kostas) Plataniotis, Senior Member, IEEE, and Dimitrios Hatzinakos, Senior Member, IEEE
Abstract—This paper presents a biometric encryption system
that addresses the privacy concern in the deployment of the
face recognition technology in real-world systems. In particular,
we focus on a self-exclusion scenario (a special application of
watch-list) of face recognition and propose a novel design of a
biometric encryption system deployed with a face recognition
system under constrained conditions. From a system perspective,
we investigate issues ranging from image preprocessing, feature
extraction, to cryptography, error-correcting coding/decoding,
key binding, and bit allocation. In simulation studies, the proposed
biometric encryption system is tested on the CMU PIE face data-
base. An important observation from the simulation results is that
in the proposed system, the biometric encryption module tends
to significantly reduce the false acceptance rate with a marginal
increase in the false rejection rate.
Index Terms—Biometric encryption, face recognition, privacy,
security, self exclusion, watch list.
BIOMETRICS refers to the automatic recognition of in-
dividuals based on their physiological and/or behavioral
characteristics, such as faces [1], iris, and gait [2]. In this paper,
we focus on the application of the face recognition technology.
Face recognition is one of the three identification methods used
in e-passports and it has an important advantage over other pop-
ular biometric technologies: it is non-intrusive and easy to use.
Among the six biometric attributes considered in [3], facial fea-
tures scored the highest compatibility in a machine-readable
travel documents (MRTD) system based on a number of evalua-
tion factors, such as enrollment, renewal, machine requirements,
and public perception [3].
Despite the various benefits, the use of biometrics can create
significant security risks, especially when there are large cen-
tralized databases of biometric passwords. Therefore, there is
Manuscript received January 22, 2009; revised September 29, 2009. This
work was supported in part by the Ontario Lottery and Gaming Corporation
(OLG). The views, opinions, and findings contained in this paper are those of
the authors and should not be construed as official positions, policies, or deci-
sions of the OLG, unless so designated by other official documentation.
K. Martin, F. M. Bui, K. N. Plataniotis, and D. Hatzinakos are with
the Edward S. Rogers Sr. Department of Electrical and Computer Engi-
neering, University of Toronto, Toronto, ON M5S 3G4 Canada.
H. Lu was with the Edward S. Rogers Sr. Department of Electrical and Com-
puter Engineering, University of Toronto, Toronto, ON M5S 3G4 Canada. He is
now with the Institute for Infocomm Research, Agency for Science, Technology
and Research (A*STAR), Singapore 138632.
Digital Object Identifier 10.1109/JSYST.2009.2034944
a need for biometrics to be deployed in a privacy-enhanced
way that minimizes the possibility of abuse, maximizes indi-
vidual control, and ensures full functionality of the systems in
which biometrics are used [4]. A technology called biometric
encryption has recently emerged to address this concern. For
the case of face recognition, with biometric encryption, instead
of storing a sample of one’s facial image in a database, we can
use the facial image to encrypt or code some other information,
like a PIN or account number, or cryptographic key, and only
store the biometrically-encrypted code, rather than the facial
image itself. This removes the need for public or private sector
organizations to store actual biometric images in their database.
Thus, most privacy and security concerns associated with the
creation of centralized databases are eliminated. Biometric en-
cryption allows an individual’s biometric data to be transformed
into multiple and varied identifiers for different purposes, so that
these identifiers cannot be correlated with one another. More-
over, if a biometric identifier is somehow compromised, a com-
pletely new one may be easily generated from the same bio-
metric data of an individual.
Among the earliest proposals to utilize biometrics as privacy
enhancing solutions was the work by Tomko in 1994, which
highlighted the concept of biometrics encryption [5]. An impor-
tant component of biometric encryption is key binding, which
is the process of securely combining a key with a biometric
derived from some physiological features [6]. One challenge
to this approach is the unreliability of the individual bits in
the biometrics-based cryptographic key, due to the variance of
the input and other distortion sources. Solutions for such bio-
metrics-driven cryptographic systems were first introduced
more than a decade ago, with the biometrics-driven crypto pro-
posal by Bodo [7]. Addressing the same challenge when using
fingerprints, the Bioscrypt solution of Soutar et al. [8] utilizes
the Fourier transform, and a phase-to-phase correlation to lock
the biometric sample with a predefined random key. The bio-
metrics locking approach of [9] prevents recovery of the orig-
inal fingerprints, but with the random keys externally specified.
By extracting fingerprints’ minutiae locations, Clancy et al. [10]
applied the fuzzy vault approach of Juels et al. [9], which is a
polynomial reconstruction method that guarantees obscuration
of the key.
While the earlier solutions focused on fingerprints, other bio-
metrics were subsequently utilized in constructing privacy en-
hancing systems. With face biometrics, a fuzzy vault based cryp-
tographic key generation method was introduced by Wang et al.
[11]. With iris biometrics, a cryptographic signature verification
method without stored reference data was proposed by Davida et
al. [12]. Other notable biometric encryption proposals and vari-
ants include the helper data system (HDS) proposed in 2005
[13], the multi-bit quantization using likelihood ratio method
proposed in 2007 [14], and the quantization index modulation
(QIM) approach, which was first introduced in 2003 [15] and fur-
ther developed in 2007 [16].
In this work, we consider a self-exclusion scenario of face
recognition, which is a special application of watch-list, in con-
trast with the more commonly studied verification or identifica-
tion problems. The goal is to enhance privacy protection com-
pared to traditional system designs. The operating scenario is to
match a few enrolled subjects from a large number of customers,
followed by manual intervention, e.g., by a security guard. In this
case, subjects have limited motivation to spoof enrolled subjects
since the enrolled subjects will be denied access once identified,
and positive matches are followed up by personnel rather than
automatic action. Nonetheless, there could be incentives for an
enrolled subject to spoof subjects not on the watch-list to avoid
the “exclusion”. Possible solutions to this kind of spoofing in-
clude liveness detection [17] and abnormal behavior screening
by security personnel. There is a realistic design constraint that
is often imposed in practical biometric systems: using an ex-
isting (traditional/commercial) face recognition system which
cannot be directly altered. We propose a biometric encryption
system that draws from a number of key technologies including
biometrics, cryptography, pattern recognition, and communica-
tions theory. No previous work or system satisfies all of these
constraints while fitting the specific operating scenario of
self-exclusion. Therefore, although built upon the existing litera-
ture, this work improves on previous work
mainly at the system level, where the specific requirements of
self-exclusion are considered and respective solutions are pro-
posed based on existing works.
This paper is organized as follows. Section II describes the
self-exclusion scenario of face recognition in detail and pro-
poses a biometric encryption system for it that combines a com-
mercial face recognition system with biometric encryption tech-
nology. In Section III, the proposed system is presented in de-
tail, component by component including preprocessing, feature
extraction, cryptographic key module, cryptographic hash func-
tion, error-correcting code module, key binding module, and bit
allocation strategy. Section IV discusses the performance indi-
cators. In Section V, simulation studies are presented and finally,
Section VI concludes this work.
This work was motivated by an Ontario Lottery and Gaming
Corporation (OLG) initiative to evaluate facial recognition for
its self-exclusion gaming initiative. The work is part of a system
that attempts to solve the problem of identifying subjects in
a self-exclusion program using facial recognition, while pro-
tecting the privacy of stored personal information. In this case,
the personal information is considered to be the facial image it-
self, as well as application-specific meta-data related to the sub-
ject’s identity.
The self-exclusion initiative involves identifying voluntarily
enrolled subjects who have entered a gaming facility and
contravened the terms of the program.
Fig. 1. Combined face recognition system and biometric encryption. (a) Enrollment and key binding. (b) Watch list identification and key release.
In this case, the subjects
entering the facilities do not provide claimed identities. In
biometric recognition systems, this is termed the “watch list”
scenario [18], which involves one-to-few matches that compare
a query sample against a list of suspects. In this task, the size of
database is usually very small compared to the possible queries,
and the identity of the probe may not be in the database.
Therefore, the recognition system should first detect whether
the query is on the list or not and if yes, correctly identify it.
In the self-exclusion scenario, the performance requirements
(minimization) are placed on the false rejection rate (FRR),
rather than the false acceptance rate (FAR). This is due to the
OLG requirement that the system should identify as many
enrolled subjects as possible.
There are two other common recognition tasks in biometric
applications: verification and identification. Verification in-
volves a one-to-one match that compares a query sample
against the sample(s) of the claimed identity in the database.
The claim is either accepted or rejected. Identification involves
one-to-many matches that compare a query sample of an
unknown person against the samples of all the persons in the
database to output the identity or the possible identity list of
the query sample.
Fig. 2. Proposed biometric encryption system for enrollment (key binding) and verification (key release).
In this scenario, it is often assumed that the
unknown (query) person belongs to the persons who are in the
database. Currently known biometric encryption approaches
only provide the equivalence of the verification task.
To offer the privacy protection properties of biometric
encryption to the self-exclusion application scenario, the com-
bined face recognition and biometric encryption approach
shown in Fig. 1 is taken [19]. Fig. 1(a) and (b) depicts the
proposed general enrollment and watch list identification sys-
tems, respectively. Subject identification is performed using
a vendor-supplied face recognition system. A biometric en-
cryption module is then incorporated in order to offer privacy
protection of the personal information by way of a bound
cryptographic key which can be used with conventional cryp-
tographic techniques to encrypt the subject’s personal data for
secure application.
As shown, the input during enrollment is the subject’s facial
image as well as a unique identifier (ID). This unique identi-
fier should be anonymous and must not directly reveal private
information about the subject (e.g., the subject’s name should
not be used). This identifier is simply used to connect the ex-
tracted facial feature record stored in the vendor-supplied iden-
tification database to a particular secure sketch in the biometric
encryption system. The terminology “secure sketch” is intro-
duced in [20] as a technique that can be used to reliably repro-
duce error-prone biometric inputs without incurring the security
risk inherent in storing them. In enrollment, a secure sketch is
generated as a result of binding the cryptographic key with the
facial features. During watch list identification, the vendor-sup-
plied system will attempt to match input subjects to those in the
system database. If a match is made, the system will output a
claimed identity (ID) which is input into the face recognition
based encryption system in order to release the key and subse-
quently access the protected private information.
In particular, it should be noted that this combined system
is designed under two basic constraints: 1) the face recognition
system will be a commercial system that cannot be modified at
a low-level and 2) the biometric encryption module is used only
for verification/key release. Therefore, the watch list identifica-
tion is performed by the face recognition system alone and the
verification/key release is then performed by the biometric en-
cryption system. The face recognition system and the biometric
encryption system work in cascade. In the next section, the pro-
posed system will be described in detail.
The configuration of the proposed biometric encryption
system is depicted in Fig. 2, which is a generalization of the
system diagram in [13]. As indicated in the figure, biometric
features are used to verify whether the key associated with a
user should be released or not. If released (i.e., the user identity
is verified), the key can then be used for other security pur-
poses. In order to support secure applications, other modules
are constructed around the cryptographic key starting point.
Graphically, this corresponds to a data signal flow from right to
left in Fig. 2, starting with the cryptographic key module. After
the starting point, two diverging paths are implemented: one is
cryptographic hash to generate a hashed key, and the other is
error-correcting code (ECC) to protect against fuzzy variability
and other distortions. The data signals obtained after ECC are
then used as input to a key binding module. The key binding
module utilizes feature vectors to securely embed the encoded
key and produce another secure sketch, to be used during
verification. In the following, various modules are examined,
with a focus on system-level issues. Those modules that employ
conventional techniques are described in brief while emphasis
is put on the key binding module.
A. Facial Image Preprocessing
Facial image preprocessing is a necessary step for each facial
image before feature extraction. The input is a raw facial image
from the camera and the output is a facial image in a standard-
ized format. The input facial images need to be normalized
against variations which commonly occur, such as rotation,
scaling, and dynamic range of pixel values. The stages in the
facial image preprocessing pipeline include RGB to YCbCr
colour transform, luminance component extraction, rotation,
scaling, histogram equalization, masking and vectorization
[1], [21].
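As a rough illustration, the luminance-extraction, histogram-equalization, and vectorization stages of this pipeline can be sketched as follows. This is a simplified sketch, not the vendor pipeline: geometric normalization (rotation, scaling) and masking are omitted, and the BT.601 luminance weights and NumPy usage are our assumptions.

```python
import numpy as np

def preprocess_face(rgb):
    """Sketch: luminance extraction, histogram equalization, vectorization.
    Rotation, scaling, and masking stages are omitted for brevity."""
    # Y (luminance) component of the RGB -> YCbCr transform (BT.601 weights).
    y = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
         + 0.114 * rgb[..., 2]).astype(np.uint8)
    # Histogram equalization to normalize the dynamic range of pixel values.
    hist = np.bincount(y.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    y_eq = (cdf[y] * 255).astype(np.uint8)
    # Vectorization: flatten the standardized image into a single vector.
    return y_eq.reshape(-1)
```

The output vector is what the feature extraction stage would consume.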
B. Feature Extraction
Feature extraction takes the standardized facial image as the
input and outputs a set of features that are of much lower dimen-
sion than the facial image. There are a large number of feature
extraction algorithms for face recognition proposed in the liter-
ature [1], [22], [23]. Since the focus of this work is biometric
encryption, the simulation studies choose a baseline feature ex-
traction algorithm, the PCA algorithm [24].
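For concreteness, a minimal eigenface-style PCA feature extractor might look as follows; the SVD-based formulation and the number of retained components are illustrative choices on our part, not the exact configuration used in the simulations.

```python
import numpy as np

def fit_pca(train, n_components):
    """train: (num_images, num_pixels) matrix of preprocessed face vectors.
    Returns the training mean and the top principal directions."""
    mean = train.mean(axis=0)
    # SVD of the centered data yields the covariance eigenvectors as rows
    # of vt, ordered by decreasing explained variance.
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:n_components]

def extract_features(image_vec, mean, basis):
    # Project a preprocessed face vector onto the PCA basis.
    return basis @ (image_vec - mean)
```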
C. Cryptographic Key Module
The cryptographic key module is a random number gener-
ator producing a binary key. As a critical point in designing
the system, the cryptographic key is essentially a binary string
to be protected, i.e., securely stored and retrieved using other
supporting modules in the pipeline. The key is to be used for
a secure application, such as encrypting other subject-related
data. A widely employed encryption method is the Advanced
Encryption Standard (AES) [25], which is a symmetric scheme
that has been adopted by various organizations as an encryption
standard.
For practical usage, AES keys with the following three
bit lengths are desirable: 128, 192, and 256, which are referred
to as AES-128, AES-192 and AES-256, respectively. For AES
key selection in the self-exclusion context, in general, the more
stringent the security requirements, the longer the key should be.
However, in a biometric encryption context, it should be noted
that this security advantage can only be reaped if the underlying
modules can support the specified key in the first place. If the
associated biometric errors are unacceptably high, it would not
be meaningful to specify an unachievable key requirement.
D. Cryptographic Hash Function
In the proposed system, instead of storing the actual key, its
hashed version is stored. The cryptographic hash function takes
the cryptographic key as the input and generates its hashed value
as the output. The hash function has two related goals: 1) to con-
ceal (making it computationally infeasible to recover) the cryp-
tographic key in a secure sketch form suitable for storage and
2) to provide a secure comparison method for key verification.
These two goals are the focus of the enrollment and the verifi-
cation stages of the biometric encryption system, respectively.
The hash function accepts a variable-length input, and produces
a fixed-length output [26]. For the proposed system, SHA-256
is used as per NIST recommendation.
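A minimal sketch of these two roles, using Python's standard hashlib (the function names are ours):

```python
import hashlib

def enroll_key(key: bytes) -> bytes:
    """Goal 1: store only the SHA-256 digest of the key, never the key."""
    return hashlib.sha256(key).digest()

def verify_key(candidate: bytes, stored_digest: bytes) -> bool:
    """Goal 2: key release succeeds only if the candidate key recovered
    from the biometric hashes to the stored digest."""
    return hashlib.sha256(candidate).digest() == stored_digest
```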
E. Error-Correcting Code Module
The error-correcting code module takes the cryptographic key
as the input and outputs an error-correcting coded version of it.
This key will ultimately be utilized in an encryption algorithm,
such as AES. It should be noted that the encryption-decryption
procedure is an all-or-none process. In other words, if the keys
used during encryption and decryption do not match exactly,
the recovered data will be incorrect. Therefore, all bits in the
two corresponding binary strings delivered by the underlying
biometric system must be identical for successful decryption.
The feature vectors obtained during enrollment and verifica-
tion differ due to various factors, e.g., inherent variability in
pose, illumination, facial expressions, or environmental noise.
This fuzzy variability results in errors when comparing the fea-
ture vectors. In the proposed system, an ECC is adopted to take
this into account.
In this work, the Bose, Chaudhuri, Hocquenghem (BCH)
codes are used [27]. They are parameterized as (n, k, t), where
n denotes the number of bits in a codeword, k denotes the
number of bits in a message symbol, and t denotes the number
of random bit errors correctable. Relating these parameters to
the requirements on the cryptographic key length (denoted by
L) and the size of the feature vector to be generated from the
feature extraction module, n determines the number of bits
to be bound with a feature vector, k determines the maximum
number of bits in a cryptographic key (i.e., L <= k), and t
determines the number of bit errors allowed.
In addition, it should be noted that the characteristics of
the cryptographic keys impose constraints on the subsequent
schemes to be applied, including the ECC parameters and the
number of feature components to be extracted during enroll-
ment. For example, to support an AES key of L bits, BCH codes
with k >= L are needed. In cases where multiple ECC options
are available for a given key, the underlying feature extraction
properties should be taken into account. For a particular BCH
code, n bits need to be bound with the feature vectors. Thus,
the choice of code affects the computational complexity and the feasibility
of certain types of key binding strategies (e.g., the key binding
strategy could fail if there are not enough useful components for
binding all the bits successfully).
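The selection logic above can be sketched as follows. The list of (n, k, t) triples is a small illustrative sample of binary BCH parameters, not an exhaustive table; the (255, 131, 18) code is the one that appears in the bit-allocation example later in this paper.

```python
# Illustrative (n, k, t) parameters for a few binary BCH codes.
BCH_CODES = [(255, 131, 18), (255, 139, 15), (511, 259, 30)]

def pick_bch(key_bits, num_components, bits_per_component=1):
    """Choose a BCH code with message length k >= key length and codeword
    length n within the binding capacity of the feature vector; among
    feasible codes, prefer the largest error-correcting capability t."""
    capacity = num_components * bits_per_component
    feasible = [(n, k, t) for (n, k, t) in BCH_CODES
                if k >= key_bits and n <= capacity]
    return max(feasible, key=lambda c: c[2]) if feasible else None
```

For a 128-bit key and 255 one-bit components, this selects (255, 131, 18); with too few components, no code is feasible.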
F. Key Binding Module
The objective of the key binding module is to utilize a feature
vector to securely bind the encoded cryptographic key, i.e.,
to generate a secure sketch for storage. Thus, the key binding
module takes the feature vector from the feature extraction
module and the encoded key from the ECC module as inputs
and outputs a secure sketch. For the chosen feature extraction
algorithm (PCA), the number of components that can be kept
depends on the reliability of the components. It should be noted
that each specific choice of a key binding scheme and an ECC
coding results in a specific system performance. In general, to
support longer key sizes requires higher system complexity.
These constraints are design issues that need to be taken into
account for a practical system.
1) QIM-Based Biometric Encryption: Here, we adopt the key
binding method based on quantization index modulation (QIM)
proposed in [15] and [16]. In [15], a theoretical framework based
on QIM was proposed for a one-bit-per-component key binding
strategy. However, neither a complete system description nor
practical simulation results were presented. This approach was
subsequently extended in [16], which allowed for more practical
biometric encryption design criteria to be considered.
In the self-exclusion context, the following implications
when utilizing a QIM-based biometric encryption system can
be noted. The secure sketch consists of continuous values, in
contrast with the binary secure sketch in HDS [13]. QIM-based
biometric encryption binds keys with the feature vector through
index modulation using a quantizer ensemble. In other words,
the processes of feature binarization and key binding in many
other biometric encryption systems [13], [14] are fused in
QIM-based systems. No explicit feature binarization is per-
formed as a distinct step. This makes system performance
tuning more flexible.
The QIM design is applied after the feature extraction
module. In utilizing a feature vector to securely bind the en-
coded cryptographic key (through ECC), i.e., to generate a
secure template or sketch suitable for storage, QIM delivers
several unique and advantageous properties [15], [16]. In
particular, the QIM framework provides more flexibility in
balancing the trade-off between FAR and FRR requirements.
By varying the quantizer step size, it is possible to balance
the security and reliability trade-off. This property is useful in
designing practical biometric encryption systems, which are
potentially subject to a wide range of operating conditions.
2) QIM Encoding and Decoding: Originally proposed for
watermarking applications [28], the QIM construction can be
viewed as binding or embedding a secret message (e.g., the en-
coded cryptographic key) using an ensemble of quantizers. The
information to be embedded determines which quantizer needs
to be used, as specified by an associated codebook. The QIM
implementation using dither modulation [28] is chosen in this
work. In this implementation, the quantization partitions and
reconstruction points of the quantizer ensemble can be defined
as shifted versions of a basis quantizer. The advantage is that
the encoding and decoding procedures are simplified, due to the
well-defined structure offered by the dither quantizers. In the
following, the general mechanisms of QIM for key binding will
be described.
In this work, we consider only the QIM on scalar values so we
are binding the encoded key with the feature vector in a compo-
nent-by-component fashion. For notational simplicity, the fea-
ture component to be bound is denoted by f and the encoded
key segment to be bound is denoted by s; f is of real value
and s is a binary number. A quantizer Q is a function that maps
each point x in the input space into one of the reconstruction
points in a set P, where |P| is finite.
In an N-point QIM, there is an ensemble of N quantizers
{Q_i, i in I} that can map a given x into one of the recon-
struction points of the quantizer ensemble. I is a set of labels
to index the quantizers, with |I| = N. P_i is the set of recon-
struction points of quantizer Q_i. For x = f and i = s, the
QIM function becomes Q_s(f), i.e., the quan-
tizer indexed by s is chosen. In the following, the QIM encoder
and decoder are defined.
Definition 1: Encoder: from an enrollment feature compo-
nent f and an encoded key segment s, a secure sketch r is
obtained using the quantizer indexed by s as
r = Q_s(f) - f.
Thus, the secure sketch generated is the offset between the
input f and the closest reconstruction point of the quantizer Q_s.
Definition 2: Decoder: from a test feature component f' (ob-
tained during verification) and a given sketch r, the decoder
extracts the bound encoded key segment using a minimum dis-
tance scheme as follows:
s_hat = argmin over i in I of d(f' + r, P_i),
where d is an appropriate distance metric.
In other words, the decoder performs the following steps: 1)
compensates for the offset; 2) searches for the closest recon-
struction point from all the quantizers; and 3) takes the label of
the quantizer with the closest reconstruction point, which corresponds
to the embedded message, as the decoded key segment s_hat.
The described decoding scheme can be understood by ob-
serving that
f' + r = Q_s(f) + (f' - f) = Q_s(f) + w,
where w can be interpreted as an equivalent additive noise. This
noise represents the difference between the enrollment feature
component f and the test feature component f'. If w is additive
white Gaussian noise (AWGN), the appropriate distance metric
to be used is simply the absolute value of w. The allowed
difference between f and f' for successful verification (i.e., the
tolerance) is
|f' - f| < d_min/2,
where d_min is the distance between the two closest reconstruction
points belonging to different quantizers in the ensemble. In that case, by searching for
the reconstruction point that is closest (i.e., with the minimal
distance) to f' + r, the secret quantizer label s (i.e., the
encoded key segment) can be recovered.
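The encoder and decoder can be sketched for a single scalar feature component as follows. We assume a dither construction with a base quantizer of step q and N quantizers shifted uniformly by q/N; the function and variable names are ours.

```python
def nearest_point(x, shift, q):
    # Nearest reconstruction point of the base quantizer (step q)
    # after shifting its codebook by `shift`.
    return round((x - shift) / q) * q + shift

def qim_encode(f, s, q, num_q):
    """Encoder: bind key segment label s (0 <= s < num_q) to enrollment
    feature f; the sketch is the offset to Q_s's nearest point."""
    delta = q / num_q                    # shift step between quantizers
    return nearest_point(f, s * delta, q) - f

def qim_decode(f_test, sketch, q, num_q):
    """Decoder: compensate the offset, then return the label of the
    quantizer whose reconstruction point is closest."""
    delta = q / num_q
    x = f_test + sketch
    return min(range(num_q),
               key=lambda i: abs(x - nearest_point(x, i * delta, q)))
```

With q = 1.0 and num_q = 4, the shift step is 0.25, so the decoder tolerates feature noise below 0.125, matching the d_min/2 tolerance discussed above.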
3) Quantizer Construction: The QIM framework described
above establishes the approach in a general manner while it
leaves open a lot of flexibility in the actual design of the quan-
tizers. Generally, for the QIM approach, the size of the partitions
chosen determines the trade-off between the FAR and FRR. As
mentioned previously, the class of dither quantizers [28] is par-
ticularly advantageous, since the associated construction of the
quantizer partitions is simplified. In this case, the number of
quantizers in the ensemble is equal to N = 2^b, where b rep-
resents the number of information bits to be embedded.
When using dither lattice quantizers, the reconstruction
points of the quantizers are all constructed as shifts of a base
quantizer Q_0 = (C_0, P_0), where C_0 and P_0 represent the
quantization partition and reconstruction points, respectively.
Then, the subsequent quantizers are computed with shifted
codebooks. The minimum and maximum reconstruction points
are respectively p_min and p_max. The following construction is made:
p_min = mu - alpha * sigma, p_max = mu + alpha * sigma,
where mu is the mean, sigma is the standard deviation of the feature
component, and alpha is a scaling factor.
The following observations can be made from the above de-
sign. With the assumption of a symmetric distribution of the
feature components, the definition of p_min and p_max specifies an op-
erating dynamic range of values. The value of alpha provides the
tolerance for the quantizer ensemble as follows.
First, the remaining quantizers Q_1, ..., Q_{N-1} are
constructed as dither quantizers [28], with shift step-size
Delta = (p_max - p_min) / N.
In other words, these partitions and reconstruction points are
all shifted by i * Delta from the basis quantizer for the remaining
quantizers Q_i, i = 1, ..., N-1.
With the given design, we have the range of the output secure
sketch to be [-(p_max - p_min)/2, (p_max - p_min)/2] for all inputs within the quan-
tizer dynamic range [p_min, p_max], and the tolerance of the QIM de-
coder to be Delta/2.
From the above, the quantizer range [p_min, p_max] should corre-
spond to the dynamic range of the input features to be effective.
This means, when the distribution is Gaussian, a range covering
several standard deviations should contain a significant portion
of the input range (specified by alpha). However, when this is not the
case (depending on the type of data inputs as well as the feature
extraction algorithm used), the value of alpha may need to be larger
to deliver acceptable tolerance.
Furthermore, a Gray coding scheme [29] is adopted for map-
ping the quantizers to their labels (or indices) so that, for s as a
binary encoded key segment, incremental changes in the feature
vectors result in incremental changes in the recovered key.
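The Gray mapping can be sketched as follows; this is the standard binary-reflected Gray code, used here as a plausible instance of the scheme in [29].

```python
def gray_label(i):
    # Binary-reflected Gray code: adjacent integers map to labels
    # differing in exactly one bit, so a feature drift into a
    # neighboring quantizer corrupts at most one key bit.
    return i ^ (i >> 1)
```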
4) Bit Allocation: In addition, depending on how the feature
components are used to bind a secret message, we can have dif-
ferent implementations of the key binding framework. There are
two general strategies. In the “one-bit per component strategy”,
each component is used to embed one bit of the encoded key se-
quence. For example, when using a BCH code (255, 131, 18), in
order to embed a 128-bit key, a codeword of 255 bits is generated
by the ECC module. Then, at least 255 components are required
for the one-bit per component procedure. In the “multibit per
component strategy”, each feature component is used to embed
a variable number of bits from the encoded key sequence. For
example, each component can be used to embed 3 bits of the
encoded key. Then, to embed 128-bit cryptographic key (which
is ECC-encoded to 255 bits), 85 feature components would be
required. In all cases, the number of feature components that
should be kept depends on the reliability of the components.
Bit allocation refers to the process of assigning an integer
quantity of bits to be embedded into each of the biometric feature
components. This usually applies to the multi-bit strategy. In
general, there are two bit allocation approaches: uniform bit allo-
cation and variable bit allocation based on component reliability.
In the simulations here, we adopt the simple uniform allocation
strategy only. Specifically, based on the total number of bits
required to be bound (depending on the cryptographic constraint
and the choice of ECC), an equal number of bits is allocated to each
retained feature component. This will often leave a number
of bits remaining, which are then simply allocated to the few most
reliable feature components. For example, if 500 feature compo-
nents are used and an ECC codeword length of 1023 bits is used
(i.e., 1023 bits need to be bound), then the first 23 components
will be allocated 3 bits, and the remaining will be allocated 2 bits.
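The uniform strategy, with leftover bits assigned to the leading (most reliable) components, can be sketched as:

```python
def uniform_allocation(total_bits, num_components):
    """Spread total_bits as evenly as possible over the components; the
    remainder goes to the first components, which are assumed to be
    ordered by decreasing reliability."""
    base, extra = divmod(total_bits, num_components)
    return [base + 1 if i < extra else base
            for i in range(num_components)]
```

For the example above, uniform_allocation(1023, 500) assigns 3 bits to each of the first 23 components and 2 bits to each of the remaining 477.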
G. Training Requirements
In general, two main components in the biometric encryption
system require training: feature extraction and key binding/re-
lease. The training requirements of the feature extractor vary
depending on which algorithm is used. Usually, the feature ex-
tractor should be trained on images that match the general na-
ture of the images to be used when the system is deployed (i.e.,
lighting, pose, and resolution). For the biometric encryption key
binding/release, the training requirements generally involve cal-
culating the statistics for each feature component across the pop-
ulation and for individual subjects. Specifically, the mean and
variance must be calculated for each component across the en-
tire enrolled population.
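As a sketch of this training step, the per-component population statistics can be computed from a matrix of enrolled feature vectors (NumPy shown; the array shapes and helper name are illustrative):

```python
import numpy as np

def population_stats(features: np.ndarray):
    """features: (n_samples, n_components) feature vectors pooled across
    the enrolled population. Returns per-component mean and variance."""
    return features.mean(axis=0), features.var(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 154))   # e.g., 200 samples of 154 PCA components
mu, var = population_stats(X)
print(mu.shape, var.shape)        # (154,) (154,)
```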
IV. Performance Measures

Biometric recognition system performance can generally be
measured using two quantities: FAR and FRR. These values
can commonly be varied by way of system parameter choices.
The plot of FAR versus FRR using different parameters gen-
erates what is known as the receiver operating characteristic
(ROC) curve. However, these values have different definitions
depending on whether identification, verification, or watch list
is being performed. Following [18], we give their definitions for
the two scenarios considered in the simulation studies presented
in the next section: watch list and verification.
A. Performance Indicator in the Watch List Scenario
In a watch list operation, enrolled subjects (the watch list) rep-
resent only a small subset of subjects which will be processed
by the system. In this scenario, the system must attempt to detect
whether a given subject entering the premises (termed a probe
subject) is enrolled in the system and, if he or she is enrolled,
identify that subject. When a positive detection and identifica-
tion is achieved, this is considered acceptance in the system.
Conversely, if detection fails despite the subject being in the
watch list, then rejection has occurred.
Biometric templates are usually compared using a similarity
measure. The detection performance is affected by the similarity
threshold $\tau$. Specifically, if a similarity measure $s(\cdot, \cdot)$ is used
to compare two biometric templates, $p_j$ and $g_i$, then a positive
detection is registered when $s(p_j, g_i) \ge \tau$.
Following detection, identification performance is affected
by means of a ranking threshold, $r$, which determines how
many of the enrolled subjects (which achieved positive detection
when compared to the probe subject) may achieve positive
identification. A correct detection and identification is achieved when
$s(p_j, g_j) \ge \tau$ and $\operatorname{rank}(p_j, g_j) \le r$, where $p_j$ is a given
probe subject, and $g_j$ is the matching gallery subject enrolled in the system.
In contrast, a false detection and identification is achieved when
$s(p_j, g_i) \ge \tau$ and $\operatorname{rank}(p_j, g_i) \le r$ for a non-matching gallery subject $g_i$. The probability
of correct detection and identification is

$$P_{DI}(\tau, r) = \frac{\left|\{\, p_j : s(p_j, g_j) \ge \tau,\ \operatorname{rank}(p_j, g_j) \le r \,\}\right|}{|G|}$$

where $G$ represents the set of all gallery subjects and $|G|$ is
the number of gallery subjects. The probability of false rejection,
or FRR, is $1 - P_{DI}(\tau, r)$. The other measure of
performance is the FAR, which is measured as

$$\mathrm{FAR}(\tau) = \frac{\left|\{\, p_j \in P_N : \max_i s(p_j, g_i) \ge \tau \,\}\right|}{|P_N|}$$

where $P_N$ is a set of imposter subjects. In other words, measuring
across a set of imposter subjects, the FAR is determined
by the fraction of those subjects exhibiting a similarity with a
gallery subject greater than the threshold $\tau$.
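Given a matrix of probe-to-gallery similarities, the watch-list detection-and-identification rate and FAR can be estimated as follows (a sketch; the variable names and the FRVT-style bookkeeping are assumptions, not code from the paper):

```python
import numpy as np

def watchlist_rates(S_genuine, true_idx, S_imposter, tau, r):
    """S_genuine: (n_probes, n_gallery) similarities for enrolled probes,
    with true_idx[j] the gallery index matching probe j.
    S_imposter: similarities for probes not on the watch list.
    Returns (P_DI, FRR, FAR) at similarity threshold tau, rank threshold r."""
    S_genuine = np.asarray(S_genuine, float)
    S_imposter = np.asarray(S_imposter, float)
    hits = 0
    for j, gi in enumerate(true_idx):
        s_true = S_genuine[j, gi]
        rank = 1 + int(np.sum(S_genuine[j] > s_true))  # 1 = best rank
        if s_true >= tau and rank <= r:
            hits += 1
    p_di = hits / len(true_idx)
    far = float(np.mean(S_imposter.max(axis=1) >= tau))
    return p_di, 1.0 - p_di, far

p_di, frr, far = watchlist_rates(
    [[0.9, 0.2], [0.3, 0.8]], [0, 1],   # both enrolled probes match at rank 1
    [[0.1, 0.2], [0.95, 0.0]],          # one imposter exceeds the threshold
    tau=0.5, r=1)
print(p_di, frr, far)  # 1.0 0.0 0.5
```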
In the context of the self-exclusion program, the minimization
requirement is generally placed on the FRR rather than
the FAR. This may result in a large FAR,
meaning that a potentially significant number of patrons who
are not enrolled in the system will be falsely identified as being
enrolled. In this case, the identified subjects would undergo a
manual verification process by security personnel as long as this
is manageable.
B. Verification Performance
In 1-to-1 verification operation, the system must verify
whether a probe subject matches a certain claimed identity
(i.e., the identity output through face identification). When a
positive verification is achieved, this is considered acceptance
in the system. Conversely, if verification fails, then rejection
has occurred.
As in the watch list scenario, the verification performance is
affected by the similarity threshold $\tau$. If a similarity measure
$s(\cdot, \cdot)$ is used to compare two biometric templates, $p_j$ and $g_i$, then
a positive verification is registered when $s(p_j, g_i) \ge \tau$.
Thus, a correct verification is achieved when $s(p_j, g_i) \ge \tau$ and
the claimed identity is genuine ($i = j$). A false verification is achieved when
$s(p_j, g_i) \ge \tau$ and the claimed identity is false ($i \ne j$). The probability of correct
verification is then defined as

$$P_V(\tau) = \frac{\left|\{\, p_j : s(p_j, g_j) \ge \tau \,\}\right|}{|P_G|}$$

where $P_G$ is the set of genuine probe subjects. The probability of false rejection, or FRR, is
$1 - P_V(\tau)$. As in the watch list scenario, the other measure of performance
is the FAR, which is measured as follows:

$$\mathrm{FAR}(\tau) = \frac{\left|\{\, p_j \in P_N : s(p_j, g_i) \ge \tau \,\}\right|}{|P_N|}$$

where $P_N$ is again a set of imposter subjects and $g_i$ is the falsely claimed identity.
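Correspondingly, the verification FRR and FAR can be estimated from sets of genuine and imposter similarity scores (a sketch with illustrative names and toy score values):

```python
import numpy as np

def verification_rates(genuine_scores, imposter_scores, tau):
    """genuine_scores: similarities between probes and their true claimed
    identities; imposter_scores: similarities under false identity claims.
    Returns (FRR, FAR) at similarity threshold tau."""
    frr = float(np.mean(np.asarray(genuine_scores) < tau))    # true claims rejected
    far = float(np.mean(np.asarray(imposter_scores) >= tau))  # false claims accepted
    return frr, far

frr, far = verification_rates([0.9, 0.7, 0.4], [0.1, 0.6], tau=0.5)
print(round(frr, 3), round(far, 3))  # 0.333 0.5
```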
V. Simulation Studies

This section presents simulation results of the proposed
biometric encryption system. First, the simulation setup is
described, followed by the resulting baseline recognition
performance without the application of biometric encryption, and
finally the recognition performance of the system with the
proposed biometric encryption modules.
A. Data and Simulation Setup
The simulations were performed on a subset of the Pose, Illu-
mination, and Expression (PIE) database from Carnegie Mellon
University (CMU) [30]. The CMU PIE database contains 68
individuals with face images captured under varying pose,
illumination, and expression. We chose three frontal poses (C07,
C09, C27) under seven illumination conditions (06, 07, 08, 11,
12, 19, 20). Thus, there are about 21 (3 × 7) samples per subject
(with some faces missing), a number that is not difficult to obtain
in practice from voluntary and cooperative subjects using a video camera.
The properties of this CMU PIE subset are listed in Table I.
This database was chosen over other available databases due to
its large number of images per subject. Biometric encryption
schemes usually depend on reliable intra-class (i.e., within sub-
ject) statistics which cannot be calculated using databases with a
small number of images per subject. The simulations were per-
formed using the MATLAB v.7.5.0 computing environment.
The database was partitioned into a gallery set containing
all but one of the images for each of the subjects, and a probe
set containing the single remaining image for each subject. The
gallery set was used for training the feature extractor and the bio-
metric encryption modules as well as enrollment of the subjects.
The probe set was used for testing the recognition performance.
As mentioned earlier, PCA is the chosen feature extraction al-
gorithm and it is trained on the gallery set. The first 154 PCA
components were retained for each image, constituting 95% of
the signal energy.
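The 95%-energy truncation can be reproduced with a standard SVD-based PCA (a sketch; the helper name is illustrative, and the data here is random rather than the PIE gallery):

```python
import numpy as np

def pca_components_for_energy(X, energy=0.95):
    """Return how many leading PCA components of X (n_samples, n_features)
    are needed to capture the requested fraction of signal energy."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)   # singular values
    ratio = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(ratio, energy)) + 1

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 100))
k = pca_components_for_energy(X, 0.95)
print(k)  # number of components capturing 95% of the energy
```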
For the proposed biometric encryption approach, the bio-
metric encryption module is first tested in isolation to determine
the verification performance, and then as part of the whole
system to test the performance in the watch list scenario. In
the watch list scenario, the face recognition module produces
a ranked list of candidate gallery subject identities for each
probe subject tested, as shown in Fig. 1(b). This list of claimed
identities for each probe subject is passed to the biometric
encryption module where verification is performed on each one
individually. The length of the list of claimed identities may
vary between 0 (i.e., unidentified: no matching subject found
in the gallery) and $r$ (the maximum rank allowed for identification).
The system parameter $r$ is to be chosen based on the
application requirements. The final output of the system is the
cryptographic key for subjects producing positive verification.
B. Baseline Watch List Recognition Performance
Since in the self-exclusion scenario the watch list face recognition
operation is to be performed, it is important to first establish
a baseline level of recognition performance to which the
system with biometric encryption will be compared. Thus, the
baseline recognition performance under the watch list scenario
was simulated first.

Fig. 3. Baseline watch list recognition performance using maximum rank
$r = 5$, 10, and 20. The chosen operating points for each scenario are labeled OP5,
OP10, and OP20, respectively.
Using the definitions found in Section IV, each probe subject
$p_j$ is compared with each enrolled gallery subject $g_i$ via a
similarity metric $s(p_j, g_i)$. If $s(p_j, g_i)$ is less than a given threshold $\tau$ for all
gallery subjects, then subject $p_j$ is unidentified and rejected. If
there are gallery subjects for which $s(p_j, g_i)$ is greater than $\tau$, then all
those subjects are ranked according to the value of $s$ (i.e., greater
similarity achieves higher rank) and the first $r$ are returned.
The similarity metric used for classification is the normalized
inner product, defined as follows:

$$s(p_j, g_i) = \frac{\mathbf{v}_{g_i}^{T} \mathbf{v}_{p_j}}{\|\mathbf{v}_{g_i}\| \, \|\mathbf{v}_{p_j}\|}$$

where $\mathbf{v}_{g_i}$ and $\mathbf{v}_{p_j}$ are the $D$-component feature vectors from
gallery subject $g_i$ and probe subject $p_j$, respectively. This represents
the cosine of the angle between the two vectors, with
possible values ranging over $[-1, 1]$. The greater the value of $s$
achieved, the more similar the two compared feature vectors are.
For a given $r$, a set of (FAR, FRR) value pairs is generated by
varying the similarity threshold $\tau$. For the provided simulation
results, $\tau$ was linearly varied in the range $[-1, 1]$, with a total
of 1000 points. As shown in Fig. 3, the recognition performance
was simulated for $r$ = 5, 10, and 20.
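The normalized inner product used for classification is simply the cosine similarity between feature vectors; a minimal sketch:

```python
import numpy as np

def normalized_inner_product(v_g, v_p):
    """Cosine of the angle between two feature vectors; lies in [-1, 1]."""
    v_g = np.asarray(v_g, float)
    v_p = np.asarray(v_p, float)
    return float(v_g @ v_p / (np.linalg.norm(v_g) * np.linalg.norm(v_p)))

print(normalized_inner_product([1.0, 0.0], [1.0, 0.0]))   # 1.0 (identical direction)
print(normalized_inner_product([1.0, 0.0], [0.0, 1.0]))   # 0.0 (orthogonal)
print(normalized_inner_product([1.0, 0.0], [-2.0, 0.0]))  # -1.0 (opposite)
```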
It should be noted that the actual performance values (i.e.,
FAR and FRR) are not significant here, since they depend on
the image database, the feature extractor, and the chosen clas-
sifier—some or all of which will be different in the practical
operating scenario, depending on the choice of vendor. What is
significant in these results is the demonstration of the effect that
the choice of has on recognition performance as well as the
relative recognition performance compared to the system with
biometric encryption.
It should also be noted that the self-exclusion operating sce-
nario requires minimal FRR since this represents the rate at
which enrolled self-exclusion subjects would go undetected and
allowed onto the gaming premises. This is in contrast to many
other face recognition systems reported in the literature, which
place an emphasis on minimizing FAR. As such, for each sce-
nario, an operating point is chosen where FRR is minimized.
These operating points must be fixed in order to simulate the en-
tire system with the biometric encryption module in place. This
is because the operating points determine the identification re-
sults to be passed on to the biometric encryption modules. The
operating points are labeled in Fig. 3 and listed in Table II.
C. Performance of the Proposed Biometric Encryption System
The recognition performance of the proposed QIM-based
biometric encryption system is first simulated in isolation as
a verification operation. The verification results are shown in
Fig. 4, where the results are grouped according to the achieved
key length. It should be noted that short keys are used here to
demonstrate the behavior of the system; some of the key lengths,
such as a 16-bit key, are not intended for practical use. The
key length in this paper is constrained by the feature extraction
method, PCA. This constraint could be alleviated by selecting
an appropriate commercial face recognition product.
For keys with approximately the same length (as grouped in
Fig. 4), it can be seen that a shorter codeword length generally
achieves better performance, since using more low-energy PCA
components tends to make classification more difficult. Next, the
configurations listed in Table III are selected for simulation of
the full watch list system. The results from these four configura-
tions are selected from Fig. 4 and shown in Fig. 5. It should be
noted here that the verification performance in the different cases
is affected not only by the key length but also by the respective
ECC coding configuration and bit allocation scheme.
The full watch list system with the proposed QIM-based bio-
metric encryption module was simulated using the selected op-
erating points OP5, OP10, and OP20 and the four key length
configurations described in Table III. The results are shown in
Fig. 6. As can be seen, for all tested key lengths, the addi-
tion of the QIM biometric encryption module is able to provide
improved recognition results, compared to the operating point
without BE. Specifically, the use of the biometric encryption
module is able to significantly reduce FAR while achieving ap-
proximately the same FRR.
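To illustrate the quantization principle behind the QIM-based module, the following sketch embeds and recovers a single bit using two interleaved quantizers. This is a simplified illustration only: the step size is an arbitrary assumption, and the actual system binds the key through helper data and ECC rather than by storing quantized features.

```python
DELTA = 4.0  # quantization step: a free design parameter, not from the paper

def qim_embed(x: float, bit: int, delta: float = DELTA) -> float:
    """Snap feature value x onto one of two interleaved lattices:
    offset 0 encodes bit 0, offset delta/2 encodes bit 1."""
    d = 0.0 if bit == 0 else delta / 2
    return delta * round((x - d) / delta) + d

def qim_extract(y: float, delta: float = DELTA) -> int:
    """Decide which lattice the (possibly noisy) value y is closer to."""
    d0 = abs(y - delta * round(y / delta))
    d1 = abs(y - (delta * round((y - delta / 2) / delta) + delta / 2))
    return 0 if d0 <= d1 else 1

y = qim_embed(3.3, 1)
print(qim_extract(y))        # 1
print(qim_extract(y + 0.6))  # 1: noise smaller than delta/4 is tolerated
```

The step size trades off robustness (larger steps tolerate more feature noise) against how strongly the embedded bits constrain an attacker, which is why per-component reliability matters when allocating bits.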
D. Discussions
The proposed biometric encryption system is simulated both
in isolation (1-to-1 verification operation) and as part of the full
watch list scenario. In isolation, the proposed biometric encryption
system exhibited performance allowing the reliable binding
of short keys. For the full watch list scenario, the proposed
biometric encryption system achieved improved FAR results
compared to the system without biometric encryption.

Fig. 4. ROC curves for the isolated verification performance with various key
lengths. (a) 16, 19, 21, and 22 bits. (b) 36, 37, and 40 bits. (c) 64, 67, and 71 bits.
(d) 130 and 131 bits.

Fig. 5. ROC curves for the isolated verification performance with selected
configurations for different key lengths.
This can be understood from the fact that the biometric encryption
module receives a candidate list of identities from the watch
list module. Falsely accepted imposter subjects are placed on
the list by the watch list module, while the biometric encryp-
tion module cannot add to this list. Thus, the biometric encryp-
tion module cannot increase the number of subjects falsely ac-
cepted. This is inherent in the system design, which places the watch
list module in series with the biometric encryption module. In
all simulation cases, the biometric encryption module in fact re-
jected many imposter candidates, thus reducing the FAR. However,
a corresponding implication of the system design is that the
full system cannot achieve a lower FRR than the watch list
module alone. This is because subjects falsely rejected by the
watch list module cannot be placed back on the candidate list
by the biometric encryption module. In fact, the biometric en-
cryption module may falsely reject legitimate subjects placed
on the candidate list, thus increasing the FRR.
Therefore, the simulation studies have shown that the biometric
encryption module can significantly reduce the FAR (relative
to the watch list alone) with a marginal (or zero) increase in
the FRR. In addition, the proposed biometric encryption system
is able to produce a curve of operating points, offering system
designers an important degree of freedom in choosing the desirable
operating point.

Fig. 6. ROC curves of the proposed biometric encryption system for the full
watch list system with three operating points. (a) $r = 5$ operating point (OP5).
(b) $r = 10$ operating point (OP10). (c) $r = 20$ operating point (OP20).
VI. Conclusion

This paper presents a biometric encryption system in an
attempt to address the privacy concern in the deployment of
face recognition technology. A self-exclusion scenario of face
recognition is the focus of this research, with a novel design
of a biometric encryption system proposed, integrated with the
face recognition technology. From a system perspective, various
issues are studied, ranging from image preprocessing, feature
extraction, to cryptography, error-correcting coding/decoding,
key binding, and bit allocation. The proposed biometric encryp-
tion system is tested on the CMU PIE face database. Simulation
results demonstrate that in the proposed system, the biometric
encryption module tends to significantly reduce the false accep-
tance rate with a marginal increase in the false rejection rate.
Acknowledgment

The authors would like to thank K. Peltsch from the Ontario
Lottery and Gaming Corporation and Dr. A. Cavoukian and Dr.
A. Stoianov from the Information and Privacy Commissioner of
Ontario for many useful discussions.
References

[1] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “Face recogni-
tion using kernel direct discriminant analysis algorithms,” IEEE Trans.
Neural Netw., vol. 14, no. 1, pp. 117–126, Jan. 2003.
[2] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “MPCA: Mul-
tilinear principal component analysis of tensor objects,” IEEE Trans.
Neural Netw., vol. 19, no. 1, pp. 18–39, Jan. 2008.
[3] R. Hietmeyer, “Biometric identification promises fast and secure pro-
cessing of airline passengers,” Int. Civil Aviation Org. J., vol. 55, no.
9, pp. 10–11, 2000.
[4] A. Cavoukian and A. Stoianov, “Biometric encryption,” Biometric
Technol. Today, vol. 15, no. 3, p. 11, Mar. 2007.
[5] A. Cavoukian and A. Stoianov, “Biometric encryption: A positive sum
technology that achieves strong authentication, security and privacy,”
White Paper, Office of the Information and Privacy Commissioner of
Ontario Mar. 2007.
[6] U. Uludag, S. Pankanti, S. Prabhakar, and A. K. Jain, “Biometric
cryptosystems: Issues and challenges,” Proc. IEEE, vol. 92, no. 6, pp.
948–960, Jun. 2004.
[7] A. Bodo, “Method for Producing a Digital Signature With Aids of a
Biometric Feature,” German Patent DE 42 43 908 A1, 1994.
[8] C. Soutar, D. Roberge, A. Stoianov, R. Gilroy, and B. V. K. V. Kumar,
“Biometric encryption,” ICSA Guide to Cryptography, pp. 649–675, 1999.
[9] A. Juels and M. Sudan, “A fuzzy vault scheme,” in Proc. IEEE Int.
Symp. Information Theory, 2002, p. 408.
[10] T. C. Clancy, N. Kiyavash, and D. J. Lin, “Secure smartcard-based
fingerprint authentication,” in Proc. ACM Workshop on Biometrics:
Methods and Applications, 2003, pp. 42–52.
[11] Y. Wang and K. N. Plataniotis, “Fuzzy vault for face based crypto-
graphic key generation,” in Proc. Biometrics Symp., Sep. 2007, pp. 1–6.
[12] G. I. Davida, Y. Frankel, and B. J. Matt, “On enabling secure appli-
cations through off-line biometric identification,” in Proc. IEEE Symp.
Security and Privacy, May 1998, pp. 148–157.
[13] T. A. M. Kevenaar, G. J. Schrijen, M. v. d. Veen, A. H. M. Akkermans,
and F. Zuo, “Face recognition with renewable and privacy preserving
binary templates,” in Proc. IEEE Workshop on Automatic Identification
Advanced Technologies, Oct. 2005, pp. 21–26.
[14] C. Chen, R. N. J. Veldhuis, T. A. M. Kevenaar, and A. H. M. Akker-
mans, “Multi-bits biometric string generation based on the likelihood
ratio,” in Proc. IEEE Int. Conf. Biometrics: Theory, Applications, and
Systems, Sep. 2007, pp. 1–6.
[15] J. P. Linnartz and P. Tuyls, “New shielding functions to enhance pri-
vacy and prevent misuse of biometric templates,” in Proc. Int. Conf.
Audio and Video Based Biometric Person Authentication, Jun. 2003.
[16] I. Buhan, J. Doumen, P. Hartel, and R. Veldhuis, Constructing Practical
Fuzzy Extractors Using QIM Centre for Telematics and Information
Technology, University of Twente, Enschede, The Netherlands, Tech.
Rep. TR-CTIT-07-52, 2007.
[17] K. Kollreider, H. Fronthalera, and J. Biguna, “Non-intrusive liveness
detection by face images,” Image and Vis. Comput.; Special Issue on
Multimodal Biometrics, vol. 27, no. 3, pp. 233–244, Feb. 2009.
[18] P. Grother, R. J. Micheals, and P. J. Phillips, “Face recognition vendor
test 2002 performance metrics,” in Proc. Int. Conf. Audio and Video
Based Biometric Person Authentication, Jun. 2003, pp. 937–945.
[19] A. Stoianov, private communication, Aug. 2007.
[20] Y. Dodis, L. Reyzin, and A. Smith, “Fuzzy extractors: How to gen-
erate strong keys from biometrics and other noisy data,” in Advances
in Cryptology—EUROCRYPT 2004, ser. Lecture Notes in Computer
Science. Berlin, Germany: Springer, 2004, vol. 3027, pp. 523–540.
[21] H. Lu, “Multilinear Subspace Learning for Face and Gait Recognition,”
Ph.D. dissertation, University of Toronto, Toronto, ON, Canada, 2008.
[22] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “Uncorrelated
multilinear discriminant analysis with regularization and aggregation
for tensor object recognition,” IEEE Trans. Neural Netw., vol. 20, no.
1, pp. 103–123, Jan. 2009.
[23] G. Shakhnarovich and B. Moghaddam, “Face recognition in sub-
spaces,” in Handbook of Face Recognition, S. Z. Li and A. K. Jain,
Eds. Berlin, Germany: Springer-Verlag, 2004, pp. 141–168.
[24] M. Turk and A. Pentland, “Eigenfaces for recognition,” J. Cog. Neu-
rosci., vol. 3, no. 1, pp. 71–86, 1991.
[25] J. Daemen and V. Rijmen, AES Proposal: Rijndael, 1999.
[26] A. J. Menezes, P. C. V. Oorschot, and S. A. Vanstone, Handbook of
Applied Cryptography. Boca Raton, FL: CRC Press, 1997.
[27] R. Morelos-Zaragoza, The Art of Error Correcting Coding. New
York: Wiley, 2006.
[28] B. Chen and G. W. Wornell, “Dither modulation: A new approach to
digital watermarking and information embedding,” in Proc. SPIE Secu-
rity and Watermarking of Multimedia Contents, Apr. 1999, vol. 3657,
pp. 342–353.
[29] W. Press and S. Teukolsky, Numerical Recipes in C, 2nd ed. Cam-
bridge, U.K.: Cambridge Univ. Press, 1992.
[30] T. Sim, S. Baker, and M. Bsat, “The CMU pose, illumination, and ex-
pression database,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25,
no. 12, pp. 1615–1618, Dec. 2003.
Karl Martin (S’00–M’09) received the B.A.Sc. de-
gree in engineering science (electrical specialty) and
the M.A.Sc. degree in electrical engineering from the
University of Toronto, Toronto, ON, Canada, in 2001
and 2003, respectively. He is currently pursuing the
Ph.D. degree in the Edward S. Rogers Sr. Department
of Electrical and Computer Engineering, University
of Toronto.
His research interests include multimedia security
and privacy, biometrics, multimedia processing,
wavelet-based image coding, and object-based coding.
Mr. Martin is a member of the IEEE Signal Processing Society, Communications
Society, and the Circuits and Systems Society. He has been a Technical
Reviewer for numerous journals and conferences. Since 2003, he has held the
position of Vice-Chair of the Signal Processing Chapter, IEEE Toronto Section.
Haiping Lu (S’02–M’09) received the B.Eng. and
M.Eng. degrees in electrical and electronic engi-
neering from Nanyang Technological University,
Singapore, in 2001 and 2004, respectively, and the
Ph.D. degree in electrical and computer engineering
from the University of Toronto, Toronto, ON,
Canada, in 2008.
Currently, he is a Research Fellow with the Insti-
tute for Infocomm Research, Agency for Science,
Technology and Research (A*STAR), Singapore.
Before joining A*STAR, he was a Postdoctoral
Fellow at the Edward S. Rogers Sr. Department of Electrical and Computer
Engineering, University of Toronto. His current research interests include
statistical pattern recognition, machine learning, multilinear algebra, tensor
object processing, biometric encryption, and data mining.
Francis Minhthang Bui (S’99) received the B.A.
degree in french language and the B.Sc. degree in
electrical engineering from the University of Cal-
gary, Calgary, AB, Canada, in 2001 and the M.A.Sc.
and Ph.D. degrees in electrical engineering from
the University of Toronto, ON, Canada, in 2003 and
2009, respectively.
He is currently a Postdoctoral Fellow at the Univer-
sity of Toronto. His research interests include signal
processing methodologies for resource allocation and
security in wireless communication networks.
Konstantinos N. (Kostas) Plataniotis
(S’90–M’92–SM’03) received the B. Eng. degree
in computer engineering from the University of
Patras, Patras, Greece, in 1988 and the M.S. and
Ph.D. degrees in electrical engineering from Florida
Institute of Technology (Florida Tech), Melbourne,
Florida, in 1992 and 1994, respectively.
He is a Professor with The Edward S. Rogers Sr.
Department of Electrical and Computer Engineering,
University of Toronto, Toronto, ON, Canada, an Ad-
junct Professor with the School of Computer Science,
Ryerson University, Toronto, a Member of The University of Toronto’s Knowl-
edge Media Design Institute, and the Director of Research for the Identity, Pri-
vacy and Security Initiative at the University of Toronto. His research interests
include biometrics, communications systems, multimedia systems, and signal
and image processing.
Dr. Plataniotis is the Editor-in-Chief (2009–2011) for the IEEE SIGNAL
PROCESSING LETTERS, a registered professional engineer in the province of
Ontario, and a member of the Technical Chamber of Greece. He is the 2005
recipient of IEEE Canada’s Outstanding Engineering Educator Award “for
contributions to engineering education and inspirational guidance of graduate
students” and the co-recipient of the 2006 IEEE TRANSACTIONS ON NEURAL
NETWORKS Outstanding Paper Award for the paper published in 2003 entitled
“Face Recognition Using Kernel Direct Discriminant Analysis Algorithms”.
Dimitrios Hatzinakos (S’86–M’90–SM’98) received
the Diploma degree from the University of Thessa-
loniki, Thessaloniki, Greece, in 1983, the M.A.Sc.
degree from the University of Ottawa, Ottawa, ON,
Canada, in 1986, and the Ph.D. degree from North-
eastern University, Boston, MA, in 1990, all in elec-
trical engineering.
In September 1990, he joined the Department of
Electrical and Computer Engineering, University of
Toronto, Toronto, ON, where he currently holds the
rank of Professor with tenure. He served as Chair of
the Communications Group of the Department during the period July 1999 to
June 2004. Since November 2004, he has been the holder of the Bell Canada Chair in
Multimedia at the University of Toronto. He is Co-founder and Director of the
Identity, Privacy, and Security Initiative (IPSI) at the University of Toronto. His
research interests are in the areas of multimedia signal processing, multimedia
security, multimedia communications and biometric systems. His experience
includes consulting through Electrical Engineering Consociates Ltd. and con-
tracts with United Signals and Systems Inc., Burns and Fry Ltd., Pipetronix
Ltd., Defense R&D Canada (DRDC), Nortel Networks, Vivosonic Inc., and
CANAMET Inc. He is author/co-author of more than 200 papers in technical
journals and conference proceedings and he has contributed to 12 books in his
areas of interest. He is the co-author of Multimedia Encoding for Access Con-
trol with Traitor Tracing: Balancing Secrecy, Privacy and Traceability (Berlin,
Germany: VDM Verlag Dr. Müller, 2008).
Dr. Hatzinakos is an Associate Editor for the IEEE TRANSACTIONS
ON MOBILE COMPUTING. He served as an Associate Editor for the IEEE
TRANSACTIONS ON SIGNAL PROCESSING from 1998 to 2002 and Guest Editor
for the Special Issue on Signal Processing Technologies for Short Burst
Wireless Communications of Signal Processing (October 2000). He was
a member of the IEEE Statistical Signal and Array Processing Technical
Committee (SSAP) from 1992 to 1995 and Technical Program Co-Chair of
the 5th Workshop on Higher-Order Statistics in July 1997. He is a member
of EURASIP, the Professional Engineers of Ontario (PEO), and the Technical
Chamber of Greece.
... Yet another encryption approach was presented in [92] where the authors demon-strate an algorithm to selectively encrypt objects, as opposed to encrypting rectangular regions of interest. A somewhat orthogonal approach was suggested in [91] to automatically search for persons in video recordings without the video data leaving a sealed off computer system. ...
... In her white papers (e.g. [21,22]), she and her co-authors described -quite similar to [91] -how encrypted video data could be analysed for counter-terrorism purposes without revealing the decrypted data. In earlier publications [20], Cavoukian also referred to the concepts presented in [92]. ...
... encrypted) data. In Section 2.4, two papers have already been mentioned [22,91]. Another impactful approach was presented in [76]. ...
The introduction of computer technology into modern societies has enabled unprecedented possibilities of communication and data processing. While its applications are always intended for - and generally succeed in - improving the living standards, there can be inherent risks of new technologies that need extra attention and consciousness to be thwarted. In particular, one risk coming with the possibilities of vast data processing is data security. Unlike data in the pre-digital ages that could be tracked and controlled relatively easily due to the tangible nature of its carrying media like paper, data in the digital age appears much more elusive to the human senses. Furthermore, the pace at which data is transmitted and processed has surpassed what anybody could naturally perceive or comprehend. Even experts of computer science can only operate the hardware through several levels of abstraction. This leaves computer laypersons in the situation that they can just as little as the experts perceive what is going on inside the chips and networks, but they are furthermore lacking the expert's intuition and ability to conceptually descend from the higher to lower abstraction levels. They must therefore rely on the expert testimony particularly when it comes to the security of their data within the black box as which a computer system appears to them. If there is a lack of trust in the expert's testimony regarding data security, one could say that laypersons have lost their power of informational self-determination. This thesis is a step towards re-empowering citizens of modern societies with this sovereignty, by contributing to an increased trust in computer systems. Most of the author's research contributions have been in the specific field of physically unclonable functions (PUFs). However, the author has also been involved in scientific exchange with researchers from other disciplines to see his work in a broader context. 
A result of this exchange has been the so-called Digital Cloak of Invisibility (DCI). This thesis does therefore not only describe the author's work regarding PUFs but also describes the DCI concept in more detail than published before. A link between these two parts is that PUFs can be one of the components to implement the DCI. The DCI is the result of an ongoing inter-disciplinary cooperation of the author with not just computer scientists but also philosophers, economists and lawyers. It essentially constitutes a separation of powers for the collection, storage and analysis of personal data. The most vivid example for this would be privacy-preserving video surveillance, in which individual-related data is automatically erased from the recorded images. This erasure, however, is done in a cryptographic manner such that it could later be revoked using a specific key. This key is in possession of a party not suspicious of secretly collaborating with those in possession of the anonymised recordings. After elaborating on the DCI, this thesis continues to present the author's work on PUFs. With their property to increase trust in devices, PUFs can play a vital role in a DCI implementation. Because even if the hardware of a DCI device (e.g. a surveillance camera) is designed properly, it has to remain trusted for its entire life cycle. A PUF provides an alternative to storing secret keys in the memory of a circuit. Instead, the keys are derived from physical characteristics at runtime that are unique to each individual chip. Such PUF-generated keys are not just harder to extract from a circuit through hardware attacks, they are also sensitive to such tampering attempts. Because malicious changes to the hardware can impact the physical characteristics from which the keys are derived and thereby alter the key in the process of extracting it. In his work, the author focused on the implementation of delay-based PUFs on Intel field programmable gate arrays (FPGAs). 
Delay-based PUFs take the delays of signal lines as the chip-specific characteristics from which the keys are derived. FPGAs, due to their flexibility and relatively low costs, are both well suited for the implementation of delay based PUFs and a promising technology for DCI implementations. Among the author's main contributions has been the elaboration of how to implement and refine Ring-Oscillator (RO)-PUFs using the Intel FPGA architecture and design tools. Furthermore, he has shown the existence of location-specific biases in the FPGA fabric leading to a lowered uniqueness of the PUF-generated keys and has presented post-processing methods to compensate this. All of this has been done with the help of newly developed metrics and culminated in the creation of a new kind of RO-PUF using programmable delay lines (PDLs) to enable a far superior area efficiency. Such technologies are capable of implementing a DCI system and make it trustworthy. Future work will have to investigate the social and political requirements for its establishment in society. E.g. how can the hardware's trustworthiness be conveyed to laypersons in a traceable and transparent way? Who can act as the different parties in the separation of powers constituted by a DCI and what makes them trustworthy to the populace? At the end of such a research and development process, the implementation of DCIs could enable citizens of modern societies to reclaim sovereignty over their sensitive data, that is to take the power back!
... Several methods are employed to protect biometric data, such as encryption alone, or watermarking combined with encryption. However, encryption may limit the capacity of large-scale biometric systems because it is computationally expensive [1]. Moreover, the templates must be decrypted before template matching, a condition that can also reduce the protection strength. ...
... AES is based on a substitution-permutation network, combining both substitution and permutation. AES uses the Rijndael cipher, which has a fixed block size of 128 bits and a key size of 128, 192, or 256 bits [22]. ...
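As a rough illustration of the substitution-permutation principle mentioned in the excerpt, the toy cipher below runs substitute-then-permute rounds on a 16-bit block. The S-box, bit permutation, block size, and round keys are invented for illustration; this is emphatically not AES or Rijndael, only the design principle behind them.

```python
# Toy 16-bit substitution-permutation network (SPN): each round XORs a
# round key, substitutes 4-bit nibbles via an S-box, then permutes bits.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
INV_SBOX = [SBOX.index(x) for x in range(16)]
PERM = [(4 * i) % 15 for i in range(15)] + [15]   # a bit permutation
INV_PERM = [PERM.index(i) for i in range(16)]

def substitute(block, sbox):
    """Apply the S-box to each of the four 4-bit nibbles."""
    return sum(sbox[(block >> (4 * i)) & 0xF] << (4 * i) for i in range(4))

def permute(block, perm):
    """Move bit at position src to position perm[src]."""
    out = 0
    for src in range(16):
        out |= ((block >> src) & 1) << perm[src]
    return out

def encrypt(block, round_keys):
    for k in round_keys[:-1]:
        block = permute(substitute(block ^ k, SBOX), PERM)
    return block ^ round_keys[-1]   # final whitening key

def decrypt(block, round_keys):
    block ^= round_keys[-1]
    for k in reversed(round_keys[:-1]):
        block = substitute(permute(block, INV_PERM), INV_SBOX) ^ k
    return block
```

Real AES follows the same substitute/permute/key-mix pattern but on 128-bit blocks, with the algebraically designed Rijndael S-box and the MixColumns diffusion layer in place of a plain bit permutation.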
In recent years, Internet of Things (IoT) technologies have made great progress in nuclear energy applications. The protection of their sensitive information has become an important challenge in implementing secure application services, which should meet the major security attributes of confidentiality, availability, and integrity. This paper introduces a security scheme for transmitting sensitive information and securely monitoring critical radiation levels at nuclear facilities. The scheme is evaluated by integrating cryptography and steganography techniques with cloud computing services. The cryptography techniques are based on the Advanced Encryption Standard (AES) and Rivest-Shamir-Adleman (RSA) algorithms, and the scheme uses cryptographic keys extracted from authenticated biometric attributes. The proposed scheme provides a low computational time suitable for fast response in emergencies. It allows secure access to the encrypted sensitive measurements, files, and images with high data integrity and confidentiality. Furthermore, it hides the confidential sensitive information with high capacity and imperceptibility in the transmitted carrier image. The security performance analysis confirms the robustness of the introduced scheme against various attacks through authentication, encryption, and information-hiding techniques. It resists serious attacks, including man-in-the-middle, noise, and Distributed Denial of Service (DDoS) attacks.
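A minimal sketch of the combine-cryptography-with-steganography idea (not the paper's actual AES/RSA scheme): encrypt a message with a one-time pad, then hide the ciphertext bits in the least-significant bits of carrier-image pixels. The function names and the flat pixel-list representation are illustrative assumptions.

```python
import os

def otp_encrypt(message, pad):
    """One-time-pad (XOR) encryption; the pad must be random,
    message-length, and never reused. XOR is also the decryption."""
    return bytes(m ^ p for m, p in zip(message, pad))

def lsb_embed(pixels, payload):
    """Hide payload bits in the least-significant bits of pixel values,
    changing each used pixel by at most 1 (imperceptible)."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    stego = list(pixels)
    for idx, bit in enumerate(bits):
        stego[idx] = (stego[idx] & ~1) | bit
    return stego

def lsb_extract(pixels, n_bytes):
    """Read the hidden bits back out and reassemble the bytes."""
    bits = [p & 1 for p in pixels[:8 * n_bytes]]
    return bytes(sum(bits[8 * j + i] << i for i in range(8))
                 for j in range(n_bytes))

message = b"dose=3.2mSv"                     # hypothetical sensor reading
pad = os.urandom(len(message))
cover = list(range(40, 40 + 8 * len(message)))  # stand-in grayscale pixels
stego = lsb_embed(cover, otp_encrypt(message, pad))
recovered = otp_encrypt(lsb_extract(stego, len(message)), pad)
```

A receiver holding the pad recovers the reading exactly; an observer sees only a carrier image whose pixels differ from the original by at most one intensity level.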
... In order to protect the privacy of individuals, combining biometric recognition with cryptography is a promising approach [14, 20-22, 24-26, 31, 39]. Martin et al. [36] presented a biometric encryption system focused on a self-exclusion scenario. Based on homomorphic encryption, a general framework for multibiometric template protection was introduced in [9]. ...
Face-based biometric recognition is widely used nowadays, and substantial numbers of face images are commonly stored on third-party servers. Since sensitive information about an individual, such as age and health condition, is contained in the facial image, it is necessary to protect its privacy and security. This paper investigates a cancelable color face template protection algorithm. To make full use of the quaternion representation, structural information, namely local variance and gradient, serves as the real part. To achieve revocability and the ability to reissue templates, a strategy of random permutation with a binary matrix is adopted. Afterwards, quaternion-based two-dimensional principal component analysis is employed to extract features, with which an extreme learning machine is trained and used for recognition. Experimental results on four different color face datasets demonstrate that fusing structural information can greatly improve accuracy. More importantly, the random permutation not only preserves the recognition accuracy but also guarantees the security and revocability of the face template.
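The random-permutation strategy for cancelable templates can be sketched as follows: a user-specific key seeds a permutation of the feature vector, so a compromised template can be revoked simply by issuing a new key. This is a simplification of the paper's binary-matrix method; the seeded `random.Random` permutation is an assumption made for illustration.

```python
import random

def cancelable_template(features, user_key):
    """Apply a key-dependent random permutation to a feature vector.
    The same (features, user_key) pair always yields the same template;
    a new user_key yields a fresh, unlinkable template from the same
    biometric features, which is what makes the template revocable."""
    rng = random.Random(user_key)
    order = list(range(len(features)))
    rng.shuffle(order)
    return [features[i] for i in order]

# Hypothetical extracted features and user key, for illustration only.
features = [0.12, 0.50, 0.33, 0.90, 0.07, 0.61]
enrolled = cancelable_template(features, user_key=42)
```

Matching is then done between permuted templates, so the server never stores the raw feature vector; if `enrolled` leaks, re-enrolling with a new key invalidates it.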
... However, in [44], the authors show that a multibiometric system, jointly using voice and face recognition, can provide an effective solution in outdoor conditions. In order to provide user privacy [33], we guarantee the security of biometrics by using cancelable biometrics. Most existing techniques implementing cancelable biometrics exploit non-reversible transformations [35], which project real biometric data into a new space while preserving the topological properties of the original feature space. ...
On-the-field maintenance of complex equipment, which may involve multiple subjects and stakeholders, is one of the most challenging scenarios for Enterprise Rights Management (ERM). In this paper, we present an ERM system that guarantees the “on-site” protection of information confidentiality. In particular, our system features local data encryption and minimal data transfers. A secure key management protocol is executed by the devices operating on-site and the remote manufacturer’s support center, and guarantees an efficient and dynamic enforcement of arbitrary data-provider-defined access policies. Operator identities are verified by means of strong multi-biometric verification schemes while their biometrics are protected by means of cancelable biometrics. To this end, we provide the first experimental evaluation of cancelable biometrics based on the fusion of face and voice biometrics, which may be of independent interest.
... In [33,34], the authors presented a novel biometric encryption system that addresses the privacy issue, using face recognition based on principal component analysis (PCA) features. Instead of storing the cryptographic key, its hashed version is stored, and then PCA features are used to verify whether the key associated with a user should be released or not. ...
The paper presents an improved fuzzy vault approach for face recognition under unconstrained environments. First, we parameterize the number of chaff points needed and determine the threshold separating the genuine points from the chaff points in the vault. The second improvement consists of enhancing the security of the fuzzy vault. Clancy et al. (in: Proceedings of ACM SIGMM2003, 2003) and Mihailescu (in: Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR), 2007) discussed brute-force attacks to prove the weaknesses of the popular Juels and Sudan fuzzy vault method for some parameters, such as the total number of points in the vault. To remedy such limitations, we introduce a cancelable and revocable biometric approach for face recognition based on local binary pattern histograms. In addition to the revocability of the biometric data, we also achieved a very high recognition accuracy of more than 95%, outperforming many of the existing biometric approaches. More importantly, the proposed approach is developed to secure biometric data and achieve excellent recognition accuracy even under unconstrained environments.
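A simplified fuzzy vault in the spirit of the Juels and Sudan scheme discussed above can be sketched as follows: the secret is encoded as polynomial coefficients over a small prime field, genuine biometric features become points on the polynomial, and random chaff points hide them. The tiny field size, the integer feature encoding, and the exact-match unlocking (real systems tolerate feature noise via error correction) are simplifying assumptions.

```python
import random

P = 97  # small prime field for illustration; real vaults use a large field

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest-order coefficient first) mod P."""
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def poly_mul_linear(poly, root):
    """Multiply poly by (x - root) mod P."""
    out = [0] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k] = (out[k] - root * c) % P
        out[k + 1] = (out[k + 1] + c) % P
    return out

def lock_vault(secret_coeffs, genuine_xs, n_chaff, rng):
    """Genuine points lie on the secret polynomial; chaff points do not."""
    vault = [(x, poly_eval(secret_coeffs, x)) for x in genuine_xs]
    used = set(genuine_xs)
    while len(vault) < len(genuine_xs) + n_chaff:
        x, y = rng.randrange(P), rng.randrange(P)
        if x not in used and y != poly_eval(secret_coeffs, x):
            used.add(x)
            vault.append((x, y))
    rng.shuffle(vault)  # hide which points are genuine
    return vault

def unlock_vault(vault, query_xs, degree):
    """Recover the polynomial by Lagrange interpolation over vault points
    whose x-coordinates match the query (no noise tolerance here)."""
    pts = [(x, y) for x, y in vault if x in set(query_xs)][:degree + 1]
    if len(pts) < degree + 1:
        return None  # too few matching features: vault stays locked
    coeffs = [0] * (degree + 1)
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                basis = poly_mul_linear(basis, xj)
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P  # Fermat inverse of denom
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs
```

The brute-force attacks mentioned in the abstract work by repeatedly interpolating random point subsets, which is why the number of chaff points and the polynomial degree are the security-critical parameters.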
... The newly proposed encryption algorithm uses a fingerprint key in the encryption process; that is, it uses fingerprint features to generate the encryption key. This type of key can increase the robustness of the algorithm [8]. ...
... Several means are employed to protect biometric data, such as encryption alone, or watermarking combined with encryption. Encryption can be used as one potential mechanism for protecting the biometric features, as in [89]. However, encryption may limit the capacity of large-scale biometric systems because it can be computationally expensive. ...
The iris biometric is a well-established technology which is already in use in several nation-scale applications, and it is still an active research area with several unsolved problems. This work focuses on three key problems in iris biometrics, namely: segmentation, protection and cross-matching. Three novel methods in each of these areas are proposed and analyzed thoroughly. In terms of iris segmentation, a novel iris segmentation method is designed based on a fusion of an expanding and a shrinking active contour by integrating a new pressure force within the Gradient Vector Flow (GVF) active contour model. In addition, a new method for closed eye detection is proposed. The experimental results on the CASIA V4, MMU2, UBIRIS V1 and UBIRIS V2 databases show that the proposed method achieves state-of-the-art results in terms of segmentation accuracy and recognition performance while being computationally more efficient. In this context, improvements by 60.5%, 42% and 48.7% are achieved in segmentation accuracy for the CASIA V4, MMU2 and UBIRIS V1 databases, respectively. For the UBIRIS V2 database, a superior time reduction is reported (85.7%) while maintaining a similar accuracy. Similarly, considerable time improvements by 63.8%, 56.6% and 29.3% are achieved for the CASIA V4, MMU2 and UBIRIS V1 databases, respectively. With respect to iris biometric protection, a novel security architecture is designed to protect the integrity of iris images and templates using watermarking and Visual Cryptography (VC). Firstly, for protecting the iris image, text which carries personal information is embedded in the middle-band frequency region of the iris image using a novel watermarking algorithm that randomly interchanges multiple middle-band pairs of the Discrete Cosine Transform (DCT). Secondly, for iris template protection, VC is utilized to protect the iris template.
In addition, the integrity of the stored template in the biometric smart card is guaranteed by using the hash signatures. The proposed method has a minimal effect on the iris recognition performance of only 3.6% and 4.9% for the CASIA V4 and UBIRIS V1 databases, respectively. In addition, the VC scheme is designed to be readily applied to protect any biometric binary template without any degradation to the recognition performance with a complexity of only O(N). As for cross-spectral matching, a framework is designed which is capable of matching iris images in different lighting conditions. The first method is designed to work with registered iris images where the key idea is to synthesize the corresponding Near Infra-Red (NIR) images from the Visible Light (VL) images using an Artificial Neural Network (ANN) while the second method is capable of working with unregistered iris images based on integrating the Gabor filter with different photometric normalization models and descriptors along with decision level fusion to achieve the cross-spectral matching. A significant improvement by 79.3% in cross-spectral matching performance is attained for the UTIRIS database. As for the PolyU database, the proposed verification method achieved an improvement by 83.9% in terms of NIR vs Red channel matching which confirms the efficiency of the proposed method. In summary, the most important open issues in exploiting the iris biometric are presented and novel methods to address these problems are proposed. Hence, this work will help to establish a more robust iris recognition system due to the development of an accurate segmentation method working for iris images taken under both the VL and NIR. In addition, the proposed protection scheme paves the way for a secure iris images and templates storage. 
Moreover, the proposed framework for cross-spectral matching will help to employ the iris biometric in several security applications such as surveillance at-a-distance and automated watch-list identification.
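The template-protection step above (VC shares plus hash signatures for integrity) can be illustrated with a minimal XOR-based 2-out-of-2 sharing sketch: each share alone is uniformly random, and a stored hash lets a verifier detect tampering. The thesis may use a different (e.g., OR-based, pixel-expanding) VC construction; this sketch only conveys the idea.

```python
import hashlib
import random

def split_template(template_bits, rng):
    """Split a binary template into two shares. Each share alone is
    uniformly random (reveals nothing); XOR-ing both recovers it."""
    share1 = [rng.randrange(2) for _ in template_bits]
    share2 = [b ^ s for b, s in zip(template_bits, share1)]
    return share1, share2

def recover_template(share1, share2):
    return [a ^ b for a, b in zip(share1, share2)]

def template_digest(bits):
    """Hash signature stored alongside, so any tampering with the
    shares is detected when the recovered template is re-hashed."""
    return hashlib.sha256(bytes(bits)).hexdigest()

template = [1, 0, 1, 1, 0, 0, 1, 0]        # toy binary iris template
s1, s2 = split_template(template, random.Random(7))
stored_digest = template_digest(template)
```

One share might live on the smart card and the other on the server; recognition requires both, and the digest guards the card-resident share's integrity.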
Cloud computing is one of the most valuable innovations for business, research, and development, providing cheap virtual services that would otherwise require expensive local hardware. With the availability of cloud computing we place almost everything in the cloud, but what do we really know about its security? One of the security risks in cloud computing, according to Garfunkel, is data intrusion. Privacy preservation of online data while offering efficient functionality has become an important and focused research issue. Encryption is one of the fundamental techniques for managing the digital rights of any personal or other confidential information. In this paper we present a privacy-preserving image recognition system operating on MSB-encrypted face images, performed using one-time padding, followed by face image recognition with principal component analysis (PCA) and the scale-invariant feature transform (SIFT) on selected images. MSB encryption protects the image, yielding a low PSNR, from an untrusted managing-authority cloud server. The whole face recognition process is performed in the encrypted domain.
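The MSB-plus-one-time-pad idea can be sketched as follows: XOR a random pad bit into the most-significant bit of each pixel, which scrambles the perceptually dominant bit plane (hence the low PSNR) while leaving the lower bit planes intact for matching. The flat pixel-list representation and the single scrambled bit plane are illustrative assumptions about the paper's scheme.

```python
import random

def msb_encrypt(pixels, pad_bits):
    """XOR a one-time-pad bit into the most-significant bit (bit 7) of
    each 8-bit pixel; lower bit planes are left untouched."""
    return [p ^ (b << 7) for p, b in zip(pixels, pad_bits)]

def msb_decrypt(pixels, pad_bits):
    return msb_encrypt(pixels, pad_bits)  # XOR is its own inverse

rng = random.Random(3)
face = [rng.randrange(256) for _ in range(64)]   # stand-in 8x8 image
pad = [rng.randrange(2) for _ in range(64)]      # one-time pad, one bit/pixel
cipher = msb_encrypt(face, pad)
```

Because only bit 7 is touched, the cloud server can still run feature extraction on the surviving low-order structure while the visually meaningful content stays hidden from it.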
This book presents essential principles, technical information, and expert insights on multimedia security technology. Illustrating the need for improved content security as the Internet and digital multimedia applications rapidly evolve, it presents a wealth of everyday protection application examples across a range of fields. Giving readers an in-depth introduction to different aspects of information security mechanisms and methods, it also serves as an instructional tool on the fundamental theoretical framework required for the development of advanced techniques.
Images of faces, represented as high-dimensional pixel arrays, often belong to a manifold of intrinsically low dimension. Face recognition, and computer vision research in general, has witnessed a growing interest in techniques that capitalize on this observation, and apply algebraic and statistical tools for extraction and analysis of the underlying manifold. In this chapter we describe in roughly chronological order techniques that identify, parameterize and analyze linear and nonlinear subspaces, from the original Eigenfaces technique to the recently introduced Bayesian method for probabilistic similarity analysis, and discuss comparative experimental evaluation of some of these techniques. We also discuss practical issues related to the application of subspace methods for varying pose, illumination and expression.
We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
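The projection onto eigenfaces described above reduces, for a single component, to centering the data and finding the leading eigenvector of the covariance; a face is then characterized by its weight along that direction. The pure-Python power-iteration sketch below is illustrative only; real eigenface systems compute many components at once with a full eigendecomposition or SVD.

```python
def first_principal_component(data, n_iter=200):
    """Power iteration on the (unnormalized) covariance X^T X to find the
    leading eigenvector; for face images, this is the first eigenface."""
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - mean[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(n_iter):
        # w = X^T (X v), computed without forming the d x d covariance
        proj = [sum(x * y for x, y in zip(row, v)) for row in centered]
        w = [sum(proj[i] * centered[i][j] for i in range(n)) for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def project(face, mean, component):
    """Weight of a face along one eigenface (its feature coordinate)."""
    return sum((f - m) * c for f, m, c in zip(face, mean, component))
```

Recognition then amounts to comparing these weights: a probe face's weight vector is matched against the stored weight vectors of known individuals.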
This article is a précis of a paper released by the Information and Privacy Commissioner of Ontario, Canada, Ann Cavoukian, and biometric scientist, Alex Stoianov. The paper considers biometric encryption and describes how the technique could achieve strong authentication, security and privacy.
We consider the problem of embedding one signal (e.g., a digital watermark), within another 'host' signal to form a third, 'composite' signal. The embedding must be done in such a way that minimizes distortion between the host signal and composite signal, maximizes the information-embedding rate, and maximizes the robustness of the embedding. In general, these three goals are conflicting, and the embedding process must be designed to efficiently trade-off the three quantities. We propose a new class of embedding methods, which we term quantization index modulation (QIM), and develop a convenient realization of a QIM system that we call dither modulation in which the embedded information modulates a dither signal and the host signal is quantized with an associated dithered quantizer. QIM and dither modulation systems have considerable performance advantages over previously proposed spread-spectrum and low-bit(s) modulation systems in terms of the achievable performance trade-offs among distortion, rate, and robustness of the embedding. We also demonstrate these performance advantages in the context of 'no-key' digital watermarking applications, in which attackers can access watermarks in the clear. We also examine the fundamental limits of digital watermarking from an information theoretic perspective and discuss the achievable limits of QIM and alternative systems.
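Scalar QIM with two interleaved (dithered) quantizers can be sketched in a few lines: the message bit selects which lattice the host sample is quantized to, and the decoder reports whichever lattice the received sample is closer to. The step size `delta` and the one-sample scalar setting are illustrative simplifications of the paper's general vector framework.

```python
def qim_embed(sample, bit, delta):
    """Quantize the host sample with one of two interleaved uniform
    quantizers, offset by delta/2, selected by the message bit."""
    dither = 0.0 if bit == 0 else delta / 2.0
    return round((sample - dither) / delta) * delta + dither

def qim_extract(sample, delta):
    """Decode by finding which quantizer lattice the sample is nearer to;
    correct as long as the perturbation stays below delta/4."""
    d0 = abs(sample - round(sample / delta) * delta)
    d1 = abs(sample - (round((sample - delta / 2) / delta) * delta + delta / 2))
    return 0 if d0 <= d1 else 1
```

Increasing `delta` raises robustness (larger tolerable noise) at the cost of higher embedding distortion, which is exactly the distortion/robustness trade-off the abstract describes.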
Building on the success of the first edition, which offered a practical introductory approach to the techniques of error concealment, this book, now fully revised and updated, provides a comprehensive treatment of the subject and includes a wealth of additional features. The Art of Error Correcting Coding, Second Edition explores intermediate and advanced level concepts as well as those which will appeal to the novice. All key topics are discussed, including Reed-Solomon codes, Viterbi decoding, soft-output decoding algorithms, MAP, log-MAP and MAX-log-MAP. Reliability-based algorithms GMD and Chase are examined, as are turbo codes, both serially and parallel concatenated, as well as low-density parity-check (LDPC) codes and their iterative decoders.
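As a toy instance of the error-correcting idea the book covers (far simpler than the Reed-Solomon, turbo, or LDPC codes it treats), the Hamming(7,4) code below corrects any single bit error in a 7-bit codeword: exactly the property that key-binding biometric schemes rely on to absorb bit flips caused by biometric noise.

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits placed at
# positions 1, 2 and 4; the syndrome directly names the error position.
def hamming_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3  # 0 means no error detected
    if pos:
        c[pos - 1] ^= 1         # flip the single erroneous bit
    return [c[2], c[4], c[5], c[6]]
```

In a key-binding system the key material is encoded this way before being bound to the biometric, so that a genuine but slightly noisy sample still releases the exact key.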