A Secure Biometric Authentication Scheme Based
on Robust Hashing
Yagiz Sutcu, Polytechnic University, Six Metrotech Center, Brooklyn, NY 11201, ysutcu01@utopia.poly.edu
Husrev Taha Sencar, Polytechnic University, Six Metrotech Center, Brooklyn, NY 11201, taha@isis.poly.edu
Nasir Memon, Polytechnic University, Six Metrotech Center, Brooklyn, NY 11201, memon@poly.edu
ABSTRACT
In this paper, we propose a secure biometric based authentication
scheme which fundamentally relies on the use of a robust hash
function. The robust hash function is a one-way transformation
tailored specifically for each user based on their biometrics. The
function is designed as a sum of properly weighted and shifted
Gaussian functions to ensure the security and privacy of biometric
data. We discuss various design issues such as scalability,
collision-freeness and security. We also provide test results
obtained by applying the proposed scheme to ORL face database
by designating the biometrics as singular values of face images.
Categories and Subject Descriptors
E.m [Data]: Miscellaneous – biometrics, security, robust hashing.
General Terms
Security, Design, Human Factors.
Keywords
Authentication, Biometrics, Robust Hashing, Security, Privacy.
1. INTRODUCTION
Today, as a member of technology driven society, we are faced
with many security and privacy related issues and one of them is
reliable user authentication. Although for most of the cases,
traditional password based authentication systems may be
considered secure enough, the level of security is limited to
relatively weak human memory and therefore, it is not a preferred
method for systems which require high level of security. An
alternative approach is to use biometrics (fingerprints, iris data,
face and voice characteristics) instead of passwords for
authentication. Higher entropy and uniqueness of biometrics make
them favorable in so many applications which require high level
of security, and recent developments of biometrics technology
enable widespread use of biometrics-based authentication
systems.
Despite these qualities, biometrics also have some privacy- and security-related shortcomings. From the privacy point of view, most biometrics-based authentication systems share a common weakest link: the need for a template database. Typically, during the enrollment stage, every user presents a number of samples of their biometric data, and from this information some descriptive features of that type of biometric (e.g., singular values, DCT coefficients) are extracted. By analyzing these extracted features, a template is constructed for each user. During authentication, a matching algorithm tries to match the biometric data acquired by a sensor with the templates stored in the template database, and authentication succeeds or fails according to its result. This enrollment and authentication process is illustrated in Figure 1.
Table 1. Properties of different authentication techniques [6]

Method | Examples | Properties
What you know | User ID, password, PIN | Shared; easy to guess; forgotten
What you have | Cards, badges, keys | Shared; duplication; lost or stolen
What you know + what you have | ATM card + PIN | Shared; PIN is weakest link
Something unique about user | Fingerprint, face, iris, voice, ... | Not possible to share; forging difficult; cannot be lost or stolen
The main weakness of biometrics is that, if they are compromised, there is no way to assign a new template; therefore, storing raw biometric templates should be avoided. However, unlike passwords, the considerable variability of biometric data and the imperfect data acquisition process prevent the use of secure cryptographic hashing algorithms for protecting biometric data. Cryptographic hash functions such as MD-5 and SHA-1 give completely different outputs even if the inputs are very close to each other. This problem led researchers to ask the following question: is it possible to design a robust hashing algorithm such that the hashes of two close inputs are the same (or close), whereas inputs that are not close give completely different outputs?
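The avalanche behavior that rules out ordinary cryptographic hashes is easy to demonstrate. The minimal Python sketch below is our own illustration, not part of the paper; the two feature values are arbitrary.

```python
import hashlib

# Two nearly identical quantized feature values differ in the last digit only.
h1 = hashlib.sha1(b"0.7312").hexdigest()
h2 = hashlib.sha1(b"0.7313").hexdigest()

print(h1)
print(h2)
# The two digests share no useful structure; a robust hash, by contrast,
# should map both inputs to the same (or a very close) output.
```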
In recent years, researchers have proposed many different ideas to overcome this problem. Juels and Wattenberg [1] proposed a fuzzy commitment scheme, which uses a simple quantization idea to define closeness in the input space. Depending on the
quantization level, if the noisy biometric data is close enough to its nominal value determined at the time of enrollment, the user is successfully authenticated. Later, Juels and Sudan [4] proposed the "fuzzy vault" scheme, which combines the polynomial reconstruction problem with error-correcting codes in order to handle unordered feature representations. Tuyls et al. [2], [3] also used error-correction techniques together with quantization to handle the variability of biometric data. Ratha et al. [6] and Davida et al. [5] were among the first to introduce the concept of cancelable biometrics. In [6], the main idea is to use a noninvertible transform to map biometric data to another space and store the mapped template instead of the original one. This approach makes it possible to cancel a template and its corresponding transformation when the biometric data is compromised. Vielhauer et al. [?] also proposed a simple method to calculate biometric hash values using statistical features of online signatures. The idea behind their approach can be summarized as follows: after determining the range of the feature vector components, the length of the extended intervals and the corresponding offset value of each interval are calculated. At the time of authentication, the extracted feature values are first normalized using the previously determined length and offset values and then rounded accordingly to obtain the hash value. Although this approach is simple and fast, hash values cannot be assigned freely due to the nature of the scheme, which makes the collision-resistance performance of the method questionable. Furthermore, the need to store the offset and interval-length values for each individual is another weakness from the security point of view. More recently, Connie et al. [10], Teoh et al. [11], and Jin et al. [12] proposed similar bio-hashing methods for the cancelable biometrics problem. A detailed survey of these approaches can be found in [7].
Figure 1. Enrollment and authentication process of a biometric
authentication system [13].
In this paper, we analyze the performance and feasibility of a biometric-based authentication system that relies on the sequential use of a robust hash function and a cryptographic hash function (e.g., MD-5, SHA-1). The robust hash function is a one-way function designed as a sum of many Gaussian functions. In Section 2, we give the details of our approach and discuss related design issues and challenges. In Section 3, we elaborate on the experimental setup and present simulation results. Our conclusions and the scope of future work are provided in Section 4.
2. PROPOSED SCHEME
In [6], Ratha et al. proposed the use of a noninvertible distortion transform, in either the signal domain or the feature domain, to secure the biometric data of the user. This not only eliminates the need to store biometric templates in the database but also provides the flexibility to change the transformation from one application to another, ensuring the security and privacy of biometric data. Figure 2 illustrates the noninvertible-transformation idea: the value of a feature x is mapped to another space (y) such that, given y, it is not possible to recover the value of x, since the inverse transform is one-to-many. However, in this setup the matching process must be performed in the transformed space, and designing such a transform is not a trivial task because of the characteristics of the feature vector. Typically, depending on the type of biometric used and the feature extraction process, the components of feature vectors take values within some range rather than precise values, so a candidate transform has to satisfy certain smoothness criteria. While providing robustness against the variability of the same user's biometric data, the transformation also has to distinguish different users successfully.
Apart from the difficulty of designing such transformations, the smoothness properties of the transformation might reveal the range information of the feature vector components to some extent. Furthermore, overlapping or even close ranges may pose another problem for this design, making it especially difficult to satisfy the required robustness.
Figure 2. A one-way transformation example.
In this context, besides the one-way transform and error-tolerance requirements, there are other important design issues that need to be addressed. One concern is the scalability of the overall system. Since the number of users may vary over time, the design has to be flexible enough to accommodate the addition and deletion of users. That is, it should be possible to create new accounts at minimum cost while providing collision-free operation. Another design issue is the user-dependence of these transformations. It is extremely difficult, if not impossible, to design a single non-invertible transformation for each user that satisfies all design specifications. Finally, the output space of the candidate transformation needs to be quantized in order to make it possible to combine this transformation with a secure hashing algorithm.
Considering these issues, we propose an alternative form of one-way transformation, combined with a secure cryptographic hash function. The one-way transformation is designed as a combination of several Gaussian functions and serves as the robust hash. The cryptographic hash is used to secure the biometric templates stored in the database.
In this approach, we simply assume that every component of the $n$-dimensional feature vector takes a value in some range, without imposing any constraints on the values and ranges. Let

$V_i = [v_{i1}, v_{i2}, \ldots, v_{in}]^T$

be the $n$-dimensional feature vector of the $i$-th user of the system, whose $j$-th component takes values in the interval

$[v_{ij} - \delta_{ij},\; v_{ij} + \delta_{ij}], \quad i = 1, \ldots, N; \; j = 1, \ldots, n,$

where $2\delta_{ij}$ determines the range of the $j$-th component of the feature vector of the $i$-th user and $N$ is the total number of users.
In the enrollment stage, a sufficient number of biometric data samples is acquired from each user. Using these data, the range information ($\delta_{ij}$) of each user's feature vector is obtained. Once this information is determined, every component of the feature vector is considered separately, and a single Gaussian (the red Gaussian in Figure 3) is fitted to the corresponding range, taking into account the output value assigned to that component of the feature vector. Let us explain this fitting operation with the help of an example.
Consider the $j$-th component, $v_{ij}$, of the feature vector of user $i$. Assume that $v_{ij}$ takes values between $(v_{ij} - \delta_{ij})$ and $(v_{ij} + \delta_{ij})$, and also assume that $o_{ij}$ is the output value assigned to that component of the feature vector. The set of points to be used for Gaussian fitting is

$\{(x_1, y_1), (x_2, y_2), (x_3, y_3)\}$, where

$(x_1, y_1) = (v_{ij} - \delta_{ij},\, o_{ij})$; $(x_2, y_2) = (v_{ij},\, o_{ij} + r)$; $(x_3, y_3) = (v_{ij} + \delta_{ij},\, o_{ij})$,

with $r$ a uniformly selected random number between 0 and 1.
After that stage, a number of fake Gaussian functions are generated and combined with the first one in order to cover the whole range and hide the real range information; this process is repeated n times for every user. The process is illustrated in Figure 3.
Figure 3. Design process of proposed one-way transformation.
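To make the construction concrete, the following Python sketch is our own reconstruction of this design step, not the authors' code. Fitting a Gaussian $a\,e^{-(x-\mu)^2/(2\sigma^2)}$ exactly through the three anchor points above gives $\mu = v_{ij}$, $a = o_{ij} + r$, and a closed-form $\sigma$ (this requires $o_{ij} > 0$). The number of fake Gaussians, the overall span, and the 0.8-1.2 similarity factors are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_real_gaussian(v, delta, o):
    """Fit g(x) = a*exp(-(x - mu)^2 / (2*sigma^2)) through the anchor points
    (v - delta, o), (v, o + r), (v + delta, o), with r ~ U(0, 1).
    Solving the three equations gives mu = v, a = o + r, and sigma below
    (assumes o > 0)."""
    r = rng.uniform(0.0, 1.0)
    a = o + r
    sigma = delta / np.sqrt(2.0 * np.log(a / o))
    return a, v, sigma

def one_way_transform(v, delta, o, n_fake=10, span=(0.0, 10.0)):
    """Sum of the real Gaussian and n_fake fake Gaussians whose magnitudes
    and widths are kept close to the real one to hide the true range."""
    a0, mu0, s0 = fit_real_gaussian(v, delta, o)
    params = [(a0, mu0, s0)]
    for _ in range(n_fake):
        params.append((a0 * rng.uniform(0.8, 1.2),   # similar magnitude
                       rng.uniform(*span),           # fake peak location
                       s0 * rng.uniform(0.8, 1.2)))  # similar width
    return lambda x: sum(a * np.exp(-(x - mu) ** 2 / (2.0 * s ** 2))
                         for a, mu, s in params)
```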
The parameters of these transformations are determined and given to the users by an authorized, trusted third party; this information is stored on a smartcard or token, which must be presented at the time of authentication. The authentication process is performed in the following manner. First, the user's biometric data is acquired with a sensor and his/her feature vector is extracted. Second, the one-way transformation stored on the smart-card is generated and evaluated at the extracted feature vector component values. Finally, the values obtained after quantization are concatenated to form a string and then hashed. The hashed value is compared with the user's database entry for authentication, as illustrated in Figure 4.
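A minimal sketch of this enroll/verify pipeline follows, composing with the `one_way_transform` sketch above (one transform per feature component). The uniform quantization step is an assumption, as the paper leaves the quantizer unspecified; SHA-1 is used here because the paper names it as a candidate cryptographic hash.

```python
import hashlib

def template_hash(feature_vector, transform_fns, step=0.25):
    """Evaluate each component's one-way transform, quantize the output,
    concatenate the quantized values into a string, and hash the string.
    Only the digest is stored in the database."""
    codes = [str(int(round(h(v) / step)))
             for v, h in zip(feature_vector, transform_fns)]
    return hashlib.sha1("|".join(codes).encode()).hexdigest()

def authenticate(feature_vector, transform_fns, stored_hash, step=0.25):
    """A fresh biometric sample authenticates only if its robust-hash
    digest matches the one computed at enrollment."""
    return template_hash(feature_vector, transform_fns, step) == stored_hash
```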
Assuming that the hashing algorithm used in this scheme is secure, an attacker who gains access to the database cannot determine the real values of the feature vector from the hashed values stored there. Furthermore, even if the information on the smartcard is compromised, it remains difficult for an attacker to guess the real values of the user's biometric data by analyzing only the shape of that user's one-way transformation.
This approach is also scalable, not only because generating Gaussians is a relatively simple task, but also because it is possible to generate and assign different output values to each component of a feature vector while guaranteeing collision-free operation. Considering a number of potential users, one can generate an m-by-n matrix (where m is the total number of users and n is the dimensionality of the feature vector), ensuring that no two rows of the matrix are identical. When a new user account is needed, one row of the matrix is assigned to that user, and his/her one-way transformation is designed using these values, as sketched below.
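A sketch of this assignment, under the assumption (ours, not the paper's) that output values are drawn from a small set of quantized levels:

```python
import numpy as np

rng = np.random.default_rng(2)

def output_value_matrix(m, n, levels=16):
    """Draw an m-by-n matrix of output values o_ij (m users, n feature
    components), redrawing until all rows are pairwise distinct so that
    no two users share the same output assignment."""
    while True:
        O = rng.integers(0, levels, size=(m, n))
        if len(np.unique(O, axis=0)) == m:  # all rows distinct
            return O
```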
Figure 4. Authentication process of proposed scheme.
However, since the range information is hidden among the peaks of the Gaussians, these transformations are not used in an efficient manner. An intelligent attacker may observe this weakness and exploit it to reduce the brute-force guessing space for the biometric data. To reduce this information leakage, the number of fake Gaussians should be as high as possible, and the fake Gaussians should have variance and magnitude parameters close to those of the real Gaussian fitted to the real range. In that case, however, especially if the user's range is relatively large compared with the overall range of a specific feature vector component, it is not possible to generate many fake Gaussians: the summation of the overlapping tails of the Gaussians takes relatively high values, which makes the design difficult and gives the resulting transformation poor hiding quality.
Finally, since the proposed approach is generic, the type of biometric data may be changed regularly to assure the privacy and security of the system. The proposed approach is tested on the ORL face database using simple singular-value-based feature vectors, and the performance of the scheme is presented in the following section.
3. EXPERIMENTAL RESULTS
In recent years, singular values have been introduced as feature vectors for face recognition and other applications. In this study, we also use singular values as the feature vector for testing our scheme. In the following subsections, we give a brief explanation of the singular value decomposition and its properties and then describe our experimental setup.
3.1 Singular Value Decomposition
Let us first introduce the singular value decomposition of a
matrix.
Theorem 1 (Singular Value Decomposition). If $A \in \mathbb{R}^{m \times n}$, then there exist orthogonal matrices $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ such that

$A = U \Sigma V^T$, where $\Sigma = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_p)$

with $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p \geq 0$ and $p = \min(m, n)$.
The following theorem provides the necessary information about the sensitivity of the singular values of a matrix.

Theorem 2 (Perturbation). Let $\tilde{A} = A + E \in \mathbb{R}^{m \times n}$ be a perturbation of $A$, and let $\tilde{A} = \tilde{U} \tilde{\Sigma} \tilde{V}^T$ be the SVD of $\tilde{A}$. Then

$|\tilde{\lambda}_i - \lambda_i| \leq \|E\|_2 \quad \text{for } i = 1, \ldots, p,$

where $\|E\|_2$ is the induced 2-norm of $E$.
Since the SVD is a well-known topic in linear algebra, we omit a detailed analysis of this subject; the interested reader may find more details in [9].
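The following NumPy sketch illustrates both theorems on a stand-in image with the ORL dimensions; the random pixels and the noise level are placeholders we chose for illustration, and real code would load an actual face image.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a 92x112 8-bit ORL face image; real code would load one
# from the database instead of generating random pixels.
A = rng.integers(0, 256, size=(112, 92)).astype(float)

sv = np.linalg.svd(A, compute_uv=False)   # singular values, descending
feature_vector = sv[:20]                  # first 20 used as the feature

# Theorem 2 in action: perturbing the image moves each singular value
# by at most the induced 2-norm (spectral norm) of the perturbation.
E = rng.normal(0.0, 2.0, size=A.shape)
sv_perturbed = np.linalg.svd(A + E, compute_uv=False)
assert np.all(np.abs(sv_perturbed - sv) <= np.linalg.norm(E, 2) + 1e-9)
```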
3.2 Experiments and Results
The ORL face database [8] is created for face recognition related
research studies and as a result, differences of facial expressions
of the subjects are more than acceptable limits for a biometric
authentication system. However, since creating a new set of face
images for our study is not trivial, we decided to make our
preliminary tests using this database.
The ORL face database consists of 10 different images of 40 distinct subjects; each image is 92x112 pixels with 8-bit grey levels. In our simulations, we randomly divide the 10 samples of each subject into two parts: a training set of 6 images and a test set of the remaining 4. Only the first 20 singular values of each image are considered, and no data pre-processing techniques (such as principal component analysis (PCA) or linear discriminant analysis (LDA)) are used.
The performance of the proposed scheme is determined in terms of the basic performance measures of biometric systems, namely the False Acceptance Rate (FAR) and the False Rejection Rate (FRR). However, another type of performance measure needs to be considered, owing to the possibility that a one-way transformation designed for a particular user can be used in the authentication of another user (the likelihood of user X authenticating himself as user Y while using user Y's smartcard). This type of error can be interpreted as a factor contributing to the FAR; for clarity, we denote such errors by FAR-II.
In our analysis, we first extract a feature vector from the set of training images and then determine the range of variation of each feature vector component. The range of each component is calculated by measuring the maximum and minimum values observed in the training set and expanding this interval by some tolerance factor (e.g., 5% or 10%) to account for possible variations in a feature value that are not represented within the available training images. Our results for 5% and 10% tolerance factors are given in Tables 2 and 3. It should be remembered that in our experiments we used 6 of the 10 images available for each person to estimate the range and tested the scheme on the remaining images.
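A sketch of this range estimation, under one plausible reading (an assumption on our part) in which the interval is expanded on each side by the tolerance factor times its length:

```python
import numpy as np

def estimate_ranges(training_features, tolerance=0.05):
    """Per-component range estimation from the training set.

    training_features: shape (n_samples, n_components), e.g. the first 20
    singular values of each of the 6 training images of one subject.
    Returns the expanded lower and upper bounds for each component."""
    lo = training_features.min(axis=0)
    hi = training_features.max(axis=0)
    margin = tolerance * (hi - lo)   # 5% or 10% tolerance factor
    return lo - margin, hi + margin
```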
Table 2. FRR results

Correct authentication ratio | # of correctly authenticated subjects (5% tolerance) | # of correctly authenticated subjects (10% tolerance)
4/4 | 2 | 15
3/4 | 8 | 10
2/4 | 13 | 10
1/4 | 13 | 4
0/4 | 4 | 1
Total | 40 | 40
Table 3. FAR-II results

Incorrect authentication ratio | # of incorrectly authenticated subjects (5% tolerance) | # of incorrectly authenticated subjects (10% tolerance)
0/39 | 12 | 1
1/39 | 12 | 7
2/39 | 9 | 3
3/39 | 6 | 4
4/39 | 1 | 25
Total | 40 | 40
Table 2 summarizes the FRR performance of the proposed scheme in the following manner. The first column gives the correct authentication ratio, i.e., the ratio of correctly authenticated unseen test images to the total number of unseen images (4). Each row shows the number of subjects that were successfully authenticated at a given authentication ratio. For example, the number 2 in the second column of the first row indicates that 2 subjects (out of 40) were authenticated successfully for all of their test images. Similarly, the number 4 (second column, fifth row) denotes that 4 subjects (out of 40) were not authenticated at all, indicating that the assumed tolerance factor is not satisfactory.
In Table 3, the FAR-II performance of our scheme is presented in a similar manner. For a given user, each of the remaining 39 users attempts to authenticate using that user's smart-card (one-way transform function) while presenting their own biometric data; the results are summarized in Table 3. The first column of Table 3 gives the ratio of incorrectly authenticated users to the number of remaining users (39). For example, 12 (out of 40) users were never confused with any other user, meaning that none of the remaining 39 users were authenticated as one of them. On the other hand, with a tolerance factor of 10%, there are 25 users whose authentication data collided with those of at least 4 of the remaining 39 users.
In our scheme, no user who uses his/her own smart-card is authenticated as another user; that is, the FAR is zero. The false acceptance results (FAR-II) presented in Table 3, however, indicate the rate of being authenticated as another user while using that user's smart-card. One reason for the relatively high false acceptance rate (especially with a tolerance factor of 10%) is the nature of the ORL face database, which contains images captured under widely varying conditions. As a result, the actual range information of the singular values could not be estimated accurately, owing to the high variation caused by differences in the subjects' facial expressions.
It should be noted that the performance can be further improved by employing data pre-processing techniques such as PCA or LDA. It is reasonable to expect that, when appropriate pre-processing techniques are employed along with higher-dimensional feature vectors (e.g., more than 20 singular values), the performance of the proposed scheme will improve. These considerations will be part of our future work.
4. CONCLUSION AND FUTURE WORK
We proposed a secure biometric-based authentication scheme that employs a user-dependent one-way transformation combined with a secure hashing algorithm, and we discussed design issues such as scalability, collision-freeness, and security. We tested our scheme on the ORL face database and presented simulation results. Preliminary results show that the proposed scheme offers a simple and practical solution to one of the privacy and security weaknesses of biometrics-based authentication systems, namely template security.
To improve these results, our future work is three-fold: (1) finding a more flexible and efficient way to design one-way transformations with fewer parameters; (2) finding a metric for measuring and comparing the data-hiding quality of these one-way transformations; and (3) testing our approach on larger databases and with different types of biometric data.
5. REFERENCES
[1] A. Juels and M. Wattenberg, “A fuzzy commitment scheme,”
in Proc. 6th ACM Conf. Computer and Communications
Security, G. Tsudik, Ed., 1999, pp. 28–36.
[2] J.-P. Linnartz and P. Tuyls, “New shielding functions to
enhance privacy and prevent misuse of biometric templates,”
in Proc. 4th Int. Conf. Audio and Video-Based Biometric
Person Authentication, 2003, pp. 393–402.
[3] E. Verbitskiy, P. Tuyls, D. Denteneer, and J. P. Linnartz,
“Reliable biometric authentication with privacy protection,”
presented at the SPIE Biometric Technology for Human
Identification Conf., Orlando, FL, 2004.
[4] A. Juels and M. Sudan, “A fuzzy vault scheme,” in Proc.
IEEE Int. Symp. Information Theory, A. Lapidoth and E.
Teletar, Eds., 2002, p. 408.
[5] G. I. Davida, Y. Frankel, and B. J. Matt, “On enabling secure
applications through off-line biometric identification,” in
Proc. 1998 IEEE Symp. Privacy and Security, pp. 148–157.
[6] N. Ratha, J. Connell, and R. Bolle, “Enhancing security and
privacy in biometrics-based authentication systems,” IBM
Syst. J., vol. 40, no. 3, pp. 614–634, 2001.
[7] U. Uludag, S. Pankanti, S. Prabhakar, and A. K. Jain,
“Biometric Cryptosystems: Issues and Challenges”,
Proceedings of the IEEE, Vol. 92, No. 6, June 2004.
[8] The ORL Database of Faces, available at
http://www.uk.research.att.com/facedatabase.html
[9] Strang, G., “Introduction to linear algebra”, 1998, Wellesley,
MA, Wellesley- Cambridge Press.
[10] T. Connie, A. Teoh, M. Goh, and D. Ngo, “Palmhashing: a
novel approach for cancelable biometrics”, Elsevier
Information Processing Letters, Vol. 93, (2005) 1-5.
[11] A. B. J. Teoh, D.C.L. Ngo, and A. Goh, “Personalised
cryptographic key generation based on facehashing”,
Elsevier Computers & Security, Vol. 23, (2004), 606-614.
[12] A. T. B. Jin, D.N.C Ling, and A. Goh, “Biohashing: two
factor authentication featuring fingerprint data and tokenized
random number”, Elsevier Pattern Recognition, Vol. 37,
(2004) 2245-2255.
[13] S. Prabhakar, S. Pankanti, and A. K. Jain, "Biometric Recognition: Security and Privacy Concerns", IEEE Security & Privacy, March/April 2003.