Fake Finger Detection by Skin Distortion Analysis
Athos Antonelli, Raffaele Cappelli, Dario Maio, and Davide Maltoni, Member, IEEE
Abstract—Attacking fingerprint-based biometric systems by
presenting fake fingers at the sensor could be a serious threat for
unattended applications. This work introduces a new approach
for discriminating fake fingers from real ones, based on the anal-
ysis of skin distortion. The user is required to move the finger
while pressing it against the scanner surface, thus deliberately
exaggerating the skin distortion. Novel techniques for extracting,
encoding and comparing skin distortion information are formally
defined and systematically evaluated over a test set of real and
fake fingers. The proposed approach is privacy friendly and does
not require additional expensive hardware besides a fingerprint
scanner capable of capturing and delivering frames at proper rate.
The experimental results indicate the new approach to be a very
promising technique for making fingerprint recognition systems
more robust against fake-finger-based spoofing attempts.
Index Terms—Biometric systems, fake fingers, security, skin
distortion, skin elasticity.
BIOMETRIC systems offer great benefits with respect to
other authentication techniques: in particular, they are
often more user friendly and can guarantee the physical pres-
ence of the user. Thanks to their good performance and to the
growing market of low-cost acquisition devices, fingerprint-
based identification/verification systems are becoming very
popular and are being deployed in a wide range of applications:
from PC logon to electronic commerce, from ATMs to phys-
ical access control [18]. On the other hand, it is important to
understand that, as any other authentication technique, finger-
print recognition is not totally spoof-proof. The main potential
threats for fingerprint-based systems are [28], [29]:
1) attacking the communication channels, including replay attacks on the channel between the sensor and the rest of the system;
2) attacking specific software modules (e.g., replacing the feature extractor or the matcher with a Trojan horse);
3) attacking the database of enrolled templates;
4) presenting fake fingers to the sensor.
Recently, the feasibility of the last type of attack has been reported by some researchers [19], [25]: they showed that it is actually possible to spoof some fingerprint recognition systems with well-made fake fingertips (Fig. 1), created with the collaboration of the fingerprint owner or from a latent fingerprint; in the latter case, the procedure is more difficult but still possible.
Manuscript received January 18, 2006; revised May 3, 2006. This work was supported by the European Commission (BioSec—FP6 IST-2002-001766). The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Anil Jain.
A. Antonelli is with Biometrika s.r.l., Forlì 47100, Italy.
R. Cappelli, D. Maio, and D. Maltoni are with DEIS—Università di Bologna, Cesena (FO) 47023, Italy.
Digital Object Identifier 10.1109/TIFS.2006.879289
A deep study on the feasibility of spoofing some commercial
fingerprint scanners was performed by the authors within the
BioSec project [1], [5], [2]. From the critical review of the
related bibliography (as described in Section II) and from the
24 months of experience we accumulated by making hundreds of
fake fingers with different materials and procedures and using
them to spoof existing fingerprint scanners (of different types:
optical, capacitive, thermal, RF-based, etc.), we may draw
some conclusions.
1) Forging a fake finger is not as easy as some authors claim, even when the person whose finger has to be cloned is cooperative; it is necessary to find the right materials to mould the cast, learn the right process, and handle the artificial finger with care.
2) Creating a fake finger from a latent fingerprint is significantly more difficult, requiring a skill comparable to that of a forensic expert equipped with the appropriate tools.
3) To the best of our knowledge and from the experience gained testing recent scanners provided with fake-detection mechanisms, nowadays, in spite of the claims of some fingerprint scanner producers, no commercial fingerprint scanner (among those we tested) seems to be resistant to well-made fake fingerprints.
4) The lack of satisfactory solutions to reject fake fingers shows that there are many challenges in fake detection; more research and investment in fingerprint fake-detection methods are needed.
This work introduces a novel method for discriminating fake
fingers from real ones, based on the analysis of a peculiar char-
acteristic of the human skin: its elasticity. When a real finger
moves on a scanner surface, it produces a significant amount of
distortion, which can be observed to be quite different from that
produced by fake fingers. Usually fake fingers are more rigid
than skin and the distortion is definitely lower; even if highly
elastic materials are used, it seems very difficult to precisely
emulate the specific way a real finger is distorted, because the
behavior is related to the way the external skin is anchored to
the underlying derma and influenced by the position and shape
of the finger bone.
The analysis of skin distortion requires as input a sequence
of frames instead of a single static image. For this purpose,
the fingerprint scanner must be able to deliver a set of frames
(Fig. 2) to the processing unit at a high speed (at least 20 frames
per second). In our study, we used the prototype of a fingerprint
scanner that the company Biometrika developed within the
BioSec project [5] (Fig. 3).
A database of video sequences has been collected, acquiring images both from real and fake fingers. Systematic experiments have been performed to understand how well the proposed method is capable of discriminating real from fake fingers; the results achieved are very promising.
1556-6013/$20.00 © 2006 IEEE
Fig. 1. Fake fingertips created with different materials. From left to right: gelatin, silicone, and latex.
Fig. 2. Set of frames acquired while a finger was rotating over the surface of a fingerprint scanner.
Fig. 3. Specific version of the scanner Fx3000 (by Biometrika) that allows one to acquire and transfer frames to the host at 20 fps.
The rest of this work is organized as follows. Section II summarizes the state of the art in this field, Section III describes the proposed approach, Section IV reports the experimentation carried out to validate the new technique, and finally Section V draws some conclusions.
Several papers have been recently devoted to this important topic: from the analysis of potential weaknesses in generic biometric systems [28], [29], [34], to experiments aimed at investigating how current fingerprint verification systems can be spoofed [6], [16], [19], [25], [33]; from proposals of possible solutions [1], [2], [9], [10], [20], [24], to surveys of the current state of the art [31].
It is worth noting that the idea of spoofing fingerprint recognition systems by using a fake reproduction of the fingertip is not a novelty. The idea seems to have been described for the first time by the mystery writer R. A. Freeman in the book "The Red Thumb Mark" [12], published in 1907. More recently, James Bond in the film "Diamonds Are Forever" (1971) was able to spoof a fingerprint check with a thin layer of latex glued on his fingertip [35]. However, only recently have some researchers published the results of experiments aimed at analyzing such vulnerability.
1) In [25], the authors described two methods for creating fake fingers: duplication with cooperation and without cooperation; in both cases, the material used to create the fakes was silicone; six different commercial fingerprint scanners were tested, and the authors reported being able to spoof all of them at the first or second attempt.
2) In [19], it was reported that fakes created with gelatin were more effective, in particular against scanners based on solid-state sensors [18]; similar to [25], the authors described cooperative and noncooperative fake-creation methods; 11 commercial fingerprint scanners were tested, with a success rate higher than 67% for both the cooperative and the noncooperative scenarios.
3) In [6], three commercial fingerprint scanners were tested: all of them were spoofed by fake fingers made of gelatin, with a level of ease depending on the scanner and software used.
4) In [16], the studies reported in [25] and [19] were extended by testing new scanners that included specific fake-detection measures; the authors concluded that such measures were able to reject fake fingers made of nonconductive materials (such as silicone), but were not able to detect conductive materials such as gelatin.
The main fake finger detection techniques that have been proposed to date can be roughly classified as explained in the rest of this section.
1) Analysis of skin details in the acquired images: using very high-resolution sensors (e.g., 1000 dpi) allows the capture of some details that may be useful for fake detection, such as sweat pores [18] or the coarseness of the skin texture [20]. In fact, it has been experimentally noted that typical materials used to make fake fingers (e.g., gelatin) usually consist of large organic molecules that tend to amalgamate, resulting in a surface coarser than human skin, where small details such as pores are absent or poorly reproduced.
2) Analysis of static properties of the finger: additional hardware is used to capture information such as temperature [25], impedance or other electric measurements [15], [32], odor [2], and spectroscopy [21]. In [2], electronic noses are used with the aim of detecting the odor of those materials that are typically used to create fake fingers (e.g., silicone or gelatin); spectroscopy-based techniques expose the skin to multiple wavelengths of light and analyze the reflected spectrum: nonhuman tissues show a spectrum usually quite different from human ones. Other techniques [7] direct light to the finger from two or more sources and capture fingerprint images under different illuminations: the authors claim that it is possible to discriminate between real and fake fingers by comparing such differently illuminated images.
3) Analysis of dynamic properties of the finger, such as skin perspiration [10], [24], pulse oximetry [23], blood pulsation [17], [23], and skin elasticity [1], [11], [9]. To date, fake detection by skin perspiration is probably the technique most deeply studied in scientific publications: the idea is to exploit the perspiration of the skin that, starting from the pores, diffuses in the fingerprint pattern following the ridge lines, making them appear darker over time. In [24], the perspiration process is detected through a time series of images acquired from the scanner over a time window of a few seconds. Skin elasticity, which produces distortion in the acquired fingerprint images [18], has been studied in some previous works, but mainly focusing on the problems that such distortion causes to fingerprint matching algorithms [3], [22], [27], [30], or trying to find a mathematical model to explain its behavior [8]. In [11], it was suggested that the acquisition of a video sequence of fingerprint images could be used to define a new type of biometric feature, which combines a physiological trait (fingerprint) with behavioral traits (e.g., a particular movement of the finger on the sensor chosen by the user); the authors underlined that this new biometric feature, among its other advantages, could be harder to spoof, but they did not report any experiment with fake fingers. In [1], we briefly introduced a fake-detection approach based on skin distortion and reported some preliminary results. In this paper, the whole technique is described, and experiments with a new prototype scanner are reported and discussed.
The user is required to place a finger onto the scanner surface and to apply some pressure while rotating the finger in either a clockwise or counter-clockwise direction (this particular movement has been chosen after some initial tests, as it seems quite easy for the user and it produces the right amount of distortion). A sequence of frames is acquired at a high frame rate during the movement and analyzed to extract relevant features related to skin distortion. Although the finger can be rotated at different speeds, we experimentally found that an angular speed of about 15° per second is optimal for measuring the distortion.
Some constraints are enforced to simplify the subsequent processing steps; in particular:
1) any frame such that the amount of rotation with respect to the previous one (inter-frame rotation) is less than a minimum threshold is discarded (the inter-frame rotation angle is calculated as described in Section III-B); this threshold is a parameter whose optimal value has been experimentally determined as 0.25° (see Section IV-B);
2) only frames acquired while the total rotation of the finger is less than a maximum angle are considered: when this angle has been reached, the acquisition halts (the rotation angle of the finger is calculated as described in Section III-E1). This maximum angle is a parameter that was set to 15° in the experimentations (see Section IV-B); hence, if we assume an angular speed of about 15° per second, on average, the user is required to rotate the finger for about 1 s before the system informs her or him that the acquisition process is terminated.
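The frame-selection constraints above can be sketched as follows; the function and parameter names are ours, not from the paper, and the inter-frame angles are assumed to be already available:

```python
def select_frames(frames, inter_frame_angles, min_step=0.25, max_total=15.0):
    """Keep only frames whose inter-frame rotation exceeds min_step degrees,
    stopping once the accumulated rotation reaches max_total degrees."""
    kept, total = [], 0.0
    for frame, angle in zip(frames, inter_frame_angles):
        if angle < min_step:        # too little movement: discard the frame
            continue
        kept.append(frame)
        total += angle
        if total >= max_total:      # enough rotation: acquisition halts
            break
    return kept
```

With the paper's values (0.25° minimum step, 15° total), a rotation at about 15°/s terminates after roughly one second of movement.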
Let a sequence of images satisfying the above constraints be given: each frame is first segmented by isolating the fingerprint area from the background; then, for each frame, the following steps are performed (Fig. 4):
1) computation of the optical flow between the current frame and the next one;
2) computation of the distortion map;
3) temporal integration of the distortion map;
4) computation of the DistortionCode from the integrated distortion map.
At the beginning of the sequence, the finger is assumed to be relaxed (i.e., nondistorted), without any superficial tension; this is reasonable since, when the finger approaches the sensor platen, there is no skin distortion.
Fig. 4. Main steps of the feature extraction approach: a sequence of acquired fingerprint images is processed to obtain a sequence of DistortionCodes.
The isolation of the fingerprint area from the background is performed by computing the gradient of the image block-wise: each square block of the frame whose gradient module exceeds a given threshold is associated to the foreground [18] (Fig. 5). Only foreground blocks are considered in the rest of the algorithm.
A. Computation of the Optical Flow
Block-wise correlation is computed to detect the new position of each block in the next frame: for each block, a movement vector denotes its estimated displacement from the current frame to the following one.
Fig. 5. Fingerprint image before and after the segmentation from the background.
Fig. 6. From left to right: two consecutive images, their difference (reported to graphically highlight the movement) and the corresponding optical flow.
Fig. 7. Optical flow before (on the left) and after (on the right) the regularization process.
A graphical representation of the movement vectors (see Fig. 6) is also known in the literature as the optical flow [4].
This method is in theory only translation-invariant but, since the images are taken at a fast frame rate, for small blocks it is possible to assume a certain rotation- and deformation-invariance.
The block size (in pixels) is a parameter that should be adjusted according to the sensor area and resolution. If the blocks are too small, they do not contain enough information to univocally identify their positions in the subsequent frame. On the other hand, if they are too large, two problems may arise: the algorithm becomes computationally expensive, and the distortion could make the matching unfeasible. To increase the accuracy of the optical flow, the blocks can also be partially overlapped: in this case, the distance between the centers of two consecutive blocks is smaller than the block size.
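The block-wise correlation can be illustrated by an exhaustive block-matching sketch; the function name, the sum-of-squared-differences criterion, and the search window size are our assumptions, not details stated in the paper:

```python
import numpy as np

def block_flow(f0, f1, block=8, search=4):
    """Estimate per-block movement vectors between two consecutive frames
    by exhaustive block matching (sum of squared differences)."""
    h, w = f0.shape
    flow = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = f0[y:y + block, x:x + block]
            best, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
                        cand = f1[yy:yy + block, xx:xx + block]
                        ssd = float(((ref - cand) ** 2).sum())
                        if best is None or ssd < best:
                            best, best_v = ssd, (dx, dy)
            flow[(x, y)] = best_v   # estimated movement of the block at (x, y)
    return flow
```

A real implementation would restrict the search to foreground blocks and could use normalized correlation instead of SSD; the sketch conveys the per-block matching idea only.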
In order to filter out outliers produced by noise, by false correlation matches, or by other anomalies, the optical flow is then regularized as follows.
1) Each movement vector whose norm exceeds the largest movement of the blocks of the previous frame by more than a given tolerance is discarded. This step allows the removal of outliers, under the assumption that the movement of each block cannot deviate too much from the largest movement of the blocks of the previous frame; the tolerance is a parameter that should correspond to the maximum expected acceleration between two consecutive frames.
2) For each block, an average movement vector is calculated as the weighted average of its 3 x 3 neighbourhood, using a 3 x 3 Gaussian mask; elements discarded by the previous step are not included in the average: if no valid elements are available, the average vector is marked as invalid.
3) Each movement vector whose deviation from the corresponding average vector exceeds a given threshold is discarded. This step allows the removal of elements that are not consistent with their neighbours; the threshold is a parameter that controls the strength of this procedure.
4) The average movement vectors are recalculated as in step 2), but considering only the elements retained at step 3).
Fig. 7 shows the optical flow before and after the steps described above.
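The four regularization steps can be sketched as below; the array layout (H x W x 2 with NaN marking invalid vectors) and all parameter names are our choices for illustration:

```python
import numpy as np

def neighbour_average(v):
    """3x3 Gaussian-weighted average of the flow vectors, skipping invalid
    (NaN) elements; positions with no valid neighbour stay invalid."""
    weights = np.array([[1.0, 2.0, 1.0], [2.0, 4.0, 2.0], [1.0, 2.0, 1.0]])
    h, w = v.shape[:2]
    out = np.full_like(v, np.nan)
    for i in range(h):
        for j in range(w):
            acc, tot = np.zeros(2), 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w and not np.isnan(v[ii, jj, 0]):
                        acc += weights[di + 1, dj + 1] * v[ii, jj]
                        tot += weights[di + 1, dj + 1]
            if tot > 0:
                out[i, j] = acc / tot
    return out

def regularize(flow, prev_max, accel=3.0, max_dev=2.0):
    """Four-step regularization of a flow field (H x W x 2, NaN = invalid)."""
    v = flow.astype(float).copy()
    norm = np.linalg.norm(v, axis=2)
    v[norm > prev_max + accel] = np.nan        # 1) drop speed outliers
    avg = neighbour_average(v)                 # 2) Gaussian neighbourhood average
    dev = np.linalg.norm(v - avg, axis=2)
    v[dev > max_dev] = np.nan                  # 3) drop locally inconsistent vectors
    return neighbour_average(v)                # 4) recompute the averages
```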
B. Computation of the Distortion Map
The center of rotation is estimated as the weighted average of the positions of all the foreground blocks whose average movement vector is valid (1).
The inter-frame rotation angle (around the center) and the translation vector are then computed, in the least-squares sense, starting from all of the valid average movement vectors, using the Gauss-Newton approach to numerically solve the problem [13].
Fig. 8. Graphical representation of a distortion map (A) and of the corresponding integrated distortion map (B). Blocks with a lighter gray color denote higher distortion values.
If the finger were moving solidly (i.e., as a rigid body), then each movement vector would be coherent with the estimated rotation and translation. Even if the movement is not rigid, the rotation and translation still encode the dominant movement and, for each block, the distortion can be computed as the incoherence of its average movement vector with respect to the dominant movement. In particular, if a movement vector were produced by a solid movement, then its value would be fully determined by the estimated rotation and translation; therefore, the distortion of a block can be defined as the norm of the residual between its average movement vector and the vector predicted by the solid movement.
A distortion map is defined as a block-wise image whose blocks encode the distortion values [Fig. 8(a)].
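The residual computation just described can be sketched as follows, assuming the center, rotation angle, and translation have already been estimated (the function name and argument layout are ours):

```python
import numpy as np

def distortion_map(positions, vectors, center, theta, t):
    """Distortion of each block: the norm of the residual between its averaged
    movement vector and the movement predicted by the dominant solid motion
    (rotation by theta around `center` plus translation `t`)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    residuals = []
    for p, v in zip(positions, vectors):
        solid = rot @ (p - center) - (p - center) + t   # predicted solid movement
        residuals.append(np.linalg.norm(v - solid))
    return np.array(residuals)
```

A block whose vector exactly follows the dominant rotation and translation yields zero distortion; any deviation contributes its residual norm.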
C. Temporal Integration of the Distortion Map
The computation of the distortion map, made on just two consecutive frames, is affected by three problems.
1) The movement vectors are discrete (because of the discrete nature of the images) and, in case of small movements, the loss of accuracy might be significant.
2) Errors in seeking the new position of blocks could lead to a wrong distortion estimation.
3) The measured distortion is proportional to the amount of movement between two frames (and, therefore, depends on the finger speed), without considering previously accumulated/released tension. This makes it difficult to compare a distortion map against the distortion map of another acquisition.
An effective solution to the above problems is to perform a temporal integration of the distortion map, resulting in an integrated distortion map [Fig. 8(b)]. The temporal integration is simply obtained by block-wise summing the current distortion map to the distortion map accumulated in the previous frames. Each integrated distortion element is defined as shown in (5).
The rationale behind the definition is that, if the norm of the average movement vector is smaller than the norm of the estimated solid movement, then the block is moving slower than expected, which means it is accumulating tension (i.e., distortion). Otherwise, if the norm of the average movement vector is larger than the norm of the solid movement, the block is moving faster than expected and is thus slipping on the sensor surface, releasing the tension accumulated.
The integrated distortion map solves most of the previously listed problems: 1) discretization and local estimation errors are no longer serious problems because the integration tends to produce smoothed values; 2) for a given movement trajectory, the integrated distortion map is quite invariant with respect to the finger speed. Fig. 9 shows the integrated distortion maps computed for a given image sequence acquired by rotating a real finger.
D. DistortionCode
Comparing two sequences of integrated distortion maps, both acquired under the same movement trajectory, is the basis of this fake finger detection approach. On the other hand, directly comparing two sequences of integrated distortion maps would be computationally very demanding, and it would be quite difficult to deal with the unavoidable local changes between the two sequences.
To simplify this task, a feature vector (called DistortionCode for its analogy with the FingerCode introduced in [14]) is extracted from each integrated distortion map: circular annuli of increasing radius are centered in the estimated center of rotation and superimposed on the map. For each annulus, a feature is computed as the average of the integrated distortion elements of the blocks falling inside it (Fig. 10), as defined in (6).
The number of annuli and the radius of the smallest annulus are parameters that must be chosen to optimally cover a typical fingerprint, according to the sensor area and resolution.
A DistortionCode is thus obtained from each frame of the sequence.
Fig. 9. Sequence of integrated distortion maps.
Fig. 10. Integrated distortion map with the annuli superimposed. Note that
background blocks are discarded at the beginning of the process (see Section III)
and therefore they are not taken into account in (6).
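The annuli averaging of (6) can be sketched as below; the grid layout, the validity mask, and the parameter names are our choices for illustration:

```python
import numpy as np

def distortion_code(dist_map, valid, center, r, n_annuli=4):
    """Average the integrated-distortion values over n concentric annuli of
    width r centred at `center`; only valid (foreground) blocks contribute."""
    h, w = dist_map.shape
    code = np.zeros(n_annuli)
    count = np.zeros(n_annuli)
    for y in range(h):
        for x in range(w):
            if not valid[y, x]:
                continue                       # background blocks are discarded
            d = np.hypot(x - center[0], y - center[1])
            k = int(d // r)                    # index of the annulus containing the block
            if k < n_annuli:
                code[k] += dist_map[y, x]
                count[k] += 1
    return np.where(count > 0, code / np.maximum(count, 1), 0.0)
```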
The DistortionCodes are invariant to rotation, since the distortion values are averaged over the circular annuli; they are also invariant with respect to the position of the fingerprint in the image (translation), since the annuli are centered in the estimated center of rotation; in any case, it should be noted that translation accuracy is not critical, because of the integrated and global nature of the features adopted.
A DistortionCode sequence is then defined by normalizing the distortion codes (7).
The obtained DistortionCode sequence (Fig. 11) characterizes the distortion of a particular finger under a specific movement. Further sequences from the same finger do not necessarily lead to the same DistortionCode sequence: the overall length might be different, because the user could produce the same trajectory (or a similar trajectory) faster or slower; while a minor rotation accumulates less tension, during a major rotation the finger could slip and the tension could be released in the middle of the sequence. Therefore, a straightforward comparison of DistortionCode sequences is not feasible, and an alignment technique like those introduced in Sections III-E1 and III-E2 is necessary.
E. Distortion Match Function
In order to discriminate a real finger from a fake one, the DistortionCode sequence acquired at verification/identification time (the current sequence) is compared with a reference sequence obtained from a real finger. The reference sequence may be a sequence acquired from the finger of the same user during an enrolment session (similarly to what happens in biometric recognition), or a predefined ideal sequence to be adopted for all users (in this case, the fake-detection system does not require an enrolment stage; see Section IV-C). A distortion match function (DMF) compares the reference and the current sequence and returns a score in the range [0, 1] indicating how similar the current sequence is to the reference sequence (1 means maximum similarity).
A distortion match function must define how to perform the following steps.
Step 1) Calculate the similarity between two DistortionCodes.
Step 2) Align the elements by establishing a correspondence between the DistortionCodes of the two sequences.
Step 3) Measure the similarity between the two aligned sequences.
As to Step 1), a simple Euclidean distance between two DistortionCodes has been adopted, since it is a good metric and also very efficient to compute, the vectors having a very small dimensionality. As to Step 2), two different approaches have been developed:
1) aligning the sequences according to the accumulated inter-frame rotation (Section III-E1);
2) aligning the sequences using dynamic time warping (DTW) [26] (Section III-E2).
In both cases, the result of Step 2) is a new DistortionCode sequence, obtained from the reference sequence during the alignment process with the current one; the new sequence has the same cardinality as the current one, and the final similarity can be simply computed (Step 3) as the average Euclidean distance of corresponding DistortionCodes in the two sequences.
Fig. 11. Sequence of DistortionCodes calculated on the integrated distortion maps in Fig. 9.
Fig. 12. Example of DTW alignment. On the left, the mapping function, which maps each DistortionCode in the current sequence to a DistortionCode in the reference sequence. On the right, a graphical representation of the same mapping: note that the same DistortionCode in the reference sequence can be associated twice or more times (or not associated at all), not only to deal with different lengths but, more in general, to find the optimal alignment.
1) Aligning the Sequences According to the Accumulated Inter-Frame Rotation: Any DistortionCode can be associated with a rotation angle, obtained by accumulating the inter-frame rotation angles (see Section III-B). This approach determines the optimal pairing between the DistortionCodes in the current and reference sequences according to their rotation angles; interpolation is used to deal with discretization effects. The new sequence is obtained by calculating, for each DistortionCode in the current sequence, a new distortion code from the two consecutive DistortionCodes in the reference sequence whose accumulated rotation angles bracket that of the current one. Equation (8) simply estimates the new distortion code as the linear interpolation of the distortion codes corresponding to the two closest rotation angles.
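The interpolation of (8) can be sketched as follows; the function name and the handling of angles outside the reference range are our assumptions:

```python
import numpy as np

def align_by_rotation(ref_codes, ref_angles, cur_angles):
    """For each accumulated rotation angle in the current sequence, linearly
    interpolate between the two reference DistortionCodes whose accumulated
    angles bracket it."""
    ref_codes = np.asarray(ref_codes, float)
    out = []
    for a in cur_angles:
        j = int(np.searchsorted(ref_angles, a))   # first reference angle >= a
        if j == 0:
            out.append(ref_codes[0])              # before the first reference angle
        elif j >= len(ref_angles):
            out.append(ref_codes[-1])             # beyond the last reference angle
        else:
            a0, a1 = ref_angles[j - 1], ref_angles[j]
            w = (a - a0) / (a1 - a0)              # interpolation weight
            out.append((1 - w) * ref_codes[j - 1] + w * ref_codes[j])
    return np.array(out)
```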
2) Aligning the Sequences Using Dynamic Time Warping: The main limitation of the previous alignment approach is that the distortion is not only related to the amount of rotation, but also to the pressure applied while rotating the finger and, more in general, to the movement performed; hence, aligning only on the basis of the rotation angle may not always be a good choice.
An alternative approach for aligning the two DistortionCode sequences is based on DTW [26]. Using DTW with constrained endpoints, a slope constraint of three, and the Euclidean distance as a cost function, each DistortionCode in the current sequence is associated with a DistortionCode in the reference sequence (see Fig. 12). This allows the time dimension of the reference sequence to be warped to obtain the new sequence.
The DTW algorithm aligns the two sequences according to the least expensive path. If the two sequences are similar, the resulting path will have a low total cost and will be quite close to the diagonal path (Fig. 12).
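A minimal DTW sketch with constrained endpoints and Euclidean cost is given below; unlike the paper's version, it imposes no slope constraint, and it returns, for each element of the current sequence, the index of the reference element it is paired with:

```python
import numpy as np

def dtw_align(ref, cur):
    """Basic dynamic time warping between two DistortionCode sequences."""
    n, m = len(ref), len(cur)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(np.asarray(ref[i - 1]) - np.asarray(cur[j - 1]))
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # backtrack from (n, m) to (1, 1) along the cheapest path
    i, j, pairs = n, m, {}
    while i > 0 and j > 0:
        pairs.setdefault(j - 1, i - 1)   # current element j-1 paired with reference i-1
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return [pairs.get(j, 0) for j in range(m)]
```

As the text notes, the same reference DistortionCode may be used more than once (or skipped), which is what lets the warp absorb differences in sequence length and speed.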
3) Computation of the Final Score: Once the new sequence has been obtained (using one of the approaches described above), the final score can be computed as in (9).
The normalization coefficient ensures that the score is always in the range [0, 1]. In fact, for any DistortionCode sequence and for any of its elements, two constraints, (10) and (11), can easily be proved: constraint (10) follows directly from the definition of a DistortionCode sequence (7), and constraint (11) from the definitions of the integrated distortion map (5) and of the DistortionCode (6).
It is worth noting that the transformations performed to obtain the new sequence do not violate the two constraints in either of the proposed approaches, since:
1) in the first one, each new element is a convex combination of two reference DistortionCodes, thus (10) is guaranteed by the triangular inequality and (11) by the definition of the interpolation;
2) in the DTW approach, each new element is a copy of a reference DistortionCode, thus (10) and (11) are trivially verified.
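One way to realize the final score of (9) is sketched below; the 1/2 normalization coefficient is our reading of constraints (10) and (11), under the assumption that each DistortionCode has norm at most 1, so that every pairwise distance is at most 2:

```python
import numpy as np

def final_score(ref_aligned, cur):
    """Final similarity between the aligned reference sequence and the current
    sequence: one minus the normalized average Euclidean distance between
    corresponding DistortionCodes (1 means maximum similarity)."""
    ref_aligned = np.asarray(ref_aligned, float)
    cur = np.asarray(cur, float)
    dists = np.linalg.norm(ref_aligned - cur, axis=1)
    return 1.0 - 0.5 * float(dists.mean())
```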
A. Measuring Fake Detection Errors
A fingerprint scanner that embeds a fake-finger detection mechanism has to decide, for each transaction, whether the current sample comes from a real finger or from a fake one. This decision will unavoidably be affected by errors, which should be as low as possible: in particular, a scanner could reject real fingers and/or accept fake fingers, independently of the user's identity.
In the rest of this section, we assume a system operating in verification mode. Let the fake acceptance rate be the proportion of fake-finger transactions where the system incorrectly considered the input to come from a real finger, and let the fake rejection rate be the proportion of real-finger transactions where the system incorrectly considered the input to come from a fake sample. These rates must not be confused with the identity verification errors (FAR and FRR) typical of any biometric system. Under the simplifying hypothesis of no correlation between the two classes of errors (fake detection errors and identity verification errors), and assuming the identity verification performance is not significantly decreased by the fake-detection mechanism, the overall FRR can be estimated as the probability that either the fake detector or the identity verifier rejects the transaction (for an authorized user trying to be authenticated normally using the real enrolled finger).
Depending on the hypotheses (real or fake finger, enrolled or nonenrolled fingerprint) under which the transaction is performed, the overall FAR can be estimated as:
1) the probability that the fake detector accepts the real finger and the identity verifier falsely matches it (for an attacker trying to be authenticated using a real finger different from the enrolled one);
2) the probability that the fake detector accepts the fake and the identity verifier falsely matches it (for an attacker trying to be authenticated using a fake reproduction of a finger which is not the enrolled one);
3) the probability that the fake detector accepts the fake and the identity verifier matches it with the enrolled template (for an attacker trying to be authenticated using a fake reproduction of the enrolled finger); the latter matching probability is lower than for the real enrolled finger since, even if a fake fingerprint is created by using professional equipment, its quality is usually lower than that of the real finger it is designed to imitate, and therefore the chance that the identity verification algorithm does not match it with the user's real template is higher.
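Under the independence assumption above, the combinations for a genuine user and for the first two attack cases can be sketched as follows; the function name is ours, and the third case (a fake of the enrolled finger) is omitted because it requires its own, lower, match probability:

```python
def overall_rates(frr_fd, far_fd, frr_id, far_id):
    """Combine fake-detection error rates (frr_fd, far_fd) with identity
    verification error rates (frr_id, far_id), assuming independence."""
    frr = frr_fd + (1 - frr_fd) * frr_id   # genuine user, real enrolled finger
    far_real = (1 - frr_fd) * far_id       # attacker, real nonenrolled finger
    far_fake = far_fd * far_id             # attacker, fake of a nonenrolled finger
    return frr, far_real, far_fake
```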
Actually, the two classes of errors (fake detection errors and identity verification errors) could be correlated in some cases: for instance, a low-quality finger may determine both a high fake rejection rate (due, for example, to the difficulty of calculating the correct optical flow) and a high FRR (due to the small number of minutiae that can be reliably found in its fingerprint images). It should also be considered that the adoption of a fake-detection approach may affect the performance of the identity verification system: for instance, due to the need of measuring specific features for fake detection, it could be more difficult to acquire good-quality images, thus increasing the FRR (e.g., in the case of fingerprint distortion, due to the need of producing distorted images, it could be more difficult to acquire good-quality images, not affected by distortion, at the beginning of the image sequence). Anyway, studying such correlation is beyond the scope of this work and will be better investigated in the future.
The experiments carried out in this study consider only fake-detection errors (FRR_fake and FAR_fake), to avoid reporting performance indicators that depend on the identity-verification accuracy of a specific biometric algorithm. There is obviously a strict trade-off between FRR_fake and FAR_fake: both are functions of a fake-detection threshold t. FAR_fake also depends on how skilled the attacker is, which technologies the attacker is able to implement, how much time and money (s)he can invest, etc. In the experimentation performed in this work we assumed that:
- the attackers were experts of the application domain and skilled in manufacturing fake fingers (the fake fingers manufactured in our tests were made by people with 24-month experience);
- attacks were carried out using some known methods (e.g., fake fingers made of silicone, gelatin, and other commercially available materials);
- the attackers were aware of the particular fake-detection technique adopted and did their best to defeat it (in our tests, fake fingers were created trying to emulate as much as possible the human skin deformation);
- attacks had to be performed in a short time and without live feedback from the device.
In Sections IV-B and IV-C, the FRR_fake and FAR_fake errors measured in the experimentation are reported, together with the EER_fake (the value such that FRR_fake = FAR_fake).
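As a sketch of how FRR_fake(t), FAR_fake(t), and the EER_fake can be estimated from a set of fake-detection scores (assuming, as a convention of ours and not of the paper, that higher scores mean "more likely a real finger"):

```python
import numpy as np

def fake_detection_eer(real_scores, fake_scores):
    """Sweep the fake-detection threshold t over all observed scores and
    return (t, EER) at the point where FRR_fake(t) and FAR_fake(t) are
    closest. Assumes higher scores indicate a more likely real finger."""
    real = np.asarray(real_scores, dtype=float)
    fake = np.asarray(fake_scores, dtype=float)
    best_t, best_gap, eer = None, np.inf, None
    for t in np.unique(np.concatenate([real, fake])):
        frr = float(np.mean(real < t))    # real fingers rejected as fake
        far = float(np.mean(fake >= t))   # fake fingers accepted as real
        if abs(frr - far) < best_gap:
            best_t, best_gap, eer = t, abs(frr - far), (frr + far) / 2.0
    return best_t, eer

# Toy, well-separated scores: the EER is zero.
t, eer = fake_detection_eer([0.9, 0.8, 0.7, 0.6], [0.1, 0.2, 0.3, 0.4])
```

This is only a standard threshold sweep, included to make the FRR_fake/FAR_fake trade-off concrete; the actual score distributions depend on the matcher of Section III-E.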
B. Database
In order to evaluate the proposed approach, a database of image sequences was collected using a prototype fingerprint scanner by Biometrika. No publicly available benchmark database could be used, due to the specific requirements of the fake-detection algorithm (each sample must consist of a sequence of images acquired by a scanner while the user is rotating her/his finger and producing distortion, and samples from both real and fake fingers acquired by the same device must be available). The database was collected at the Biometric System Laboratory of the University of Bologna acquiring, from each of 45 volunteers, two fingers (thumb and forefinger of the right hand); 10 image sequences were recorded for each finger. 40 fake fingers were manufactured (10 made of RTV silicone, 10 of gelatin, 10 of latex, and 10 of wood glue). Instead of making whole 3D fake fingers, we manufactured just thin layers reproducing the fingertips (see Fig. 1 for some sample pictures): this allowed us to better imitate genuine finger movements when trying to attack the system. For each fake finger, 10 image sequences were recorded. The prototype scanner produces 400 × 560 fingerprint images at 569 DPI and captures images at 20 fps. In Fig. 13 and in Fig. 14 some sample fingerprint images are shown.
The volunteers received a brief training before the first acquisition. Sequences having a total finger rotation angle below a minimum threshold were discarded, and the user was asked to repeat the acquisition; no other quality check was adopted during the collection of the data (for instance, ensuring that a minimum amount of distortion was produced in the sequence).
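The dataset sizes described above can be double-checked with a few lines of arithmetic:

```python
# Sanity check of the dataset composition described in Section IV-B.
volunteers = 45
fingers_per_volunteer = 2            # thumb and forefinger of the right hand
sequences_per_finger = 10

real_fingers = volunteers * fingers_per_volunteer      # 90 real fingers
real_sequences = real_fingers * sequences_per_finger   # 900 real sequences

fake_fingers = 4 * 10                # 4 materials, 10 fakes each
fake_sequences = fake_fingers * 10   # 400 fake sequences

print(real_fingers, real_sequences, fake_fingers, fake_sequences)
# prints: 90 900 40 400
```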
Fig. 13. Sample images from the database of sequences: a real finger and four fake fingers (the first image of each sequence is shown).
Fig. 14. Some images from two sequences in the database: a real finger (top row) and a fake finger (bottom row).
The acquisition of the image sequences from the fake fingers was performed by experts, trying to emulate as much as possible the deformation of the skin in real fingers and choosing the optimal conditions for each material; for instance, image sequences from fakes made of gelatin were acquired an hour after their creation, when their elasticity is similar to that of human skin, and not later, when they become rigid and easier to discriminate from real fingers.
The parameters of the approach (see Table I) were adjusted on a totally disjoint dataset that was collected using a different acquisition sensor (see [1]). The only different parameter is the block size, which here was set to 16 × 16 pixels to increase the processing speed.
C. Results
As introduced in Section III-E, the fake-detection approach proposed here may be used in two different modalities:
- per-user reference sequence: for each user, during an enrollment stage, a sequence of frames is acquired from the selected finger, and the corresponding DistortionCode sequence is calculated and stored as the reference sequence for that user (similar to what happens with the fingerprint template to be used in a biometric recognition system);
- predefined reference sequence: a single reference sequence is adopted for all of the users and no enrollment stage is required for the fake-detection system.
Both of these operating modalities were evaluated using the same test set described in the previous section.
In the per-user reference sequence modality, the following transactions were performed on the test set:
- 4050 genuine attempts (each sequence was matched against the remaining sequences of the same finger, excluding the symmetric matches to avoid correlation, thus performing 45 attempts for each of the 90 real fingers);
- 36 000 impostor attempts (each of the 400 fake sequences was matched against the first sequence of each real finger).
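The transaction counts above follow directly from the database composition; a quick sanity check:

```python
from math import comb

real_fingers = 90
sequences_per_finger = 10
fake_sequences = 400

# Genuine attempts: each unordered pair of sequences of the same finger,
# with symmetric matches excluded -> C(10, 2) = 45 pairs per finger.
genuine_attempts = real_fingers * comb(sequences_per_finger, 2)

# Impostor attempts: every fake sequence against the first sequence
# of every real finger.
impostor_attempts = fake_sequences * real_fingers

print(genuine_attempts, impostor_attempts)  # prints: 4050 36000
```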
Fig. 15. Integrated distortion maps from the predefined reference sequence used in the experimentation; it is worth noting that the shape of the deformed region is almost elliptical and distortion is mainly confined to an elliptical annulus around the center of rotation, as discussed in [8].
Note that, since only fake-detection performance was evaluated (not combined with identity verification), and considering that the proposed approach is based on the elastic properties of real/fake fingers and not on the ridge-line pattern, it is not necessary that a fake finger corresponding to the real finger is used in the impostor attempts: any fake finger can be matched against any real finger without significantly affecting the results.
In the predefined reference sequence modality, a sequence acquired from a well-trained user (not included in the test database) was selected as the predefined sequence (Fig. 15) and the following transactions were performed on the test set:
- 900 genuine attempts (each sequence was matched against the reference sequence, thus performing 10 attempts for each of the 90 real fingers);
- 400 impostor attempts (each of the 400 fake sequences was matched against the reference sequence).
Table II reports the EER_fake obtained for the two alignment approaches (Sections III-E1 and III-E2) in the two modalities, respectively; Fig. 16 compares the ROC graphs.
An error analysis was performed by visually inspecting the 100 real-finger sequences that obtained the lowest scores in the predefined reference sequence modality with the DTW alignment. In Table III, each sequence is labeled according to the most evident error cause: 70% of the errors were due to an incorrect movement (e.g., moving the finger in a nonuniform way, or translating instead of rotating) or a too-fast movement.
Table IV analyzes the distribution of false rejection errors among the different users; since 10 sequences were acquired from two fingers of each user, the maximum number of errors for each user is 20. It is worth noting that all of the users were able to provide good sequences with both fingers (only one user had more than 10 errors among the 100 examined: 8 with the first finger and 4 with the second).
On a Pentium IV PC at 3.2 GHz, the feature extraction takes about 100 ms for each frame: the most demanding step (80% of the feature extraction time) is the correlation, whose complexity, in the worst case, is proportional to the square of the number of foreground pixels in the image. However, thanks to an MMX optimization of the correlation routine, an efficient implementation has been achieved. The matching step proved to be very efficient: the average time is less than 1 ms for both the alignment approaches. The average transaction time is about two seconds, including acquisition, feature extraction, and matching.
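To illustrate why the correlation step dominates the cost, the following is a minimal block-matching sketch by normalized cross-correlation; it is an illustration only (`best_block_match` and its parameters are hypothetical and do not reproduce the paper's MMX-optimized routine), but it shows how the work grows with the number of foreground blocks examined times the search-window size:

```python
import numpy as np

def best_block_match(prev, curr, y, x, block=16, search=8):
    """Find the displacement of the block x block patch of `prev` at (y, x)
    within a +/-search window of `curr`, by normalized cross-correlation.
    Illustrative sketch only, not the paper's optimized implementation."""
    patch = prev[y:y + block, x:x + block].astype(float)
    patch = patch - patch.mean()
    best_score, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > curr.shape[0] \
                    or xx + block > curr.shape[1]:
                continue  # candidate window falls outside the image
            cand = curr[yy:yy + block, xx:xx + block].astype(float)
            cand = cand - cand.mean()
            denom = np.linalg.norm(patch) * np.linalg.norm(cand)
            score = (patch * cand).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx, best_score

# A synthetic frame shifted by (2, 3) should be recovered exactly.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(np.roll(img, 2, axis=0), 3, axis=1)
dy, dx, score = best_block_match(img, shifted, 24, 24, block=16, search=4)
```

Running this over every 16 × 16 foreground block of each frame pair makes clear why the correlation step accounts for most of the per-frame time.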
V. CONCLUSIONS
Attacks on fingerprint-based biometric systems using fake reproductions of the finger may be a serious threat, in particular for non-supervised access control applications and remote authentication applications.
Fig. 16. ROC graphs of the two alignment approaches: per-user reference sequence modality (on the left) and predefined reference sequence modality (on the right).
This work introduced a fake finger detection approach based on skin elasticity: novel techniques for extracting skin distortion information, for encoding it as DistortionCodes, and for normalizing and comparing DistortionCode sequences have been formally defined and experimentally evaluated over a test set of real and fake fingers. Two different operating modalities have been proposed: the former (per-user reference sequence), where the user is required to perform an enrollment before using the system; the latter (predefined reference sequence), where no enrollment is required for the fake detection (obviously, enrollment is still necessary for fingerprint recognition).
Contrary to what one may expect, the performance of the predefined modality was better than that of the per-user modality. The analysis of the main error causes for both the modalities suggested that this behavior could probably be ascribed to the following factors:
- The reference DistortionCode sequence (to which all the current sequences were compared) was obtained from a well-trained user with a uniform and smooth movement, resulting in a sequence that was able to correctly represent most real-finger distortions and was very difficult to emulate using fake fingers.
- During the database collection, the volunteers received only a quick training and no specific quality-control measure was enforced (except the minimum amount of finger rotation, see Section IV-B). For this reason, a good portion of the users did not produce enough distortion, and their corresponding DistortionCode sequences, when used as the reference sequence in the per-user modality, were not sufficiently dissimilar from the fake-finger sequences.
It is also worth noting that, in the per-user modality, the inter-frame rotation angle alignment approach achieved better results than the DTW-based one. This may be explained by considering that, if on the one hand DTW is more flexible in adapting to a given reference sequence (potentially decreasing FRR_fake), on the other hand, if no minimum quality is enforced for the reference sequences, the greater flexibility is likely to increase FAR_fake.
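For reference, the DTW alignment discussed above can be sketched in its textbook form; this is the classic dynamic-programming recurrence [26], not the paper's exact formulation, and the per-frame distance is an assumed Euclidean norm between feature vectors:

```python
import numpy as np

def dtw_distance(seq_a, seq_b,
                 dist=lambda a, b: float(np.linalg.norm(a - b))):
    """Classic dynamic-time-warping distance between two sequences of
    feature vectors. Generic sketch; the paper's DistortionCode distance
    may differ."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            # Extend the cheapest of the three admissible predecessors.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# A sequence matches itself, and a time-stretched copy, with zero cost:
# exactly the flexibility that can lower FRR_fake but also raise FAR_fake
# when reference-sequence quality is not controlled.
a = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
b = [np.array([0.0]), np.array([0.0]), np.array([1.0]), np.array([2.0])]
```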
We may conclude that the predefined modality, besides being simpler to deploy in a final system, achieves better results with non-habituated users; on the other hand, the performance of the per-user modality may be increased if users are well trained and habituated.
We believe the experimental results are very promising; in fact, although the system did make errors (the best EER_fake achieved was 11.24%), we must underline that what we measured in our experimentation was not the robustness with respect to zero-effort attempts, but the robustness to attacks carried out by experts that were aware of the specific fake-detection technology and did their best to emulate the human skin deformation. The same experts achieved a very high success rate (comparable to those reported in [19]) in spoofing all the commercial devices they tested [5], including devices with specific fake-finger countermeasures. However, it should be pointed out that the proposed system, as any other fake-detection mechanism, trades usability for security: for some large-scale low-security applications it may not be worth adopting a fake-finger detection system, while, for other high-security applications, the fake-detection operating threshold may be adjusted to meet the given constraints.
Although in the experiments performed we were not able to find a way to make the proposed system ineffective, as for any other similar system it cannot be totally excluded that someone might find a combination of techniques and materials that significantly decreases its efficacy. Combining this fake-detection system with other methods based on uncorrelated features (e.g., impedance, odor [2]) could make the resulting system even more robust.
The proposed fake-detection approach is not privacy invasive, since it does not collect any information (such as, for instance, blood pulsation or blood pressure) that may reveal medical diseases; it has the further advantage of not requiring expensive additional hardware, provided that the fingerprint scanner is able to acquire images at a proper frame rate.
Future work will be mainly dedicated to:
- implementation and evaluation of alternative alignment techniques for the DistortionCode sequences;
- experimentation on a larger user population;
- implementation of quality-control measures for the enrollment stage in the per-user modality;
- better understanding the relation between fake-detection errors and identity-verification errors.
While this paper is being written, a usability study is being conducted by Prof. Bente's team at the University of Cologne, where the Biometrika fingerprint scanner equipped with our fake-detection approach is being tested outside of laboratory environments. The feedback from that experimentation will help to improve the approach introduced here, thanks to the complementary information that a user-centered perspective may provide.
ACKNOWLEDGMENT
The authors would like to thank G. Alboni from Biometrika (Italy) and J.-F. Mainguet from Atmel (France) for their fruitful cooperation on the fake-finger detection topic within the scope of the BioSec project.
REFERENCES
[1] A. Antonelli, R. Cappelli, D. Maio, and D. Maltoni, "A new approach to fake finger detection based on skin distortion," in Proc. Int. Conf. Biometric Authentication, Hong Kong, China, Jan. 2006.
[2] D. Baldisserra, A. Franco, D. Maio, and D. Maltoni, "Fake fingerprint detection by odor analysis," in Proc. Int. Conf. Biometric Authentication, Hong Kong, China, Jan. 2006.
[3] A. M. Bazen and S. Gerez, "Fingerprint matching by thin-plate spline modeling of elastic deformations," Pattern Recognit., vol. 36, no. 8, pp. 1859-1867, Aug. 2003.
[4] S. S. Beauchemin and J. L. Barron, "The computation of optical flow," ACM Comput. Surv., vol. 27, no. 3, pp. 433-467, 1995.
[5] BioSec European Research Project, FP6 IST-2002-001766 [Online].
[6] J. Blommé, "Evaluation of Biometric Security Systems Against Artificial Fingers," M.S. thesis, Linköping Univ., Linköping, Sweden, 2003.
[7] K. Brownlee, "Method and Apparatus for Distinguishing a Human Finger From a Reproduction of a Fingerprint," U.S. Patent 6 292 576.
[8] R. Cappelli, D. Maio, and D. Maltoni, "Modelling plastic distortion in fingerprint images," in Proc. 2nd Int. Conf. Advances in Pattern Recognition (ICAPR2001), Rio de Janeiro, Brazil, Mar. 2001, pp. 369-376.
[9] Y. Chen and A. Jain, "Fingerprint deformation for spoof detection," in Proc. Biometrics Symp., Crystal City, VA, Sep. 19-21, 2005.
[10] R. Derakhshani, S. A. C. Schuckers, L. A. Hornak, and L. O'Gorman, "Determination of vitality from a non-invasive biomedical measurement for use in fingerprint scanners," Pattern Recognit., vol. 36, pp. 383-396, 2003.
[11] C. Dorai, N. K. Ratha, and R. M. Bolle, "Dynamic behavior analysis in compressed fingerprint videos," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 58-73, Jan. 2004.
[12] R. A. Freeman, The Red Thumb Mark. London, U.K.: Collingwood.
[13] M. T. Heath, Scientific Computing: An Introductory Survey, 2nd ed. New York: McGraw-Hill, 2002.
[14] A. K. Jain, S. Prabhakar, and L. Hong, "A multichannel approach to fingerprint classification," IEEE Trans. Pattern Anal. Machine Intell., vol. 21, no. 4, pp. 348-359, Apr. 1999.
[15] P. Kallo, I. Kiss, A. Podmaniczky, and J. Talosi, "Detector for Recognizing the Living Character of a Finger in a Fingerprint Recognizing Apparatus," U.S. Patent 6 175 641, Jan. 16, 2001.
[16] H. Kang, B. Lee, H. Kim, D. Shin, and J. Kim, "A study on performance evaluation of the liveness detection for various fingerprint sensor modules," in Proc. KES, 2003, pp. 1245-1253.
[17] P. D. Lapsley, J. A. Less, D. F. Pare, Jr., and N. Hoffman, "Anti-Fraud Biometric Sensor That Accurately Detects Blood Flow," U.S. Patent 5 737 439, Apr. 7, 1998.
[18] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition. New York: Springer, 2003.
[19] T. Matsumoto, H. Matsumoto, K. Yamada, and S. Hoshino, "Impact of artificial 'gummy' fingers on fingerprint systems," Proc. SPIE, vol. 4677, Jan. 2002.
[20] Y. S. Moon, J. S. Chen, K. C. Chan, K. So, and K. C. Woo, "Wavelet based fingerprint liveness detection," Electron. Lett., vol. 41, no. 20, pp. 1112-1113, 2005.
[21] K. Nixon, "Novel spectroscopy-based technology for biometric and liveness verification," Proc. SPIE, vol. 5404, 2004.
[22] D. Maltoni and A. K. Jain, Eds., Proc. Biometric Authentication ECCV Int. Workshop (Lect. Notes Comput. Sci., vol. 3087). Prague, Czech Republic: Springer, May 15, 2004.
[23] D. Osten, H. M. Carim, M. R. Arneson, and B. L. Blan, "Biometric, Personal Authentication System," U.S. Patent 5 719 950, Feb. 17, 1998.
[24] S. T. V. Parthasaradhi, R. Derakhshani, L. A. Hornak, and S. A. C. Schuckers, "Time-series detection of perspiration as a liveness test in fingerprint devices," IEEE Trans. Syst., Man, Cybern. C, vol. 35, no. 3, pp. 335-343, Aug. 2005.
[25] T. Putte and J. Keuning, "Biometrical fingerprint recognition: Don't get your fingers burned," in Proc. IFIP TC8/WG8.8, 4th Working Conf. Smart Card Research and Adv. App., 2000, pp. 289-303.
[26] L. Rabiner and B. H. Juang, Fundamentals of Speech Recognition. Englewood Cliffs, NJ: Prentice-Hall, 1993.
[27] N. K. Ratha and R. M. Bolle, "Effect of controlled acquisition on fingerprint matching," in Proc. Int. Conf. Pattern Recognit., 1998, vol. 2, pp. 1659-1661.
[28] N. K. Ratha, J. H. Connell, and R. M. Bolle, "Enhancing security and privacy in biometrics-based authentication systems," IBM Syst. J., vol. 40, no. 3, pp. 614-634, 2001.
[29] ——, "An analysis of minutiae matching strength," in Proc. 3rd Int. Conf. Audio- and Video-Based Biometric Person Authentication, 2001, pp. 223-228.
[30] A. Ross, S. C. Dass, and A. K. Jain, "Fingerprint warping using ridge curve correspondences," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 1, pp. 19-30, Jan. 2006.
[31] S. Schuckers, "Spoofing and anti-spoofing measures," Inform. Security Tech. Rep., vol. 7, no. 4, pp. 56-62, 2002.
[32] D. R. Setlak, "Fingerprint Sensor Having Spoof Reduction Features and Related Methods," U.S. Patent 5 953 441, 1999.
[33] L. Thalheim and J. Krissler, "Body check: Biometric access protection devices and their programs put to the test," c't Mag., Nov. 2002.
[34] U. Uludag and A. K. Jain, "Attacks on biometric systems: A case study in fingerprints," in Proc. SPIE, vol. 5306, Security, Steganography, and Watermarking of Multimedia Contents VI, E. J. Delp, III and P. W. Wong, Eds., Jun. 2004, pp. 622-633.
[35] Diamonds Are Forever (Film), 1971.
Athos Antonelli received the Laurea degree in computer science from the University of Bologna, Bologna, Italy, in 1998.
From 1998 to 2002, he led the research team of CT Software, Cesena, Italy, for sonar data processing and underwater image analysis. From 2003 to 2005, he was a Research Fellow with the Biometric System Laboratory, University of Bologna, where he studied new fake-detection approaches within the BioSec European Project. Currently, he is with Biometrika srl, Forlì, Italy, where he leads the development of innovative algorithms for biometric solutions.
Raffaele Cappelli received the Laurea degree (Hons.) in computer science from the University of Bologna, Bologna, Italy, in 1998, and the Ph.D. degree in computer science and electronic engineering from the University of Bologna in 2002.
Currently, he is an Associate Researcher at the University of Bologna and a member of the Biometric System Laboratory, University of Bologna. His research interests include pattern recognition, image retrieval by similarity, and biometric systems (fingerprint classification and recognition, synthetic fingerprint generation, fingerprint analysis, face recognition, and performance evaluation methodologies).
Dario Maio is a Full Professor at the University of Bologna, Bologna, Italy. He is Chair of the Cesena Campus and Director of the Biometric System Laboratory, Cesena. He has published many papers in numerous fields, including distributed computer systems, computer performance evaluation, database design, information systems, neural networks, autonomous agents, and biometric systems. He is author of the books Biometric Systems, Technology, Design and Performance Evaluation (Springer, 2005) and The Handbook of Fingerprint Recognition (Springer, 2003), which received the PSP award from the Association of American Publishers. Before joining the University of Bologna, he received a fellowship from the Italian National Research Council (C.N.R.) for working on the air-traffic-control project. He is with DEIS and IEIIT-C.N.R., where he teaches database and information systems.
Davide Maltoni (M'05) is an Associate Professor with the Department of Electronics, Informatics, and Systems, University of Bologna, Bologna, Italy. He teaches computer architectures and pattern recognition in the Computer Science Department, University of Bologna, Cesena. His research interests are in the area of pattern recognition and computer vision. In particular, he is active in the field of biometric systems (fingerprint recognition, face recognition, hand recognition, performance evaluation of biometric systems). He is co-director of the Biometric System Laboratory, Cesena, which is internationally known for its research and publications in the field. He is author of two books, Biometric Systems, Technology, Design and Performance Evaluation (Springer, 2005) and The Handbook of Fingerprint Recognition (Springer, 2003), which received the PSP award from the Association of American Publishers.
Dr. Maltoni is an Associate Editor of the IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY and Pattern Recognition.
... Traditional methods distinguish spoof from live fingerprints by extracting hand-crafted features such as anatomical features (e.g. the locations and distribution of core), physiological features (e.g. perspiration and ridge distortion), and texture based features, from the fingerprint images [26][27][28][29]. For example, encoding skin distortion information, Fractional Fourier transforms and curvelet transform-based method are all effective methods adopted to extract hand-crafted features to discriminate live and spoof fingerprints [28,30,31]. ...
... perspiration and ridge distortion), and texture based features, from the fingerprint images [26][27][28][29]. For example, encoding skin distortion information, Fractional Fourier transforms and curvelet transform-based method are all effective methods adopted to extract hand-crafted features to discriminate live and spoof fingerprints [28,30,31]. However, these methods are sensitive to "noise", so they have a poor generalization performance [32]. ...
Due to the diversity of attack materials, fingerprint recognition systems (AFRSs) are vulnerable to malicious attacks. It is of great importance to propose effective Fingerprint Presentation Attack Detection (PAD) methods for the safety and reliability of AFRSs. However, current PAD methods often have poor robustness under new attack materials or sensor settings. This paper thus proposes a novel Channel-wise Feature Denoising fingerprint PAD (CFD-PAD) method by considering handling the redundant "noise" information which ignored in previous works. The proposed method learned important features of fingerprint images by weighting the importance of each channel and finding those discriminative channels and "noise" channels. Then, the propagation of "noise" channels is suppressed in the feature map to reduce interference. Specifically, a PA-Adaption loss is designed to constrain the feature distribution so as to make the feature distribution of live fingerprints more aggregate and spoof fingerprints more disperse. Our experimental results evaluated on LivDet 2017 showed that our proposed CFD-PAD can achieve 2.53% ACE and 93.83% True Detection Rate when the False Detection Rate equals to 1.0% (TDR@FDR=1%) and it outperforms the best single model based methods in terms of ACE (2.53% vs. 4.56%) and TDR@FDR=1%(93.83% vs. 73.32\%) significantly, which proves the effectiveness of the proposed method. Although we have achieved a comparable result compared with the state-of-the-art multiple model based method, there still achieves an increase of TDR@FDR=1% from 91.19% to 93.83% by our method. Besides, our model is simpler, lighter and, more efficient and has achieved a 74.76% reduction in time-consuming compared with the state-of-the-art multiple model based method. Code will be publicly available.
... Other hardware-based solutions, such as the capture means of biological signals of life, like blood flow and pulse rate detection [45] and electrocardiogram (ECG) [46] or electroencephalogram (EEG) signals [47] , are also discussed in the literature. However, all the biological signals either require expensive capture equipment or in some cases [48] may add a time delay to the user authentication process. ...
... Hardwarebased approaches rely on the detection of signals that confirm that the subject of the recognition process is a genuine one. Although hardware-based approaches present higher performance and reliability, they are intrusive and require extra capturing hardware, added to the sensor of the fingerprint recognition scheme, which comes at great expense and in some cases adds a time delay on the verification process [48] . These are mostly the reasons that a relatively small number of hardware-based solutions, in contrast to software-based methods, can be found in the literature. ...
Full-text available
Nowadays, the number of people that utilize either digital applications or machines is increasing exponentially. Therefore, trustworthy verification schemes are required to ensure security and to authenticate the identity of an individual. Since traditional passwords have become more vulnerable to attack, the need to adopt new verification schemes is now compulsory. Biometric traits have gained significant interest in this area in recent years due to their uniqueness, ease of use and development, user convenience and security. Biometric traits cannot be borrowed, stolen or forgotten like traditional passwords or RFID cards. Fingerprints represent one of the most utilized biometric factors. In contrast to popular opinion, fingerprint recognition is not an inviolable technique. Given that biometric authentication systems are now widely employed, fingerprint presentation attack detection has become crucial. In this review, we investigate fingerprint presentation attack detection by highlighting the recent advances in this field and addressing all the disadvantages of the utilization of fingerprints as a biometric authentication factor. Both hardware- and software-based state-of-the-art methods are thoroughly presented and analyzed for identifying real fingerprints from artificial ones to help researchers to design securer biometric systems.
... Initial studies in fingerprint PAD using the dynamic distortion followed the conclusions of Cappelli [33]. A systematic study on skin distortion was conducted to analyze the distortion caused by the elasticity of human skin [34]. Based on the research observations, the experiment initially suggests that genuine fingerprints and PAIs cause different distortions since artificial fingerprints are more rigid, consequently cause lower distortion compared to genuine fingerprints. ...
... Moreover, previous studies had attempted to define some characteristics of different PAI species. For instance, Antonelli et al. [34] had studied five PAI species and concluded that artificial artefacts are more rigid than genuine fingerprints. Thus, the produced distortion while rotating and pressuring the finger during a presentation is higher for genuine users. ...
Full-text available
Fingerprint recognition systems have been widely deployed in authentication and verification applications, ranging from personal smartphones to border control systems. Recently, the biometric society has raised concerns about presentation attacks that aim to manipulate the biometric system’s final decision by presenting artificial fingerprint traits to the sensor. In this paper, we propose a presentation attack detection scheme that exploits the natural fingerprint phenomena, and analyzes the dynamic variation of a fingerprint’s impression when the user applies additional pressure during the presentation. For that purpose, we collected a novel dynamic dataset with an instructed acquisition scenario. Two sensing technologies are used in the data collection, thermal and optical. Additionally, we collected attack presentations using seven presentation attack instrument species considering the same acquisition circumstances. The proposed mechanism is evaluated following the directives of the standard ISO/IEC 30107. The comparison between ordinary and pressure presentations shows higher accuracy and generalizability for the latter. The proposed approach demonstrates efficient capability of detecting presentation attacks with low BPCER where BPCER is 0% for an optical sensor and 1.66% for a thermal sensor at 5% APCER for both.
... On the one hand, traditional AFRSs have low tolerance to poor-quality images, such as worn-out fingerprints and ultrawet/ dry fingers, which will badly affect the recognition accuracy. On the other hand, presentation attacks (PAs) are bringing a raising security problem to AFRSs, which caused concerns about the reliability of such systems [3][4][5][6]. Even spoof fingers made from very low cost materials [7] can easily attack those AFRSs [8]. ...
The technology of optical coherence tomography (OCT) to fingerprint imaging opens up a new research potential for fingerprint recognition owing to its ability to capture depth information of the skin layers. Developing robust and high security Automated Fingerprint Recognition Systems (AFRSs) are possible if the depth information can be fully utilized. However, in existing studies, Presentation Attack Detection (PAD) and subsurface fingerprint reconstruction based on depth information are treated as two independent branches, resulting in high computation and complexity of AFRS building.Thus, this paper proposes a uniform representation model for OCT-based fingerprint PAD and subsurface fingerprint reconstruction. Firstly, we design a novel semantic segmentation network which only trained by real finger slices of OCT-based fingerprints to extract multiple subsurface structures from those slices (also known as B-scans). The latent codes derived from the network are directly used to effectively detect the PA since they contain abundant subsurface biological information, which is independent with PA materials and has strong robustness for unknown PAs. Meanwhile, the segmented subsurface structures are adopted to reconstruct multiple subsurface 2D fingerprints. Recognition can be easily achieved by using existing mature technologies based on traditional 2D fingerprints. Extensive experiments are carried on our own established database, which is the largest public OCT-based fingerprint database with 2449 volumes. In PAD task, our method can improve 0.33% Acc from the state-of-the-art method. For reconstruction performance, our method achieves the best performance with 0.834 mIOU and 0.937 PA. By comparing with the recognition performance on surface 2D fingerprints, the effectiveness of our proposed method on high quality subsurface fingerprint reconstruction is further proved.
... Behavioral biometrics are a relatively new type of biometrics, which refer to the inherent dynamic behavioral patterns of human motions, such as gaits [37], voices [38], keystroke dynamics [39], and finger gestures [40]. However, due to the advanced mobile recording techniques (e.g., visual and acoustic), 3D printing and robotics, the physiological and behavioral biometrics are both under a high risk to be obtained by an adversary [4], [41], [42], [43]. Furthermore, the biometrics' static nature makes them easy to be reused by an adversary for replay attacks. ...
Recently, with the widespread application of mobile communication devices, fingerprint identification is the most prevalent in all types of mobile computing. While they bring a huge convenience to our lives, the resulting security and privacy issues have caused widespread concern. Fraudulent attack using forged fingerprint is one of the typical attacks to realize illegal intrusion. Thus, fingerprint liveness detection (FLD) for True or Fake fingerprints is very essential. This paper proposes a novel fingerprint liveness detection method based on broad learning with uniform local binary pattern (ULBP). Compared to convolutional neural networks (CNN), training time is drastically reduced. Firstly, the region of interest of the fingerprint image is extracted to remove redundant information. Secondly, texture features in fingerprint images are extracted via ULBP descriptors as the input to the broad learning system (BLS). ULBP reduces the variety of binary patterns of fingerprint features without losing any key information. Finally, the extracted features are fed into the BLS for training. The BLS is a flat network, which transfers and places the original input as a mapped feature in feature nodes, generalizing the structure in augmentation nodes. Experiments show that in Livdet 2011 and Livdet 2013 datasets, the average training time is about 1 s and the performance of identifying real and fake fingerprints is effect. Compared to other advanced models, our method is faster and more miniature.KeywordsFingerprint liveness detectionBroad learningULBPBiometricsReal-time
Due to the diversity of attack materials, automated fingerprint recognition systems (AFRSs) are vulnerable to malicious attacks. It is thus important to propose effective fingerprint presentation attack detection (PAD) methods for the safety and reliability of AFRSs. However, current PAD methods often exhibit poor robustness under new attack types. This paper thus proposes a novel channel-wise feature denoising fingerprint PAD (CFD-PAD) method by handling the redundant noise information ignored in previous studies. The proposed method learns important features of fingerprint images by weighing the importance of each channel and identifying discriminative channels and "noise" channels. Then, the propagation of "noise" channels is suppressed in the feature map to reduce interference. Specifically, a PA-Adaptation loss is designed to constrain the feature distribution so that the features of live fingerprints become more compact and those of spoof fingerprints more dispersed. Experimental results on the LivDet 2017 dataset showed that the proposed CFD-PAD can achieve a 2.53% average classification error (ACE) and a 93.83% true detection rate when the false detection rate equals 1.0% (TDR@FDR=1%). The proposed method also markedly outperforms the best single-model-based methods in terms of ACE (2.53% vs. 4.56%) and TDR@FDR=1% (93.83% vs. 73.32%), which demonstrates its effectiveness. Although we have achieved a result comparable to the state-of-the-art multiple-model-based methods, there is still an increase in TDR@FDR=1% from 91.19% to 93.83%. In addition, the proposed model is simpler, lighter and more efficient, achieving a 74.76% reduction in computation time compared with the state-of-the-art multiple-model-based method.
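The channel-suppression idea described above can be sketched as a simple gating step. This is an illustrative stand-in, not the CFD-PAD method itself: the per-channel importance scores, which the paper learns, are passed in explicitly here, and the keep ratio is an arbitrary assumption.

```python
import numpy as np

def channel_denoise(fmap, importance, keep_ratio=0.5):
    """Suppress the least-important channels of a C x H x W feature map.

    `importance` holds one score per channel; the top fraction given by
    `keep_ratio` is kept and the remaining "noise" channels are zeroed.
    """
    c = fmap.shape[0]
    k = max(1, int(round(c * keep_ratio)))
    keep = np.argsort(importance)[-k:]        # indices of the top-k channels
    mask = np.zeros(c)
    mask[keep] = 1.0
    # Broadcast the 0/1 channel mask over the spatial dimensions.
    return fmap * mask[:, None, None]
```

In the actual method the gating would sit inside the network and be driven by learned weights; the sketch only shows the masking mechanics.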
In recent times, with the increase in identity theft, fingerprint-based biometric systems play a significant role in secure authentication and access restriction. Security and privacy are key aspects that need particular attention while designing these recognition systems. However, impostors use several presentation attack instruments (PAIs) to exploit the biometric infrastructure and breach its security. Though biometric systems are susceptible to diverse threats, presentation attacks (PAs) are the most widely attempted in the current scenario. In a PA, an attacker uses an artifact of a real biometric trait to circumvent the sensor module of the system. In this article, we review state-of-the-art fingerprint presentation attack detection (FinPAD) mechanisms, along with a taxonomy covering the period 2001-2021. The article presents a comprehensive survey of classical hardware-based and handcrafted fingerprint PAD approaches, with a special focus on recent deep learning-based techniques. We provide a summary of publicly available fingerprint anti-spoofing databases, standard PAD evaluation protocols, and the fingerprint Liveness Detection Competition (LivDet) series up to the year 2021. The study explores several open research challenges that yield future directions for investigators in this active field of research. Our study reveals that modern data-driven FinPAD techniques are robust and efficient compared to their hardware-based counterparts in terms of performance. However, designing lightweight fingerprint PAD techniques with smaller datasets that offer better performance in cross-dataset, cross-material, and cross-sensor scenarios still remains an open research issue.
Two-dimensional image motion is the projection of the three-dimensional motion of objects, relative to a visual sensor, onto its image plane. Sequences of time-ordered images allow the estimation of projected two-dimensional image motion as either instantaneous image velocities or discrete image displacements. These are usually called the optical flow field or the image velocity field. Provided that optical flow is a reliable approximation to two-dimensional image motion, it may then be used to recover the three-dimensional motion of the visual sensor (to within a scale factor) and the three-dimensional surface structure (shape or relative depth) through assumptions concerning the structure of the optical flow field, the three-dimensional environment, and the motion of the sensor. Optical flow may also be used to perform motion detection, object segmentation, time-to-collision and focus of expansion calculations, motion compensated encoding, and stereo disparity measurement. We investigate the computation of optical flow in this survey: widely known methods for estimating optical flow are classified and examined by scrutinizing the hypotheses and assumptions they use. The survey concludes with a discussion of current research issues.
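A minimal example of optical-flow estimation in the spirit of this survey is the classic Lucas-Kanade least-squares scheme, sketched below with NumPy gradients. The window size and the gradient approximation are illustrative choices, and no pyramid or iterative refinement is included.

```python
import numpy as np

def lucas_kanade(prev, curr, y, x, win=7):
    """Estimate the (vy, vx) displacement at pixel (y, x) between two frames.

    Solves the brightness-constancy constraint Iy*vy + Ix*vx = -It by
    least squares over a small square window around (y, x).
    """
    prev = np.asarray(prev, float)
    curr = np.asarray(curr, float)
    Iy, Ix = np.gradient(prev)          # spatial gradients of the first frame
    It = curr - prev                    # temporal gradient
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Iy[sl].ravel(), Ix[sl].ravel()], axis=1)  # N x 2
    b = -It[sl].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (vy, vx)
```

For a pattern translated by one pixel along x, the recovered flow at an interior pixel is close to (0, 1), which is the kind of dense displacement field a skin-distortion analysis would aggregate over the whole contact area.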
Memorizing passwords is out: laying your finger on a sensor or peering into a webcam can suffice to gain you immediate access to a system. There is the danger, however, that this new ease might be bought at the expense of security. How well do biometric access controls prevent unauthorized access? We have tested eleven products for you. According to estimates of the IBIA, the international organization of suppliers of biometric devices and programs, worldwide turnover of biometric security devices and programs will this year exceed the 500 million euro mark for the first time. Though the growth is primarily driven by large-scale orders from industrial customers and administrative bodies, the number of products on the market designed for in-home and in-house PC use is nevertheless rising. The range of biometric access-security tools for PCs meanwhile extends from mice and keyboards with integrated fingerprint scanners, to webcam solutions whose software recognizes the facial features of registered persons, to scanners that use the distinct iris patterns of humans to identify individuals. When the PC is booted, the accompanying security software writes itself into the log-on routine, expanding it to include biometric authentication. In many instances the screen saver is integrated into the routine as well, allowing for biometric authentication after breaks from work while the PC is still running. Sophisticated solutions, moreover, permit biometry-based protection of specific programs and/or documents.
This paper describes a new biometric technology based on the optical properties of skin. The new technology can perform both identity verification and sample authenticity based on the optical properties of human skin. When multiple wavelengths of light are used to illuminate skin, the resulting spectrum of the diffusely reflected light represents a complex interaction between the structural and chemical properties of the skin tissue. Research has shown that these spectral characteristics are distinct traits of human skin as compared to other materials. Furthermore, there are also distinct spectral differences from person to person. Personnel at Lumidigm have developed a small and rugged spectral sensor using solid-state optical components operating in the visible and very near infrared spectral region (400-940 nm) that accurately measures diffusely reflected skin spectra. The sensors are used both for biometric determination of identity and for determination of sample authenticity. This paper will discuss both applications of the technology, with emphasis on the use of optical spectra to assure sample authenticity.
The vulnerability of biometric devices to spoof attacks and various anti-spoofing measures are discussed. Spoof attacks involve the use of an artificial biometric sample to gain unauthorized control. Various anti-spoofing measures have been developed to counteract this problem of spoofing. These anti-spoofing techniques include addition of supervision, password, smart cards, enrolment of several biometric samples, multi-modal biometrics, and liveness testing.
This paper introduces a plastic distortion model to cope with the nonlinear deformations characterizing fingerprint images taken with online acquisition sensors. The problem has a great impact on several practical applications, ranging from the design of robust fingerprint matching algorithms to the generation of synthetic fingerprint images. The experimentation on real data validates the model and demonstrates its efficacy in registering minutiae data from highly distorted fingerprint samples.
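The following is a toy sketch of a plastic-distortion-style mapping in the same spirit as the model above: points inside the contact region move rigidly with the finger, points far from it stay fixed, and the ring in between is blended smoothly. The radii and the linear blending function are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def distort(points, center, theta, t, r_in=20.0, r_out=60.0):
    """Apply a simplified skin-distortion mapping to an N x 2 point set.

    Points within r_in of `center` undergo the full rigid motion
    (rotation by `theta` plus translation `t`); points beyond r_out are
    unchanged; points in between receive a linearly blended displacement.
    """
    p = np.asarray(points, float) - center
    r = np.linalg.norm(p, axis=1)
    c, s = np.cos(theta), np.sin(theta)
    moved = p @ np.array([[c, s], [-s, c]]) + t          # rigid motion
    w = np.clip((r_out - r) / (r_out - r_in), 0.0, 1.0)  # 1 inside, 0 outside
    return center + p + w[:, None] * (moved - p)
```

Applied to minutiae coordinates, such a mapping reproduces the qualitative behavior the model describes: full displacement near the pressure center, no displacement at the border of the touched area, and elastic deformation in between.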
In an automatic fingerprint identification or authentication system, the matcher subsystem handles the most complex task: compensating for scaling, translation, rotation, and structural distortions of the fingerprint minutiae features due to skin elasticity. We analyze the effect of controlled image acquisition on matcher performance and show that simple steps in image acquisition can vastly enhance system performance.
Fingerprints are the oldest and most widely used biometrics for personal identification. Unfortunately, it is usually possible to deceive automatic fingerprint identification systems by presenting a well-duplicated synthetic or dismembered finger. This paper introduces a method to provide fingerprint vitality authentication in order to solve this problem: detection of a perspiration pattern over the fingertip skin identifies the vitality of a fingerprint. By mapping the two-dimensional fingerprint images into one-dimensional signals, two ensembles of measures, namely static and dynamic measures, are derived for classification. Static patterns, as well as temporal changes in the dielectric mosaic structure of the skin caused by perspiration, manifest themselves in these signals. Using these measures, the algorithm quantifies the sweating pattern and makes a final decision about the vitality of the fingerprint via a neural network trained on examples.
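The 2-D-to-1-D mapping and the static/dynamic measures described above can be sketched as follows. The ridge locations are assumed to be precomputed (one row index per column), and both measures are simplified stand-ins for the paper's actual feature ensembles, which it feeds to a trained neural network.

```python
import numpy as np

def ridge_signal(img, ridge_rows):
    """Map a 2-D fingerprint image to a 1-D signal by sampling the gray
    value along precomputed ridge row indices, one per image column."""
    cols = np.arange(img.shape[1])
    return np.asarray(img, float)[ridge_rows, cols]

def perspiration_measures(sig0, sig1):
    """One static and one dynamic measure in the spirit of the paper:

    static  - gray-level variation along the ridges of the first capture;
    dynamic - energy of the change between two captures taken a few
              seconds apart, which grows as sweat diffuses along the ridges.
    """
    static = np.std(sig0)
    dynamic = np.mean((sig1 - sig0) ** 2)
    return static, dynamic
```

A live finger is expected to show a nonzero dynamic measure between successive captures, while a spoof made of inert material yields nearly identical signals over time.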