Face Presentation Attack Detection
Javier Hernandez-Ortega, Julian Fierrez, Aythami Morales and Javier Galbally
(corresponding author: Javier Hernandez-Ortega (BiDA Lab,
Universidad Autonoma de Madrid, SPAIN))
(authors: Julian Fierrez (BiDA Lab, Universidad Autonoma
de Madrid, SPAIN), Aythami Morales (BiDA Lab, Universidad
Autonoma de Madrid, SPAIN) and Javier Galbally (European
Commission - DG Joint Research Centre))
Abstract The main scope of this chapter is to serve as a brief introduction to face
presentation attack detection. The following pages present the different presentation
attacks that a face recognition system can confront, in which an attacker presents to
the sensor, mainly a camera, an artifact (generally a photograph, a video, or a mask)
to try to impersonate a genuine user.
First, we introduce the current status of face recognition, its level
Biometrics and Data Pattern Analytics - BiDA Lab, Universidad Autonoma de Madrid, Madrid,
Spain, e-mail: firstname.lastname@example.org
Biometrics and Data Pattern Analytics - BiDA Lab, Universidad Autonoma de Madrid, Madrid,
Spain, e-mail: email@example.com
Biometrics and Data Pattern Analytics - BiDA Lab, Universidad Autonoma de Madrid, Madrid,
Spain, e-mail: firstname.lastname@example.org
European Commission - DG Joint Research Centre, e-mail: email@example.com.
This is a pre-print of an article to be published in the book:
Handbook of Biometric Anti-Spoofing
S. Marcel, M. Nixon, J. Fierrez, N. Evans (Eds.), Springer, 2019.
of deployment and the challenges it faces. In addition, we present the vulnerabilities
and the possible attacks that a biometric system may be exposed to, thus showing
the importance of presentation attack detection methods.
We review different types of presentation attack methods, from simpler to more
complex ones, and the cases in which they could be effective. Later, we summarize the
most popular presentation attack detection methods to deal with these attacks.
Finally, we introduce public datasets used by the research community for exploring
vulnerabilities of face biometrics and developing effective countermeasures
against known spoofs.
1 Introduction
Over the last decades there have been numerous technological advances that helped
to bring new possibilities to people in the form of new devices and services. Some
years ago, it would have been almost impossible to imagine having in the market
devices like current smartphones and laptops at affordable prices, allowing a
high percentage of the population to own a piece of top-level technology at
home, a privilege that historically had been restricted to big companies and research
centers.
Thanks to this quick advance in technology, especially in computer science and
electronics, it has been possible to broadly deploy biometric systems for the first
time. Nowadays, they are present in a high number of scenarios, such as border access
control, surveillance, smartphone authentication, forensics, and on-line services like
e-learning and e-commerce.
Among all the existing biometric traits, face recognition is currently one of the
most extended. The face has been studied as a means of recognition since the 60s,
acquiring special relevance in the 90s following the evolution of computer vision.
Some properties that make human faces interesting for biometric systems are:
they can be acquired at a distance and non-intrusively, and they have good
discriminant characteristics for identity recognition.
At present, face is one of the biometric traits with the highest economic and
social impact due to several reasons:
•Face is the second most largely deployed biometric at world level in terms of
market quota, right after fingerprints. Each day more and more manufacturers
are including face recognition in their products, like Apple with its Face ID.
•Face is adopted in most identification documents, such as the ICAO-compliant
biometric passport or national ID cards.
Given this high level of deployment, attacks targeting face recognition systems
are no longer restricted to theoretical scenarios and have become a real
threat. All kinds of applications and sensitive information can be
menaced by attackers. Giving each face recognition application an appropriate
level of security, as it is being done with other biometric traits, like iris or ﬁngerprint,
should be a top priority.
Historically, the main focus of research in face recognition has been the
improvement of performance at the verification and identification tasks, that is,
better distinguishing between subjects using the available information from
their faces. To achieve that goal, a face recognition system should maximize
the differences between the facial features of different users, and also the
similarities among samples of the same user. Among the variability factors that can affect
the performance of face recognition systems are occlusions, low resolution,
different viewpoints, lighting, etc. Improving the performance of recognition
systems in the presence of these variability factors is currently an active area in face
recognition research.
In contrast to the optimization of their performance, the security vulnerabilities of
face recognition systems have been much less studied in the past, and only over the
recent years some attention has been devoted to detecting different types of attacks.
Regarding these security vulnerabilities, Presentation Attack Detection (PAD)
consists in detecting whether a biometric trait comes from a living person or from
an artifact.
The rest of this chapter is organized as follows: Section 2 overviews the main
vulnerabilities of face recognition systems, describing several presentation
attack approaches. Section 3 introduces presentation attack detection techniques.
Section 4 presents some available public databases for research and evaluation of
face presentation attack detection. Sections 5 and 6 discuss architectures and
applications of face PAD. Finally, concluding remarks are drawn in Section 7.
2 Vulnerabilities in Face Biometrics
In the present chapter we concentrate on Presentation Attacks, i.e., attacks against
the sensor of a face recognition system (see point V1 in Fig. 1). An overview of
indirect attacks to face systems can be found elsewhere. Indirect attacks (points
V2-V7 in Fig. 1) can be prevented by improving certain points of the face recognition
system, i.e., the communication channels, the equipment and infrastructure involved,
and the perimeter security. The techniques needed for improving those modules are
more related to “classical” cybersecurity than to biometrics, so they will not be
covered in this chapter.
On the other hand, presentation attacks are a purely biometric vulnerability that is
not shared with other IT security solutions and that needs specific countermeasures.
In these attacks, intruders use some type of artifact, typically artificial (e.g., a face
photo, a mask, a synthetic fingerprint, or a printed iris image), or try to mimic the
aspect of genuine users (e.g., gait, signature) to fraudulently access the biometric
system.
A large amount of biometric data is exposed (e.g., photographs and videos at
social media sites) showing the face, eyes, voice, and behaviour of people. Presentation
attackers are aware of this reality and take advantage of those sources of information
to try to circumvent face recognition systems. This is one of the well-known
drawbacks of biometrics: “biometric traits are not secrets”. In this context, it is
worth noting that the factors that make face an interesting trait for person recognition,
that is, images that can be taken at a distance and in a non-intrusive way, also make it
especially vulnerable to attackers who want to use biometric information in an
illegitimate way.
In addition to the fact that face images of real users are fairly easy to obtain, face
recognition systems are known to respond weakly to presentation attacks, for
example those in one of these three categories:
1. Using a photograph of the user to be impersonated.
2. Using a video of the user to be impersonated.
3. Building and using a 3D model of the attacked face, for example a hyperrealistic
mask.
The success probability of an attack may vary considerably depending on the
characteristics of the face recognition system, for example if it uses visible light or
works in another range of the spectrum, if it has one or several sensors, the resolu-
tion, the lighting; and also depending on the characteristics of the artifact: quality
of the texture, the appearance, the resolution of the presentation device, the type of
support used to present the fake, or the background conditions.
Without implementing presentation attack detection measures, most state-of-the-art
facial biometric systems are vulnerable to simple attacks that a regular
person would detect easily. This is the case, for example, of trying to imperson-
ate a subject using a photograph of his face. Therefore, in order to design a secure
face recognition system in a real scenario, for instance for replacing password-based
Fig. 1 Scheme of a generic biometric system. In this type of system, there exist several modules
and points that can be the target of an attack (V1 to V7). Presentation attacks are performed at
sensor level (V1), without the need of having access to the interior of the system. Indirect attacks
(V2 to V7) can be performed at the databases, the matcher, the communication channels, etc.; in
this type of attack the attacker needs access to the interior of the system.
authentication, Presentation Attack Detection (PAD) techniques should be a top pri-
ority from the initial planning of the system.
Given the discussion above, it could be stated that face recognition systems without
PAD techniques are at clear risk, so a question often arises: What technique(s)
should be adopted to secure them? The fact is that counteracting this type of threat
is not a straightforward problem, as new specific countermeasures need to be
developed and adopted whenever a new attack appears.
With the aim of encouraging and boosting the research in presentation attack
detection techniques in face biometrics, there are numerous and very diverse
initiatives in the form of dedicated tracks, sessions, and workshops in biometric-specific
and general signal processing conferences [1, 2]; organization of competitions;
and acquisition of benchmark datasets [13, 52]. These initiatives have resulted in the
proposal of new presentation attack detection methods; standards in the area [28, 29]; and
patented PAD mechanisms for face recognition systems.
2.1 Attacking Methods
Typically, face recognition systems can be spoofed by presenting to the sensor (e.g.
a camera) a photograph, a video or a 3D mask of a targeted person (see Fig. 2). There
are other possibilities in order to circumvent a face recognition system, such as using
Fig. 2 Examples of face presentation attacks: The upper image shows an example of a genuine
user, and below it there are some examples of presentation attacks, depending on the artifact shown
to the sensor: a photo, a video, a 3D mask, and others.
makeup or plastic surgery. However, photographs and videos are the most
common types of attacks due to the high exposure of faces (e.g., social media, video-
surveillance), and the low cost of high-resolution digital cameras, printers, and digital
screens.
Regarding the attack types, a general classification can be done taking into account
the nature and the level of complexity of the artifact used to attack: photo-based,
video-based, and mask-based (as can be seen in Fig. 2). It must be remarked
that this is only a classification of the most common types of attacks, but there could
exist more complex and newer attacks that may not fall into any of these categories,
or that may belong to several categories at the same time.
2.1.1 Photo Attacks
A photo attack consists in displaying a photograph of the attacked identity to the
sensor of the face recognition system (see example in Fig. 2).
Photo attacks are the most critical type of attack to protect against because
of several factors. For example, printing color images of the face of the genuine
user is really cheap and easy to do. These are usually called print attacks in the
literature. Alternatively, the photos can be displayed on the high-resolution screen of
a device (e.g., a smartphone, a tablet, or a laptop). It is also easy to obtain samples of
genuine faces thanks to the recent growth of social media sites like Facebook, Twitter,
Flickr, etc. With the price reduction that digital cameras have experienced
in recent years, it is also possible to obtain photos of a legitimate user simply by
using a hidden camera.
Among the photo attack techniques there are also more complex ones, like
photographic masks. This technique consists in printing a photograph of the subject’s face
and then making holes for the eyes and the mouth. This is a good way to evade
presentation attack detection techniques based on blinking and mouth movements.
Even if these attacks seem too simple to work in a real scenario, some studies
performed by private security firms indicate that many commercial systems are
vulnerable to them. Due to the ease of carrying out this type of attack, implementing
robust countermeasures that perform well against them should be a must
for any facial recognition system.
2.1.2 Video Attacks
Similarly to the case of photo attacks, video acquisition of the people intended to be
impersonated is also becoming increasingly easy with the growth of public video
sharing sites and social networks, or even using a hidden camera. Another reason to
use this type of attack is that it increases the probability of success by introducing
a liveness appearance in the displayed fake biometric sample.
Once a video of the legitimate user is obtained, an attacker can play it on any
device that reproduces video (smartphone, tablet, laptop, etc.) and then present it to
the sensor/camera (see Fig. 2). This type of attack is often referred to in the
literature as a replay attack, a more sophisticated version of the simple photo attack.
Replay attacks are more difficult to detect than photo spoofs, as
not only the face texture and shape are emulated, but also the face dynamics, like eye
blinking, mouth, and/or facial movements. Due to their higher sophistication, it is
reasonable to assume that systems that are vulnerable to photo attacks will perform
even worse against video attacks, and also that being robust against photo
attacks does not imply being equally robust against video attacks. Therefore,
specific countermeasures need to be developed and implemented.
2.1.3 Mask Attacks
In this type of attack the presented artifact is a 3D mask of the user’s face. The
attacker builds a 3D reconstruction of the face and presents it to the sensor/camera.
Mask attacks require more skill to be well executed than the previous attacks, and
also access to extra information in order to construct a realistic mask of the genuine
user.
There are different types of masks depending on the complexity of the manu-
facturing process and the amount of data that is required. Some examples, ordered
from simpler to more complex are:
•The simplest method is to print a 2D photograph of the user’s face and then stick
it to a deformable structure. Examples of this type of structure could be a t-shirt
or a plastic bag. The attacker can then put the bag on his face and present it
to the biometric sensor. This attack can mimic some deformable patterns of the
human face, allowing an attacker to spoof some low-end 3D face recognition systems.
Fig. 3 Example of 3D masks. These are the 17 hard-resin facial masks used to create the 3DMAD
dataset.
•Image reconstruction techniques can generate 3D models from two or more pictures
of the genuine user’s face, e.g., one frontal photo and one profile photo. Using these
photographs, the attacker could be able to extrapolate a 3D reconstruction of the
real face (see Fig. 2). This method is unlikely to spoof top-level 3D face recognition
systems, but it can be an easy and cheap option to spoof a high number of
less sophisticated systems.
•A more sophisticated method consists in directly making a 3D capture of a
genuine user’s face (see Fig. 3). This method entails a higher level of difficulty
than the previous ones, since a 3D acquisition can be done only with dedicated
equipment and it is complex to obtain without the cooperation of the end user.
However, this is becoming more feasible and easier each day with the new
generation of affordable 3D acquisition sensors.
When using any of the last two methods, the attacker would be able to build a 3D
mask from the model he has computed. Even though the price of 3D printing devices
is decreasing, 3D printers with sufficient quality and definition are still expensive.
See reference for an example of 3D-printed masks. There are some companies
where such 3D face models may be obtained for a reasonable price1.
This type of attack may be more likely to succeed due to the high realism of
the spoofs. As the complete structure of the face is imitated, it becomes difﬁcult to
find effective countermeasures. For example, the use of depth information becomes
ineffective against this particular threat.
These attacks are far less common than the previous two categories because of the
difficulties mentioned above in generating the spoofs. Despite the technical
complexity, mask attacks have started to be systematically studied thanks to the acquisition
of the first specific databases, which include masks of different materials and sizes
[13, 18, 34, 37].
3 Presentation Attack Detection
Face recognition systems are designed to differentiate between genuine users, not to
determine whether the biometric sample presented to the sensor is real or fake. A presentation
attack detection method is usually accepted to be any technique that is able to au-
tomatically distinguish between real biometric traits presented to the sensor and
synthetically produced artifacts.
This can be done in four different ways: (i) with available sensors to detect
in the signal any pattern characteristic of live traits, (ii) with dedicated hardware
to detect an evidence of liveness, which is not always possible to deploy, (iii) with
a challenge-response method where a presentation attack can be detected by re-
questing the user to interact with the system in a speciﬁc way, or (iv) employing
recognition algorithms intrinsically robust against attacks.
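Of these four strategies, the challenge-response approach (iii) is the easiest to illustrate. The sketch below is a hypothetical skeleton, not any particular deployed system: the challenge names are invented for illustration, and the detectors that would watch the camera stream and emit the observed events are assumed to exist elsewhere.

```python
import random

# Hypothetical liveness challenges the system can request from the user.
CHALLENGES = ["blink", "turn_head_left", "smile"]

def issue_challenge(rng=random):
    """Pick a random challenge to present to the user."""
    return rng.choice(CHALLENGES)

def verify_response(challenge, observed_events, timeout_s=5.0):
    """Accept only if the requested action was observed before the deadline.

    observed_events: list of (event_name, t_seconds) tuples produced by
    hypothetical detectors analyzing the camera stream.
    """
    for name, t in observed_events:
        if name == challenge and 0.0 <= t <= timeout_s:
            return True
    return False
```

For example, `verify_response("blink", [("blink", 1.2)])` accepts, while a blink detected after the timeout, or a different action, is rejected. Randomizing the challenge is what defeats pre-recorded replay videos, since the attacker cannot know the request in advance.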
1http://real-f.jp, http://www.thatsmyface.com, https://shapify.me, and http://www.sculpteo.com
Table 1 Selection of relevant works in software-based face PAD.
Method Year Type of Images Database used Type of features
 2009 Visible and IR photo Private Color (reﬂectance)
 2011 RGB video PRINT-ATTACK Face-background motion
 2012 RGB video REPLAY-ATTACK Texture based
 2013 RGB photo and video NUAA PI, PRINT-ATTACK and CASIA FAS Texture based
 2013 RGB photo and video PRINT-ATTACK and REPLAY ATTACK Texture based
 2013 RGB video PHOTO-ATTACK Motion correlation analysis
 2014 RGB video REPLAY-ATTACK Image Quality based
 2015 RGB video Private Color (challenge reﬂections)
 2016 RGB video 3DMAD and private rPPG (color based)
 2017 RGB video OULU-NPU Texture based
 2018 RGB and NIR video 3DMAD and private rPPG (color based)
Due to their ease of deployment, the most common countermeasures are based
on employing the already existing hardware and running software PAD algorithms
over it. A selection of relevant PAD works based on software techniques is shown
in Table 1. A high number of the software-based PAD techniques rely on liveness
detection without needing any special help from the user. This type of approach
is really interesting, as it allows upgrading the countermeasures in existing systems
without requiring new pieces of hardware, and permits authentication
to be done in real time, as it does not need user interaction. These presentation attack
detection techniques aim to detect physiological signs of life (such as eye blinking,
facial expression changes, or mouth movements), or any other differences between
presentation attack artifacts and real biometric traits (e.g., texture and deformation).
There also exist works in the literature that use special sensors, such as 3D scanners
to verify that the captured faces are not 2D (i.e., flat objects), or thermal
sensors to detect the temperature distribution associated with real living faces.
However, these approaches are not popular, even though they tend to achieve higher
presentation attack detection rates, because in most systems the required hardware is
expensive and not broadly available.
3.1 PAD Methods
The software-based PAD methods can be divided into two main categories, depending
on whether they take into account temporal information or not: static and dynamic.
3.1.1 Static Analysis
This subsection covers techniques that analyze static features, like the facial texture,
to discover unnatural characteristics that may be related to presentation attacks.
The key idea of the texture-based approach is to learn and detect the structure
of facial micro-textures that characterize real faces but not fake ones. Micro-texture
analysis has been effectively used in detecting photo attacks from single face
images: texture descriptors such as Local Binary Patterns (LBP) or
Gray-Level Co-occurrence Matrices (GLCM) are extracted, followed by a learning stage to
discriminate between real and fake textures.
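As an illustration of this pipeline, the sketch below computes a basic 3x3 LBP code image and its normalized histogram with NumPy. It is a minimal didactic version of LBP, not the exact descriptor used in the cited works; a real PAD system would extract such histograms from labeled real and attack faces and train a classifier (e.g., an SVM) on them.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 Local Binary Patterns: each interior pixel is encoded by
    thresholding its 8 neighbours against the centre, giving a code in [0, 255]."""
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]  # centre pixels
    # neighbour window offsets, clockwise from the top-left corner
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        n = g[dy:dy + c.shape[0], dx:dx + c.shape[1]]  # shifted neighbour view
        codes += (n >= c).astype(int) << bit
    return codes

def lbp_histogram(gray):
    """256-bin normalized LBP histogram, usable as a PAD texture feature."""
    hist, _ = np.histogram(lbp_image(gray), bins=np.arange(257), density=True)
    return hist
```

The histogram of a face crop then serves as the feature vector fed to the learning stage mentioned above.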
For example, the recapturing process carried out by a potential attacker (e.g., the
printing of an image to create a spoof) usually introduces quality degradation in the
sample, making it possible to distinguish between a genuine access attempt and an
attack by analyzing their textures.
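A very simple proxy for this recapturing degradation is image sharpness. The sketch below is a toy example rather than a published method: it measures the variance of a discrete Laplacian, which tends to drop for blurry recaptured (printed or replayed) faces, and the threshold is an arbitrary placeholder that would have to be tuned on a PAD database.

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian over the interior pixels:
    a crude sharpness measure that tends to be lower for recaptured images."""
    g = np.asarray(gray, dtype=float)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

def quality_says_live(gray, threshold=100.0):
    """Hypothetical decision rule: below-threshold sharpness suggests a spoof."""
    return laplacian_variance(gray) >= threshold
```

In practice, published quality-based PAD methods combine many such image-quality measures rather than relying on a single one.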
The major drawback of texture-based presentation attack detection is that
high-resolution images are required in order to extract the fine facial details
that are needed to discriminate genuine faces from presentation attacks. These
countermeasures will not work properly under bad illumination conditions that
degrade the quality of the captured images.
Most of the time, the differences between genuine faces and artificial materials
can be seen in images acquired in the visible spectrum, with or without a preprocessing
stage. However, sometimes a translation to a more suitable feature space, or
working with images from outside the visible spectrum, is needed in order to
distinguish between real faces and spoof-attack images.
In addition to the texture, there are other properties of the human face and skin
that can be exploited to differentiate between real and fake samples. Some of these
properties are: absorption, reflection, scattering, and refraction.
This type of approach may be useful to detect photo attacks, video attacks, and
also mask attacks, since all kinds of spoofs may present texture or optical properties
different from those of real faces.
3.1.2 Dynamic Analysis
These techniques aim at distinguishing presentation attacks from genuine
access attempts based on the analysis of motion. The analysis may consist in
detecting any physiological sign of life, for example: pulse, eye blinking, facial
expression changes, or mouth movements. This objective is achieved using knowledge
of the human anatomy and physiology.
As stated in Section 2, photo attacks are not able to reproduce all signs of life
because of their static nature. However, video attacks and mask attacks can emulate
blinking, mouth movements, etc. Regarding these types of presentation attacks, it
can be assumed that the movement of the presented artifacts differs from the movement
of real human faces, which are complex nonrigid 3D objects with deformations.
One simple approach of this type consists in trying to find correlations between
the movement of the face and the movement of the background with respect to the
camera [3, 33]. If the presented fake face also contains a piece of fake background,
the correlation between the movement of both regions should be high. This could be
the case of a replay attack, in which the face is shown on the screen of some device.
This correlation in the movements allows evaluating the degree of synchronization
within the scene during a defined period of time. If there is no movement, as in the
case of a fixed-support attack, or too much movement, as in a hand-based attack,
the input data is likely to come from a presentation attack. Genuine authentication
attempts will usually show uncorrelated movement between the face and the
background, since the user’s head generally moves independently from the background.
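A minimal sketch of this idea, assuming grayscale frames as NumPy arrays and known face and background regions; it is an illustration of the correlation cue, not the exact method of the cited works:

```python
import numpy as np

def motion_signal(frames, region):
    """Mean absolute frame difference inside a region over time.

    frames: array of shape (T, H, W); region: (y0, y1, x0, x1) slice bounds.
    """
    y0, y1, x0, x1 = region
    roi = np.asarray(frames, dtype=float)[:, y0:y1, x0:x1]
    return np.abs(np.diff(roi, axis=0)).mean(axis=(1, 2))

def face_background_correlation(frames, face_region, bg_region):
    """High correlation between face and background motion suggests both lie
    on the same presented surface (e.g., the screen of a replayed video)."""
    f = motion_signal(frames, face_region)
    b = motion_signal(frames, bg_region)
    if f.std() == 0 or b.std() == 0:
        # no motion at all: consistent with a fixed-support attack
        return 1.0
    return float(np.corrcoef(f, b)[0, 1])
```

A correlation close to 1 flags a likely presentation attack, while genuine attempts produce weakly correlated signals.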
Some works regarding face liveness detection exploit the fact that humans blink
spontaneously every few seconds, analyzing videos to develop eye blink-based
presentation attack detection schemes.
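One common way to implement such a scheme, sketched below, is the eye aspect ratio (EAR) of Soukupova and Cech, computed from six eye landmarks. Note that EAR is a popular blink cue but not necessarily the exact measure used in the works cited above; the threshold value and the landmark detector that would supply the points are assumptions.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six eye-contour landmarks p1..p6 (shape (6, 2)):
    the ratio drops sharply when the eye closes."""
    p = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count closure events: runs of >= min_frames frames below the threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

A video showing no blinks over a sufficiently long observation window can then be flagged as a probable photo attack.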
Other works obtain further evidence of liveness using Eulerian video
magnification, applying it to enhance small changes in face regions that often
go unnoticed. Among the changes amplified by this technique are the small color and
motion variations on the face caused by the human blood flow, which allow finding
peaks in the frequency domain that correspond to the human heartbeat.
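A basic version of this heartbeat cue (often called remote photoplethysmography, rPPG) can be sketched as follows: average the green channel over the face region in each frame, then look for the dominant frequency within a plausible heart-rate band. This is a simplified illustration, not the Eulerian magnification method itself, and the band limits are common rPPG assumptions rather than values from the cited works.

```python
import numpy as np

def estimate_pulse_bpm(green_means, fps):
    """Estimate heart rate from the mean green-channel intensity of the face
    region over time: dominant frequency in the 0.7-4.0 Hz band (42-240 bpm)."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()  # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    if not band.any():
        return None  # sequence too short to resolve the heart-rate band
    return 60.0 * freqs[band][np.argmax(power[band])]
```

The absence of a clear peak in this band (or a peak far outside physiological values) would suggest that the presented face has no blood flow, i.e., a photo or mask attack.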
As mentioned above, motion analysis approaches usually require some level of
motion between different head parts or between the head and the background.
Sometimes this can be achieved through user cooperation. Therefore, some of these
techniques can only be used in scenarios without strict time requirements, as they may
need time for analyzing a piece of video and/or for recording the user’s response to
a command. Due to the nature of these approaches, some videos and well-executed
mask attacks may deceive the countermeasures.
4 Face Presentation Attacks Databases
In this section we overview some publicly available databases for research in face
PAD. The information contained in these datasets can be used for the development
and evaluation of new face PAD techniques.
As mentioned in the previous sections, with the recent spread of biometric
applications, the threat of presentation attacks has grown, and the biometric community
is acquiring large and complete databases to make recognition systems
more robust to presentation attacks.
International competitions have played a key role in promoting the development of
PAD measures. These competitions include the IJCB 2017 Competition on Generalized
Face Presentation Attack Detection in Mobile Authentication Scenarios,
and the 2011 and 2013 2D Face Anti-Spoofing contests [8, 11].
Despite the increasing interest of the community in studying the vulnerabilities
of face recognition systems, the availability of PAD databases is still scarce. The
acquisition of new datasets is highly difﬁcult because of two main reasons:
•Technical aspects: the acquisition of presentation attack data offers additional
challenges beyond the usual difficulties encountered in the acquisition of standard
biometric databases, in order to correctly capture fake data similar to that
present in real attacks (e.g., the generation of multiple types of artifacts).
•Legal aspects: as in the face recognition field in general, data protection limits
the distribution or sharing of biometric databases among research groups. These
legal restrictions have forced most laboratories or companies working in the field
of presentation attacks to acquire their own datasets, which are usually small and limited.
In the area of face recognition PAD, we can ﬁnd the following public databases:
•The NUAA Photo Imposter Database (NUAA PI DB)  was one of the ﬁrst
efforts to generate a large public face PAD dataset. It contains images of real
access attempts and print-attacks of 15 users. The images contain frontal faces
with a neutral expression captured using a webcam. Users were also told to
avoid eye-blinks. The attacks are performed using printed photographs on pho-
tographic paper. Examples from this database can be seen in Fig. 4. The NUAA
PI DB is property of the Nanjing University of Aeronautics and Astronautics,
and it can be obtained at http://parnec.nuaa.edu.cn/xtan/data/
•The YALE-RECAPTURED DB  appeared shortly after, adding to the attacks
of the NUAA PI DB also the difﬁculty of varying illumination conditions as well
as considering LCD spoofs, not only printed photo attacks. The dataset consists
of 640 static images of real access attempts and 1,920 attack samples, acquired
from 10 different users. The YALE-RECAPTURED DB is a compilation of im-
ages from the NUAA PI DB and the Yale Face Database B made by University
•The PRINT-ATTACK DB  represents another step in the evolution of face
PAD databases, both in terms of the size (50 different users were captured) and
of the types of data acquired (it contains video sequences instead of still images).
It only considers the case of photo attacks. It consists of 200 videos of real ac-
cesses and 200 videos of print attack attempts from 50 different users. Videos
were recorded under two different background and illumination conditions. At-
Fig. 4 Samples from the
NUAA Photo Imposter
Database. Samples from
two different users are shown.
Each row corresponds to a
different session. In each row,
the left pair is from a live
human and the right pair from
a photo fake.
tacks were carried out with hard copies of high resolution photographs of the
50 users, printed on plain A4 paper. The PRINT-ATTACK DB is property of
the Idiap Research Institute, and it can be obtained at https://www.idiap.
– The PHOTO-ATTACK database is an extension of the PRINT-ATTACK
database. It also provides photo attacks, with the difference that the attack
photographs are presented to the camera using different devices, such as mobile
phones and tablets. It can be obtained at https://www.idiap.ch/
– The REPLAY-ATTACK database is also an extension of the PRINT-ATTACK
database. It contains short videos of both real-access and presentation
attack attempts of 50 different subjects. The attack attempts present
in the database are video attacks using mobile phones and tablets. The attack
attempts are also distinguished depending on how the attack device is
held: hand-based and fixed-support. Examples from this database can be seen
in Fig. 5. It can be obtained at https://www.idiap.ch/dataset/
•The CASIA FAS DB, similarly to the REPLAY-ATTACK database, contains
photo attacks with different supports (paper, phones, and tablets) and also replay
video attacks. The main difference with the REPLAY-ATTACK database is that
while in the REPLAY DB only one acquisition sensor was used with different
attacking devices and illumination conditions, the CASIA FAS DB was captured
using sensors of different quality under uniform acquisition conditions. The CA-
SIA FAS DB is property of the Institute of Automation, Chinese Academy of
Sciences (CASIA), and it can be obtained at http://www.cbsr.ia.ac.
Fig. 5 Examples of real and fake samples from the REPLAY-ATTACK DB . The images come from videos acquired in two illumination and background scenarios (controlled and adverse). The first row belongs to the controlled scenario, while the second row represents the adverse conditions. (a) shows real samples, (b) shows samples of a printed photo attack, (c) corresponds to an LCD photo attack, and (d) to a high-definition photo attack.
Table 2 Features of the main public databases for research in face PAD. Comparison of the most relevant features of each of the databases described in this chapter.

Database                  Users # (real/fakes)  Samples # (real/fakes)  Attack Types     Attack Support  Illumination
NUAA PI                   15/15                 5,105/7,509             Photo            Held            Uncont.
YALE-RECAPTURED           10/10                 640/1,920               Photo            Held            Uncont.
PRINT-ATTACK (a) [4,3,9]  50/50                 200/1,000               Photo and Video  Held and Fixed  Cont. & Uncont.
CASIA FAS                 50/50                 150/450                 Photo and Video  Held            Uncont.
3D MASK-ATTACK            17/17                 170/85                  Mask             Held            Cont.
OULU-NPU                  55/55                 1,980/3,960             Photo and Video  Mobile          Uncont.

(a) Containing also PHOTO-ATTACK DB and REPLAY-ATTACK DB
• The 3D MASK-ATTACK DB, as its name indicates, contains information related to mask attacks. As described above, all previous databases contain attacks performed with 2D artifacts (i.e., photo or video) that are very rarely effective against systems capturing 3D face data. It contains access attempts of 17 different users. The attacks were performed with real-size 3D masks manufactured by ThatsMyFace.com. For each access attempt a video was captured using the Microsoft Kinect for Xbox 360, which provides RGB data and also depth information. This allows the evaluation of both 2D and 3D PAD techniques, and also their fusion. Example masks from this database can be seen in Fig. 3. The 3D MASK-ATTACK DB is property of the Idiap Research Institute, and it can be obtained at https://www.idiap.ch/dataset/3dmad.
• The OULU-NPU DB is a recent dataset that contains presentation attacks acquired with mobile devices. Nowadays, mobile authentication is one of the most relevant scenarios due to the widespread use of smartphones. However, in most datasets the images are acquired under constrained conditions, whereas mobile data may present motion blur, and changing illumination conditions, backgrounds, and head poses. The database consists of 5940 videos of 55 subjects recorded under three distinct illumination conditions with 6 different smartphone models. The resolution of all videos is 1920×1080, and the database comprises print and video-replay attacks. The OULU-NPU DB is property of the University of Oulu, it has been used in the IJCB 2017 Competition on Generalized Face Presentation Attack Detection, and it can be obtained at https://sites.google.

In Table 2 we show a comparison of the most relevant features of the databases described above.
Fig. 6 Scheme of a parallel score-level fusion between a PAD and a face recognition system. In this type of scheme, the input biometric data is sent at the same time to both the face recognition system and the PAD system, each one generates an independent score, and the two scores are then fused into one single decision.
5 Integration with Face Recognition Systems
In order to create a face recognition system resistant to presentation attacks, the proper PAD techniques have to be selected. After that, the integration of the PAD countermeasures with the face recognition system can be done at different levels, namely score-level or decision-level fusion.
The first possibility consists in using score-level fusion, as shown in Figure 6. This is a popular approach due to its simplicity and the good results it has given in the fusion of multimodal biometric systems [15, 14, 43]. In this case, the biometric data is sent simultaneously to both the face recognition system and the PAD system, and each one computes its own score. The scores from the two systems are then combined into a new final score that is used to determine whether the sample comes from a genuine user or not. The main advantage of this approach is its speed, as both modules, i.e., the PAD and face recognition modules, perform their operations at the same time. This fact can be exploited in systems with good parallel computation specifications, such as those with multicore/multithread processors.
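The parallel score-level scheme can be sketched as follows. This is a minimal illustration, not an implementation from the chapter: it assumes both modules output normalized scores in [0, 1], and the weight and threshold values are hypothetical.

```python
# Hypothetical sketch of parallel score-level fusion: the face recognition
# score and the PAD (liveness) score are computed independently and then
# combined with a weighted sum before one single accept/reject decision.

def fuse_scores(recognition_score: float, pad_score: float,
                weight: float = 0.5) -> float:
    """Weighted-sum fusion of two normalized scores in [0, 1]."""
    return weight * recognition_score + (1.0 - weight) * pad_score

def accept(recognition_score: float, pad_score: float,
           threshold: float = 0.7) -> bool:
    """Accept the access attempt only if the fused score passes the threshold."""
    return fuse_scores(recognition_score, pad_score) >= threshold

# A genuine, live user: high recognition score and high liveness score.
print(accept(0.9, 0.8))   # True
# A photo attack: the face matches well, but the PAD score is low.
print(accept(0.9, 0.1))   # False
```

In a real deployment the weight would be tuned on development data, and the two scores would need to be normalized to a common range before fusion.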
Another common way to combine PAD and face recognition systems is a serial scheme, as in Figure 7, in which the PAD system makes its decision first, and the samples are processed by the face recognition system only if they are determined to come from a living person. Thanks to this decision-level fusion, the face recognition system searches for the identity that corresponds to the biometric sample knowing beforehand that the sample does not come from a presentation attack. Differently from the parallel approach, in the serial scheme the average time for an access attempt will be longer due to the consecutive delays of the PAD and the face recognition modules. However, this approach spares the face recognition system extra work in the case of a presentation attack, since the attack is detected at an early stage.
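The serial decision-level scheme amounts to an early-exit pipeline. The sketch below is illustrative only: the detectors are stand-ins, and the field names and thresholds are hypothetical.

```python
# Hypothetical sketch of the serial (decision-level) scheme: the PAD
# module decides first, and the face recognition module runs only on
# samples classified as coming from a living person.
from typing import Optional

def pad_is_live(sample: dict, pad_threshold: float = 0.5) -> bool:
    """Stand-in PAD decision based on a precomputed liveness score."""
    return sample["liveness_score"] >= pad_threshold

def recognize(sample: dict, match_threshold: float = 0.6) -> Optional[str]:
    """Stand-in recognition step; returns an identity or None."""
    if sample["match_score"] >= match_threshold:
        return sample["claimed_id"]
    return None

def serial_pipeline(sample: dict) -> Optional[str]:
    # Reject presentation attacks early, before any recognition work.
    if not pad_is_live(sample):
        return None
    return recognize(sample)

live_user = {"liveness_score": 0.9, "match_score": 0.8, "claimed_id": "user_42"}
photo_attack = {"liveness_score": 0.2, "match_score": 0.8, "claimed_id": "user_42"}
print(serial_pipeline(live_user))     # user_42
print(serial_pipeline(photo_attack))  # None
```

Note that the photo attack is rejected without ever invoking the recognition step, which is exactly the saving the serial scheme provides.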
Fig. 7 Scheme of a serial fusion between a PAD and a face recognition system. In this type of scheme the PAD system makes its decision first, and the samples are processed by the face recognition system only if they are determined to come from a living person.
Attackers can use a great number of spoofing artifacts of very different natures, with no constraints. Therefore, it is of the utmost importance to collect new databases covering new scenarios in order to develop more effective PAD methods. Otherwise, it will be difficult to guarantee an acceptable level of security in face recognition systems. However, it is especially challenging to recreate real attacking conditions in a laboratory evaluation. Under controlled conditions, systems are tested against a restricted number of typical presentation artifacts. These restrictions make it unfeasible to collect a database with all the different fake artifacts that may be found in the real world.
Normally, PAD techniques are developed to fight against one concrete type of attack (e.g., printed photos), retrieved from a specific dataset. The countermeasures are thus designed to achieve high presentation attack detection rates against that particular spoofing technique. However, when testing these same techniques against other types of fake artifacts (e.g., video-replay attacks), the system is usually unable to detect them efficiently. There is one important lesson to be learned from this fact: there is no superior PAD technique that outperforms all the others in all conditions, so knowing which technique to use against each type of attack is a key element. It would be interesting to take different countermeasures that have proved to be robust against particular types of artifacts, and develop fusion schemes that combine their results, thereby achieving high performance against a variety of presentation attacks [15, 24].
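One simple way to realize such a combination of specialized countermeasures is to fuse their per-detector scores with a fixed rule. The sketch below is a hypothetical illustration (detector scores and rule names are invented for the example), not a scheme from the cited works.

```python
# Hypothetical sketch of fusing several specialized PAD detectors, each
# robust against one artifact type, into a single liveness score.

def fuse_pad_scores(scores, rule="mean"):
    """Combine per-detector liveness scores in [0, 1] into one score."""
    if rule == "mean":
        # Average rule: balances the opinions of all detectors.
        return sum(scores) / len(scores)
    if rule == "min":
        # Conservative rule: flag an attack if any detector is suspicious.
        return min(scores)
    raise ValueError(f"unknown fusion rule: {rule}")

# Scores from three detectors tuned to print, replay and mask attacks;
# here the replay detector is the suspicious one.
detector_scores = [0.9, 0.3, 0.8]
print(round(fuse_pad_scores(detector_scores, "mean"), 3))  # 0.667
print(fuse_pad_scores(detector_scores, "min"))             # 0.3
```

The choice of rule encodes a security/usability trade-off: the min rule rejects more attacks at the cost of more false alarms, while the mean rule is more forgiving.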
On the other hand, as technology progresses constantly, new hardware devices and software techniques continue to appear. It is important to keep track of this rapid technological progress, since some of the advances can be the key to developing novel and efficient presentation attack detection techniques. For example, focusing research on the biological nature of biometric traits (e.g., thermogram, blood flow, etc.) should be considered, as the standard techniques based on texture and movement seem to be inefficient against some spoofing artifacts.
Face recognition systems are increasingly being deployed in a diversity of scenarios and applications. Due to this widespread use, they have to withstand a wide variety of attacks. Among all these threats, presentation attacks have a particularly high impact.
In this chapter, a review of the strengths and vulnerabilities of face as a biometric
trait has been presented. We have described the main presentation attacks, differ-
entiating between multiple approaches, the corresponding PAD countermeasures,
and the public databases that can be used to evaluate new protection techniques.
The weak points of the existing countermeasures have been stated, and also some
possible future directions to deal with those weaknesses have been discussed.
Due to the nature of face recognition systems, without the correct PAD coun-
termeasures, most of the state-of-the-art systems are vulnerable to attacks, since
they do not integrate any module to discriminate between real and fake samples.
Existing databases are useful resources to study presentation attacks, but the PAD
techniques developed using them might not be robust in all possible attack scenarios. The combination of countermeasures with fusion schemes, and the acquisition of new challenging databases, could be a key asset to counter the new types of attacks that could appear.
To conclude this introductory chapter, it could be said that even though a great
amount of work has been done to ﬁght against face presentation attacks, there are
still big challenges to be faced in this topic, due to the evolving nature of the attacks,
and the critical applications in which these systems are deployed in the real world.
Acknowledgements This work was done in the context of the TABULA RASA and BEAT projects funded under the 7th Framework Programme of EU, and the project CogniMetrics

References
1. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP) (2017)
2. Proc. IEEE/IAPR Int. Joint Conf. Biometrics (IJCB) (2017)
3. Anjos, A., Chakka, M.M., Marcel, S.: Motion-based counter-measures to photo attacks in face recognition. IET Biometrics 3(3), 147–158 (2013)
4. Anjos, A., Marcel, S.: Counter-measures to photo attacks in face recognition: a public database
and a baseline. In: International Joint Conference on Biometrics (IJCB), pp. 1–7 (2011)
5. Bharadwaj, S., Dhamecha, T.I., Vatsa, M., Singh, R.: Computationally efﬁcient face spooﬁng
detection with motion magniﬁcation. In: Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition Workshops, pp. 105–110 (2013)
6. Boulkenafet, Z., Komulainen, J., Akhtar, Z., Benlamoudi, A., Samai, D., Bekhouche, S., Ouaﬁ,
A., Dornaika, F., Taleb-Ahmed, A., Qin, L., et al.: A competition on generalized software-
based face presentation attack detection in mobile scenarios. In: International Joint Conference
on Biometrics (IJCB), pp. 688–696 (2017)
7. Boulkenafet, Z., Komulainen, J., Li, L., Feng, X., Hadid, A.: OULU-NPU: A Mobile Face
Presentation Attack Database with Real-World Variations. In: IEEE International Conference
on Automatic Face Gesture Recognition, pp. 612–618 (2017)
8. Chakka, M.M., Anjos, A., Marcel, S., Tronci, R., Muntoni, D., Fadda, G., Pili, M., Sirena, N., Murgia, G., Ristori, M., Roli, F., Yan, J., Yi, D., Lei, Z., Zhang, Z., Li, S.Z., Schwartz, W.R., Rocha, A., Pedrini, H., Lorenzo-Navarro, J., Castrillón-Santana, M., Määttä, J., Hadid, A., Pietikäinen, M.: Competition on counter measures to 2-D facial spoofing attacks. In: International Joint Conference on Biometrics (IJCB) (2011)
9. Chingovska, I., Anjos, A., Marcel, S.: On the Effectiveness of Local Binary Patterns in Face
Anti-spooﬁng. In: IEEE BIOSIG (2012)
10. Chingovska, I., Anjos, A., Marcel, S.: Anti-spooﬁng in action: joint operation with a veri-
ﬁcation system. In: Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition Workshops, pp. 98–104 (2013)
11. Chingovska, I., Yang, J., Lei, Z., Yi, D., Li, S.Z., Kahm, O., Glaser, C., Damer, N., Kuijper,
A., Nouak, A., et al.: The 2nd competition on counter measures to 2D face spooﬁng attacks.
In: International Conference on Biometrics (ICB) (2013)
12. Dantcheva, A., Chen, C., Ross, A.: Can facial cosmetics affect the matching accuracy of face
recognition systems? In: Biometrics: Theory, Applications and Systems (BTAS), 2012 IEEE
Fifth International Conference on, pp. 391–398. IEEE (2012)
13. Erdogmus, N., Marcel, S.: Spooﬁng face recognition with 3D masks. IEEE Transactions on
Information Forensics and Security 9(7), 1084–1097 (2014)
14. Fierrez, J., Morales, A., Vera-Rodriguez, R., Camacho, D.: Multiple classiﬁers in biometrics.
Part 1: Fundamentals and review. Information Fusion (2018)
15. de Freitas Pereira, T., Anjos, A., De Martino, J.M., Marcel, S.: Can face anti-spooﬁng coun-
termeasures work in a real world scenario? In: International Conference on Biometrics (ICB),
pp. 1–8 (2013)
16. Galbally, J., Marcel, S., Fierrez, J.: Biometric antispooﬁng methods: A survey in face recog-
nition. IEEE Access 2, 1530–1552 (2014)
17. Galbally, J., Marcel, S., Fierrez, J.: Image quality assessment for fake biometric detection:
Application to iris, ﬁngerprint, and face recognition. IEEE transactions on image processing
23(2), 710–724 (2014)
18. Galbally, J., Satta, R.: Three-dimensional and two-and-a-half-dimensional face recognition
spooﬁng using three-dimensional printed models. IET Biometrics 5(2), 83–91 (2016)
19. Garcia, C.: Utilización de la firma electrónica en la Administración española IV: Identidad y firma digital. El DNI electrónico. Administración electrónica y procedimiento administrativo
20. Gipp, B., Beel, J., Rössling, I.: ePassport: The World's New Electronic Passport. A Report about the ePassport's Benefits, Risks and its Security. CreateSpace (2007)
21. Gomez-Barrero, M., Galbally, J., Fierrez, J., Ortega-Garcia, J.: Multimodal biometric fusion:
a study on vulnerabilities to indirect attacks. In: Iberoamerican Congress on Pattern Recogni-
tion, pp. 358–365. Springer (2013)
22. Gonzalez-Sosa, E., Vera-Rodriguez, R., Fierrez, J., Patel, V.: Exploring Body Shape from
mmW Images for Person Recognition. IEEE Transactions on Information Forensics and Se-
curity 12(9), 2078–2089 (2017)
23. Goodin, D.: Get your german interior ministers ﬁngerprint here. The Register 30 (2008)
24. Hadid, A., Evans, N., Marcel, S., Fierrez, J.: Biometrics systems under spooﬁng attack: an
evaluation methodology and lessons learned. IEEE Signal Processing Magazine 32(5), 20–30
25. Hernandez-Ortega, J., Fierrez, J., Morales, A., Tome, P.: Time Analysis of Pulse-based Face Anti-Spoofing in Visible and NIR. In: IEEE CVPR Computer Society Workshop on Biometrics (2018)
26. Intel: (2017). https://software.intel.com/realsense
27. International Biometric Group and others: Biometrics Market and Industry Report 2009-2014
28. ISO: Information Technology Security Techniques Security Evaluation of Biometrics,
ISO/IEC Standard ISO/IEC 19792:2009, 2009. International Organization for Standardiza-
tion (2009). URL https://www.iso.org/standard/51521.html
29. ISO: Information technology – Biometric presentation attack detection – Part 1: Framework.
International Organization for Standardization (2016). URL https://www.iso.org/
30. Jain, A.K., Li, S.Z.: Handbook of face recognition. Springer (2011)
31. Kim, J., Choi, H., Lee, W.: Spoof detection method for touchless ﬁngerprint acquisition appa-
ratus. Korea Patent 1(054), 314 (2011)
32. Kim, Y., Na, J., Yoon, S., Yi, J.: Masked fake face detection using radiance measurements.
JOSA A 26(4), 760–766 (2009)
33. Kim, Y., Yoo, J.H., Choi, K.: A motion and similarity-based fake detection method for bio-
metric face recognition systems. IEEE Transactions on Consumer Electronics 57(2), 756–762
34. Kose, N., Dugelay, J.L.: On the vulnerability of face recognition systems to spoofing mask attacks. In: International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2357–2361. IEEE (2013)
35. Lagorio, A., Tistarelli, M., Cadoni, M., Fookes, C., Sridharan, S.: Liveness detection based
on 3D face shape analysis. In: International Workshop on Biometrics and Forensics (IWBF).
36. Li, X., Komulainen, J., Zhao, G., Yuen, P.C., Pietikäinen, M.: Generalized face anti-spoofing by detecting pulse from face videos. In: 23rd International Conference on Pattern Recognition (ICPR), pp. 4244–4249. IEEE (2016)
37. Liu, S., Yang, B., Yuen, P.C., Zhao, G.: A 3D Mask Face Anti-spooﬁng Database with Real
World Variations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition Workshops, pp. 100–106 (2016)
38. Newman, L.H.: (2016). https://www.wired.com/2016/08/hackers-trick-facial-recognition-
39. Nguyen, D., Bui, Q.: Your face is NOT your password. BlackHat DC (2009)
40. Ortega-Garcia, J., Fierrez, J., Alonso-Fernandez, F., Galbally, J., Freire, M.R., Gonzalez-
Rodriguez, J., Garcia-Mateo, C., Alba-Castro, J.L., Gonzalez-Agulla, E., Otero-Muras, E.,
et al.: The Multiscenario Multienvironment Biosecure Multimodal Database (BMDB). IEEE
Transactions on Pattern Analysis and Machine Intelligence 32(6), 1097–1111 (2010)
41. Pan, G., Wu, Z., Sun, L.: Liveness detection for face recognition. In: Recent advances in face
recognition. InTech (2008)
42. Peixoto, B., Michelassi, C., Rocha, A.: Face liveness detection under bad illumination condi-
tions. In: International Conference on Image Processing (ICIP), pp. 3557–3560 (2011)
43. Ross, A.A., Nandakumar, K., Jain, A.K.: Handbook of multibiometrics, vol. 6. Springer Sci-
ence & Business Media (2006)
44. da Silva Pinto, A., Pedrini, H., Schwartz, W., Rocha, A.: Video-based face spooﬁng detection
through visual rhythm analysis. In: SIBGRAPI Conference on Graphics, Patterns and Images,
pp. 221–228 (2012)
45. Smith, D.F., Wiliem, A., Lovell, B.C.: Face recognition on consumer devices: Reﬂections on
replay attacks. IEEE Transactions on Information Forensics and Security 10(4), 736–745
46. Sun, L., Huang, W., Wu, M.: TIR/VIS correlation for liveness detection in face recognition.
In: International Conference on Computer Analysis of Images and Patterns, pp. 114–121.
47. Tan, X., Li, Y., Liu, J., Jiang, L.: Face liveness detection from a single image with sparse low
rank bilinear discriminative model. Computer Vision–ECCV pp. 504–517 (2010)
48. Turk, M.A., Pentland, A.P.: Face recognition using eigenfaces. In: Computer Society Confer-
ence on Computer Vision and Pattern Recognition (CVPR), pp. 586–591 (1991)
49. Wu, H.Y., Rubinstein, M., Shih, E., Guttag, J., Durand, F., Freeman, W.: Eulerian video mag-
niﬁcation for revealing subtle changes in the world. ACM Transactions on Graphics 31(4)
50. Yang, J., Lei, Z., Liao, S., Li, S.Z.: Face liveness detection with component dependent de-
scriptor. In: International Conference on Biometrics (ICB), pp. 1–6 (2013)
51. Zhang, D., Ding, D., Li, J., Liu, Q.: PCA based extracting feature using fast fourier transform
for facial expression recognition. Transactions on Engineering Technologies pp. 413–424
52. Zhang, Z., Yan, J., Liu, S., Lei, Z., Yi, D., Li, S.Z.: A face antispooﬁng database with diverse
attacks. In: International Conference on Biometrics (ICB), pp. 26–31 (2012)