The Glasgow Face Matching Test


Abstract

We describe a new test for unfamiliar face matching, the Glasgow Face Matching Test (GFMT). Viewers are shown pairs of faces, photographed in full-face view but with different cameras, and are asked to make same/different judgments. The full version of the test comprises 168 face pairs, and we also describe a shortened version with 40 pairs. We provide normative data for these tests derived from large subject samples. We also describe associations between the GFMT and other tests of matching and memory. The new test correlates moderately with face memory but more strongly with object matching, a result that is consistent with previous research highlighting a link between object and face matching, specific to unfamiliar faces. The test is available free for scientific use.
© 2010 The Psychonomic Society, Inc.
Traditional research on face perception has tended to
focus on two aspects of the problem: recognition of fa-
miliar faces and memory for unfamiliar faces. Theoretical
models, such as that offered by Bruce and Young (1986),
have been used for understanding familiar face recognition
in typical observers and neuropsychologically impaired
patients. Research on face memory, on the other hand, has
tended to be led by difficult forensic problems, such as
eyewitness testimony (e.g., Lane & Meissner, 2008; Mal-
pass & Devine, 1981; Searcy, Bartlett, & Memon, 1999;
Wells & Olson, 2003).
In recent years, it has become clear that unfamiliar face
matching is a problem worthy of study in its own right. At
first glance, this might appear to be a simple problem, but
recent research has shown that matching unfamiliar faces
is, in fact, rather difficult, even when high-quality images
are used. Bruce et al. (1999) presented viewers with 1-in-
10 arrays, in which a photo of a young man was accompa-
nied by 10 possible matches. All the images were shown
in a very similar pose (full face) and in good lighting and
had been taken on the same day, eliminating transient dif-
ferences due to hairstyle, weight, and so forth. Crucially,
target and array photos were taken with different cameras
(one a high-quality video camera and one a studio film
camera). Under these seemingly optimal conditions, with
no time constraints, and with instructions emphasizing ac-
curacy, viewers performed surprisingly poorly. They were
accurate only 70% of the time, for both target-present and
target-absent arrays. This basic finding has been repli-
cated many times and has been extended to situations in
which only target-present arrays were shown, reducing the
problem to a 1-in-10 forced choice, and in which viewers
scored only 80% accurate (Bruce, Henderson, Newman,
& Burton, 2001). These accuracy rates have also been rep-
licated using an entirely different stimulus set, Egyptian
young men as targets, with Egyptian students as viewers
(Megreya & Burton, 2008).
In subsequent studies, researchers have used simple
pairs of faces to measure matching ability (Clutterbuck &
Johnston, 2002; Megreya & Burton, 2006, 2007). Under
these circumstances, similarly poor matching rates have
been observed. Typically, people have found it surpris-
ingly difficult to match two images of an unfamiliar per-
son, making between 10% and 25% errors, depending on
the particular stimulus sets that were used. These error
rates have never been experienced in matching familiar
faces, where ceiling levels of performance have been ob-
served (see Hancock, Bruce, & Burton, 2000). Indeed, a
series of experiments by Clutterbuck and Johnston (2002,
2004, 2005) showed that the ability to match images of
faces was a very good indicator of the viewer’s level of
familiarity with a face and improved predictably with in-
creased exposure to the person depicted.
All the studies listed above employed photo-to-photo
matching, rather than live-person-to-photo matching.
There are a number of security-related situations in which
photo-to-photo matching is important—for example,
when one tries to match an image of a suspect to a sur-
veillance camera image from a crime scene. However, it
is also becoming increasingly common to ask viewers to
match photos to live faces. Matching a photo to a face is
required not only for passport control, but also in more
commonplace settings, such as verifying one’s age in
order to buy alcohol. Two studies have recently demon-
The Glasgow Face Matching Test
A. Mike Burton and David White
University of Glasgow, Glasgow, Scotland
and
Allan McNeill
Glasgow Caledonian University, Glasgow, Scotland
Behavior Research Methods
2010, 42 (1), 286-291
(Benton, Hamsher, Varney, & Spreen, 1983). This test re-
quires participants to match faces across different views.
However (and crucially), all images are taken with the
same camera. The test we present here tackles a different
problem: matching two images in the same view but taken
with different cameras. No existing test of face processing
incorporates this task, perhaps because it has only rela-
tively recently become clear that it is nontrivial. More-
over, the issue of camera change is an important one in
forensic settings and in everyday verification of photo ID.
We have argued that it introduces important variability
that discriminates familiar from unfamiliar face process-
ing (Burton, Jenkins, Hancock, & White, 2005; Jenkins
& Burton, 2008).
To summarize, the test of face matching described in
the remainder of this article is intended to complement
existing tests of face processing, rather than to replace
any existing tests. It measures performance on a task that
is not trivially easy and has been shown to correlate well
with levels of familiarity. Furthermore, it mimics a situ-
ation that is commonly encountered in security settings:
how to match two unfamiliar face images in similar poses
but taken with different cameras.
Test Construction
To build a new database of faces, volunteers were re-
cruited through advertising posters in student recreation
areas of a university. Three hundred four individuals con-
tributed their time in exchange for a small payment. They
were 172 men and 132 women, with the mean age for men
being 22.9 years (SD = 6.7), and for women 23.2 years
(SD = 7.0). Over the course of a single session, each
volunteer was photographed in a variety of poses, using
two different digital cameras. Volunteers were also filmed
moving between poses and expressions, using a digital
video camera. Thus, for each volunteer, we have images
from three different capture devices taken on the same
day. This large database continues to expand with new vol-
unteers and is available from the authors on request (see
the Note for details).
The Glasgow Face Matching Test (GFMT) comprises
168 pairs of faces. For the construction of the test, only
strated that matching a live person to a photo is no easier
than matching two photos of the same person (Davis &
Valentine, 2009; Megreya & Burton, 2008). This suggests
that the psychological study of face matching addresses a
problem of practical, as well as theoretical, consequence.
There are a number of tests of face recognition ability
already available. However, many of these measure face
memory rather than matching—for example, the Recog-
nition Memory Test for faces (Warrington, 1984) and the
Cambridge Face Memory Test (Duchaine & Nakayama,
2006). Of the available instruments for measuring match-
ing ability, the Benton test is the most commonly used
Figure 1. Example test items from the Glasgow Face Matching
Test. (A) Mismatching pair. (B) Matching pair.
Figure 2. Cumulative frequency of accuracies for the Glasgow Face Matching Test.
Overall accuracy ranged from 62%–100%, with a mean
of 89.9% (SD = 7.3). Performance was slightly better on
matching items (92%) than on mismatching items (88%),
indicating a small response bias to respond same. Couched
as detection measures, this gives a d′ value of 2.91, with a
criterion of −0.09. With this large sample size, criterion is
significantly below zero [t(299) = 4.69, p < .01]. There was
no correlation between accuracy and age of viewer (r = .09),1
and there was no performance difference between men and
women [male 89%, female 90.4%; t(298) = 1.53, n.s.]. In
order to measure the internal reliability of the test, we
examined the split-half association by correlating the
subjects' performance on the first and second halves of the
test items. Association was high, with r = .81.
Figure 2 gives the cumulative distribution of accuracies
and may easily be used to establish the norm of any score
against this population. As one might predict for a test
of this kind, the distribution is negatively skewed (skewness = −1.33,
p < .05). However, it is interesting to note
that performance is far from perfect. Recall that the test re-
quires the observer to match two photos of a person taken
minutes apart, in the same pose, with two high-quality
cameras. If we consider that the median performance is
92%, this means that half the sample make at least 8%
errors—that is, 13 items wrong across the 168 items in the
test. Similarly, the poorest 25% made at least 24 matching
errors. In a test with no time limits, in which accuracy is
emphasized, this is perhaps surprising, although it is con-
sistent with our previous work showing rather poor levels
of performance on unfamiliar face matching.
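Reading a norm off the cumulative distribution in Figure 2 amounts to finding the percentage of the normative sample scoring at or below a given test score. A minimal sketch of that lookup, using a placeholder score list rather than the published norm data:

```python
from bisect import bisect_right

def percentile_rank(score, norm_scores):
    """Percentage of the normative sample scoring at or below `score`."""
    ranked = sorted(norm_scores)
    return 100 * bisect_right(ranked, score) / len(ranked)

# Placeholder normative sample (percent-correct scores), for illustration:
norms = [72, 80, 85, 88, 90, 92, 94, 96, 98, 100]
print(percentile_rank(90, norms))  # half of this toy sample scored 90 or below
```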
Finally, we note that the mean time to complete the self-
paced test was 15 min and that there was a small, but reli-
able, positive correlation between overall accuracy and
time taken (r = .177, p < .01).
The matching test described above reveals substan-
tial individual differences in a task that, at first glance,
might appear relatively easy. In order to establish whether
this variation reflects more general variation in visual-
processing abilities, we also examined our subjects’ per-
formance on three more commonly used tests of visual
matching and memory. Each of the 300 subjects who took
part in the study above also contributed measures on three
further tests: (1) recognition memory for faces, (2) the
Matching Familiar Figures Test (MFFT), and (3) a visual
short-term memory test.
full-face poses were used, in which volunteers displayed a
neutral expression. For each person, we used the full-face
image from one of the still cameras (Camera 1: Fujifilm
FinePix 0800Zoom, 6 megapixel) and a frame in the same
pose taken from the video camera (Camera 2: Panasonic
NV-DS29B DS29). All images were captured against a
background screen, from a distance of 90 cm. The fixed
sequence of the photographic session ensured that these
two images were taken roughly 15 min apart.
Following image capture, all the photos were edited to
remove the background and any visible clothing. Images
were cropped neatly around the head, using graphical soft-
ware, and were resized to 350 pixels width, before being
stored in grayscale at a resolution of 72 ppi. When pairs
of stimuli were constructed for the test, faces were posi-
tioned in such a way that the horizontal distance between
the bridge of the nose in the two images was 500 pixels.
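The stimulus layout described above (350-pixel-wide faces with nose bridges 500 pixels apart) reduces to a simple coordinate calculation. This sketch, with an assumed canvas width and the simplifying assumption that the bridge of the nose sits at each image's horizontal centre, illustrates the geometry; it is not the authors' actual preparation script.

```python
def pair_layout(canvas_width, face_width=350, bridge_gap=500):
    """Return left-edge x coordinates for two face images whose nose
    bridges (assumed at each image's horizontal centre) sit
    `bridge_gap` pixels apart, centred on the canvas."""
    centre = canvas_width / 2
    left_bridge = centre - bridge_gap / 2
    right_bridge = centre + bridge_gap / 2
    # Left edge of each image = bridge x minus half the image width.
    return (left_bridge - face_width / 2, right_bridge - face_width / 2)

print(pair_layout(1024))  # x-offsets for the left and right face images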
Of the 168 test pairs, half are same-face trials, in which
two images of the same person are presented side by side.
These 84 people are also used in different-face trials, such
that one of the person’s images is presented alongside a
similar face from the database. The nonmatching faces
for these trials were chosen on the basis of a pilot study in
which pairwise similarity measures were generated using
a sorting technique (see Bruce et al., 1999). The foils for
these trials were the faces most similar to each of the tar-
get identities. For different trials, as with same trials, the
two photos always came from different cameras. Figure 1
shows examples of face pairs.
Performance on the Test
Following initial pilots, the GFMT was
presented to 300 subjects. This was a relatively hetero-
geneous sample, recruited through advertisements in the
local media. There were 120 males and 180 females. Mean
age was 30.8 years, with a range of 18–80 and a standard
deviation of 14.
Figure 3. Example array from the visual short-term memory test.
Table 1
Performance on Four Tests of Matching and Memory for Faces

                                 GFMT   Recognition Memory   MFFT   Visual STM
Mean (% correct)                 89.9   62.4                 66.3   62.9
SD                                7.3   10.0                 21.9    9.4
tests in previous research using a lineup task (Megreya &
Burton, 2006).
3. Visual short-term memory for objects test. For this
test, circular visual arrays of objects were constructed.
Forty-five common objects were taken from the database
of Rossion and Pourtois (2004). These were used to create
six circular arrays of 5, 6, 7, 8, 9, and 10 objects. An ex-
ample is given in Figure 3. Testing followed the procedure
described by Miller (1956), in his highly influential ac-
count of memory span. The subjects were presented with
each array in turn, starting with the array with the fewest
objects (5 items) and ending with the array with the most
objects (10 items). Each array was presented on the screen
for 5 sec, after which the subjects were asked to write as
many of the items as they could remember on a sheet of
paper provided to them.
Results and Discussion
Table 1 shows the overall performance levels for the
GFMT and the three tests described here. Table 2 shows
the association between the tests (Pearson’s r), as well as
the correlation between performance on the test and the
subjects’ ages.
There are a number of points to note from these data.
First, the highest correlation with the GFMT is the MFFT.
This is consistent with the notion that unfamiliar faces
tend to be processed as general visual objects, without
recruiting the perceptual processes that lead to very ro-
bust performance with familiar faces (e.g., Hancock et al.,
1. Recognition memory for faces. For this test, a fur-
ther 40 people’s faces from the same database were used
(20 men and 20 women). Images were prepared in exactly
the same way as described above, were presented to the
subjects in grayscale, at the same size and resolution as
those in the GFMT, and were cropped of background in
the same way.
To test recognition memory, the subjects were shown
images of 20 of the faces, all taken with Camera 1. The
subjects sat in front of a computer screen and were in-
structed to pay close attention to the faces, since they
would be asked to identify them later. The images ap-
peared in sequence for 2 sec each, preceded by a fixation
cross for 750 msec. Once all 20 images had been
presented, a message appeared instructing the subjects to
wait for further instructions. After a 20-sec interval, test
phase instructions appeared. During test, the viewers were
presented with 40 faces, all taken with Camera 2 (i.e., not
the same camera as that used for images in the first phase).
They were told that they should decide, independently for
each face, whether it had appeared in the earlier phase.
Testing was self-paced.
2. Matching Familiar Figures Test. The MFFT is a com-
mon technique for measuring cognitive style, impulsivity
versus reflexivity (Kagan, 1965). The test consists of 20
standard line drawings of common objects (targets) and
six variants of each object, one of which is identical to the
target image. Performance on this test has been shown to
correlate with performance on unfamiliar-face-matching
Table 2
Correlations Between Tests: Pearson's r

                                 Recognition Memory   MFFT     Visual STM   Age
GFMT                             .285**               .420**   .050         .090
Recognition memory for faces                          .158*    .186*        −.209**
Matching Familiar Figures Test                                 .176*        −.023
Visual STM                                                                  −.177*

Note—STM, short-term memory; GFMT, Glasgow Face Matching Test.
*p < .01. **p < .001.
Figure 4. Cumulative frequency of accuracies for the short version of the Glasgow Face Matching Test.
We have presented a new test for face matching. Un-
like other available tests, the GFMT presents two images
taken in the same pose, minutes apart, with high-quality
cameras. Despite these apparently optimal conditions, this
task is not trivially easy, and we have demonstrated that
there is large interindividual variation in performance.
We note that modern security measures mean that peo-
ple are commonly asked to prove their identity with a
photograph. Correspondingly, there are very many people
whose daily activity requires them to confirm somebody’s
identity in this way. Previous research has established that
unfamiliar face matching is a surprisingly difficult task,
and we have recently demonstrated that matching a live
person to their photo is no easier than matching two pho-
tos (Megreya & Burton, 2008). With this in mind, we have
constructed a test that does not make the task artificially
difficult—for example, by covering people’s hair or re-
quiring a match across different poses. Instead, we have
examined a commonplace match, two full-face views in
good lighting, in an attempt to mimic situations in which
one is trying to optimize the accuracy of a photo ID, not
to make it difficult.
Given the substantial individual differences in face
matching demonstrated here, we anticipate that one po-
tential use of the test may be in personnel selection for
particular tasks requiring face matching. There is clearly
also a potential for use in training: Since almost no one we
tested showed perfect performance, it would be interesting
to use difficult items in training regimes. There is also a
clear potential for neuropsychological use of the test.
This work was supported by Grant 000-23-1348 from the ESRC to
A.M.B. and A.M. The full GFMT and the short version are available for
download from the authors' Web site. The test
is free for research use, and the download package includes instructions,
scoring sheets, and the norm data presented here. All those who volun-
teered use of their faces for this test have provided written permission
for the images to be used for any research purposes, including scientific
publication. The full database of images (Glasgow Unfamiliar Face Da-
tabase) from which the test was derived is available at the same site.
Correspondence concerning this article should be addressed to A. M.
Burton, Department of Psychology, University of Glasgow, Glasgow
G12 8QQ, Scotland.
Benton, A. L., Hamsher, K. S., Varney, N. R., & Spreen, O. (1983).
Contributions to neuropsychological assessment. New York: Oxford
University Press.
Bruce, V., Henderson, Z., Greenwood, K., Hancock, P., Burton,
A. M., & Miller, P. (1999). Verification of face identities from im-
ages captured on video. Journal of Experimental Psychology: Ap-
plied, 5, 339-360.
Bruce, V., Henderson, Z., Newman, C., & Burton, A. M. (2001).
Matching identities of familiar and unfamiliar faces caught on CCTV
images. Journal of Experimental Psychology: Applied, 7, 207-218.
Bruce, V., & Young, A. W. (1986). Understanding face recognition.
British Journal of Psychology, 77, 305-327.
Burton, A. M., Jenkins, R., Hancock, P. J. B., & White, D. (2005).
Robust representations for face recognition: The power of averages.
Cognitive Psychology, 51, 256-284.
2000; Megreya & Burton, 2006). Note that the high as-
sociation between the GFMT and MFFT occurs despite
some large differences in the format of the tests. Notably,
the GFMT involves a yes/no response to pairs of faces,
whereas the MFFT involves a lineup of six options. Fur-
thermore, the MFFT contains only target-present items; a
match always exists. Nevertheless, there is a striking as-
sociation here.
There is a smaller association between face matching
and face memory, using these tests. Nevertheless, there is
a substantial effect here, suggesting some shared process-
ing. Note that the recognition memory test for unfamiliar
faces is very difficult (M = 62%, with chance being 50%),
in contrast to many similar tests in the literature that use
the identical image at learning and at test. This inevitably
skews the memory data positively and, therefore, may lead
to an underestimation of the correlations with other meas-
ures. Nevertheless, it is noticeable that this is the only
measure that correlates with all the other tests. Perhaps
more interesting is the pattern of associations between the
tests and the subjects’ ages. It is clear that both tests of
memory show a decline in performance with age. This is
the case despite large differences in style between the two
tests of memory (faces or objects, delayed vs. immediate
memory). However, the association with age is completely
absent in the two rather different tests of matching. This
observation appears interesting and will be followed up in
future research.
The full GFMT comprises 168 pairs of faces and is
self-paced. We anticipated that some users would prefer
a briefer test, and so we developed a shortened version
comprising only 40 face pairs. Items for this test were
selected as being the most difficult items from the full
version. Using data from the test of 300 subjects above,
the 20 matching and 20 nonmatching items were chosen
that had resulted in the most errors. Scores on this subset
of items correlated very highly with overall scores on the
full test (r = .91), making this a potentially useful version
of the test.
The short version of the GFMT was tested on 194 new
volunteers, none of whom had taken part in the studies
described above. These were young adult subjects with a
mean age of 26 years (range, 18–46). There were 121 men
and 73 women. The test was run self-paced and typically
took between 3 and 4 min to complete, making it appreci-
ably shorter than the full version.
Mean performance on the short test was 81.3%, with
SD = 9.7 and range = 51%–100%. This is substantially
lower than performance on the full test, confirming the
choice of difficult items. Mean performance on match
and mismatch trials was 79.8% and 82.5%, respectively.
Figure 4 shows the cumulative distribution of accuracies
and may easily be used to establish the norm of any score
against this population. The test is significantly negatively
skewed (skewness = −0.45, p < .05), although rather less
so than the full version.
Megreya, A. M., & Burton, A. M. (2007). Hits and false positives in
face matching: A familiarity-based dissociation. Perception & Psy-
chophysics, 69, 1175-1184.
Megreya, A. M., & Burton, A. M. (2008). Matching faces to photo-
graphs: Poor performance in eyewitness memory (without the mem-
ory). Journal of Experimental Psychology: Applied, 14, 364-372.
Miller, G. A. (1956). The magical number seven, plus or minus two:
Some limits on our capacity for processing information. Psychologi-
cal Review, 63, 81-97.
Rossion, B., & Pourtois, G. (2004). Revisiting Snodgrass and Vander-
wart’s object set: The role of surface detail in basic-level object recog-
nition. Perception, 33, 217-236.
Searcy, J. H., Bartlett, J. C., & Memon, A. (1999). Age differences in
accuracy and choosing in eyewitness identification and face recogni-
tion. Memory & Cognition, 27, 538-552.
Warrington, E. K. (1984). Recognition Memory Test. Windsor, U.K.: NFER-Nelson.
Wells, G. L., & Olson, E. (2003). Eyewitness identification. Annual
Review of Psychology, 54, 277-295.
1. Previous research (Searcy et al., 1999) suggests that adult age may
be more strongly associated with false positives than with hits. However,
that association was not present here: Correlations with age were
r = .197 and −.023 for hits and false positives, respectively.
(Manuscript received April 7, 2009;
revision accepted for publication May 24, 2009.)
Clutterbuck, R., & Johnston, R. A. (2002). Exploring levels of face
familiarity by using an indirect face-matching measure. Perception,
31, 985-994.
Clutterbuck, R., & Johnston, R. A. (2004). Matching as an index of
face familiarity. Visual Cognition, 11, 857-869.
Clutterbuck, R., & Johnston, R. A. (2005). Demonstrating how un-
familiar faces become familiar using a face matching task. European
Journal of Cognitive Psychology, 17, 97-116.
Davis, J., & Valentine, T. (2009). CCTV on trial: Matching video im-
ages with the defendant in the dock. Applied Cognitive Psychology,
23, 482-505.
Duchaine, B., & Nakayama, K. (2006). The Cambridge Face Memory
Test: Results for neurologically intact individuals and an investigation
of its validity using inverted face stimuli and prosopagnosic partici-
pants. Neuropsychologia, 44, 576-585.
Hancock, P. J. B., Bruce, V., & Burton, A. M. (2000). Recognition of
unfamiliar faces. Trends in Cognitive Sciences, 4, 330-337.
Jenkins, R., & Burton, A. M. (2008). 100% accuracy in automatic face
recognition. Science, 319, 435.
Kagan, J. (1965). Reflection-impulsivity and reading ability in primary
grade children. Child Development, 36, 609-628.
Lane, S. M., & Meissner, C. A. (2008). A “middle road” approach
to bridging the basic–applied divide in eyewitness identification re-
search. Applied Cognitive Psychology, 22, 779-787.
Malpass, R. S., & Devine, P. G. (1981). Eyewitness identification:
Lineup instructions and the absence of the offender. Journal of Ap-
plied Psychology, 66, 482-489.
Megreya, A. M., & Burton, A. M. (2006). Unfamiliar faces are not
faces: Evidence from a matching task. Memory & Cognition, 34, 865-876.
... In the face matching task, observers are simultaneously presented with two faces and have to decide whether these faces depict the same (i.e., identity matches) or two different identities (i.e., identity mismatches) (Bindemann, 2021;Bruce et al., 2001;Burton et al., 2010;Estudillo & Bindemann, 2014;Fysh & Bindemann, 2017;Johnston & Bindemann, 2013). At a theoretical level, the face matching task has contributed to our understanding of different face processing effects, including holistic processing in the perception of faces (Hole, 1994), the role of pictorial and identity codes during face identification (Menon et al., 2015), the cognitive locus of the otherrace effect (Kokje et al., 2018;Megreya et al., 2011) and the effect of changing the viewpoint on face identification performance (Estudillo & Bindemann, 2014;Kramer & Reynolds, 2018), among others. ...
... One of these sources is related to the observers' actual skills to match faces (i.e., the so-called resource limit account). Indeed, research has shown that unfamiliar face matching skills present substantial individual differences across observers, with some individuals performing at chance levels while others performing at ceiling levels (Bruce et al., 2018;Burton et al., 2010;Estudillo & Bindemann, 2014;McCaffery et al., 2018). Thus, this account highlights the importance of using objective face identification tasks during personnel selection for those applied settings whereby the identification of others is demanded (Bobak et al., 2016;Estudillo, 2021;Fysh et al., 2020;Ramon et al., 2019;Robertson et al., 2016). ...
... One hundred and twenty pairs of female and male Caucasian faces from the Glasgow Unfamiliar Face Database (Burton et al., 2010) were used in this study. One face photograph in each pair was taken with a high-quality digital camera, while the other was a still frame from high-quality video. ...
Full-text available
Although the positive effects of congruency between stimuli are well replicated in face memory paradigms, mixed findings have been found in face matching. Due to the current COVID-19 pandemic, face masks are now very common during daily life outdoor activities. Thus, the present study aims to further explore congruency effects in matching faces partially occluded by surgical masks. Observers performed a face matching task consisting of pairs of faces presented in full view (i.e., full-view condition), pairs of faces in which only one of the faces had a mask (i.e., one-mask condition), and pairs of faces in which both faces had a mask (i.e., two-mask condition). Although face masks disrupted performance in identity match and identity mismatch trials, in match trials, we found better performance in the two-mask condition compared to the one-mask condition. This finding highlights the importance of congruency between stimuli on face matching when telling faces together.
... In turn, highsimilarity mismatches are correspondingly more likely to be classed as the same person than low-similarity mismatches (Papesh et al., 2018;Rice et al., 2013). Finally, unfamiliar-face matching also correlates with performance in other visual comparison tasks that require observers to detect similarities or discrepancies between non-face objects (Burton et al., 2010;Megreya & Burton, 2006). ...
... Observers are also good at matching computer-generated faces that vary by only one feature (Ramon & Rossion, 2010), or at recognising changes to individual features in newly learned schematic faces (Tanaka & Farah, 1993). Unfamiliar-face matching also correlates with object-matching tests that require identification of specific features (Burton et al., 2010;Megreya & Burton, 2006). There is therefore converging evidence that the perception of features is important for the identity processing of unfamiliar faces. ...
... This might be appropriate for match pairs for which, as these depict the same person, convergence in similarity ratings across features should be reasonably high. The faces in mismatches, however, typically bear some resemblance in appearance, while also depicting different people (see Burton et al., 2010;Fysh & Bindemann, 2018;Tummon et al., 2019). Thus, these face pairs might be more likely to vary in the facial similarity information that is provided across features. ...
Full-text available
Many security settings rely on the identity matching of unfamiliar people, which has led this task to be studied extensively in Cognitive Psychology. In these experiments, observers typically decide whether pairs of faces depict one person (an identity match) or two different people (an identity mismatch). The visual similarity of the to-be-compared faces must play a primary role in how observers accurately resolve this task, but the nature of this similarity-accuracy relationship is unclear. The current study investigated the association between accuracy and facial similarity at the level of individual items (Experiment 1 and 2) and facial features (Experiment 3 and 4). All experiments demonstrate a strong link between similarity and matching accuracy, indicating that this forms the basis of identification decisions. At a feature level, however, similarity exhibited distinct relationships with match and mismatch accuracy. In matches, similarity information was generally shared across the features of a face pair under comparison, with greater similarity linked to higher accuracy. Conversely, features within mismatching face pairs exhibited greater variation in similarity information. This indicates that identity matches and mismatches are characterised by different similarity profiles, which present distinct challenges to the cognitive system. We propose that these identification decisions can be resolved through the accumulation of convergent featural information in matches and the evaluation of divergent featural information in mismatches.
... Humans' ability to process faces is heritable (Wilmer et al., 2010) and varies widely across people, ranging from very limited to highly developed (Burton et al., 2010; Russell et al., 2009; Stantic et al., 2021). A reduced ability to identify faces can have negative consequences for psychological health. ...
Episodic memory concerns the re-experience of past personal events anchored in their encoding context. These episodic memories are not fixed: their content is influenced by the sensory modality of the recall cue. For example, memories evoked by smells are known to be less frequent, more surprising, vivid, emotional, and older than memories evoked by images or words. These phenomena are commonly explained by the close and direct anatomical links that exist between the primary olfactory, memory, and emotional brain structures. However, odors have rarely been compared to cues that also possess privileged links to memory, such as music and faces, both behaviorally and functionally. This thesis has two main objectives: 1) To identify and characterize the particularities of episodic memory attributable to the sensory modality of the recall cue (Studies 1 and 2); 2) To study the dynamics of the neural networks underlying episodic recall and more specifically the interactions that are modulated differently according to the sensory modality of the recall cue (Study 3). To test the hypothesis that emotion would be an essential factor in the particularity of olfactory cues to recall a memory, the secondary aim of this thesis is to evaluate the differential effect of emotion of the episodic recall cue as a function of its sensory modality. To meet our objectives and to allow for the study of episodic memory in the most ecological conditions possible, we have developed a non-immersive virtual reality protocol that can be declined in several versions allowing the encoding and recall of complex and multisensory episodes experienced in the laboratory. By using neutral stimuli, the first study showed that the sensory modality of the recall cue influenced recognition and episodic memory performance. 
Faces were very well recognized and were very good cues for episodic memory; smells were less well recognized, but were good cues for episodic memory; musical excerpts, although very well recognized, were not good cues for episodic memory. By using emotional stimuli, the second study confirmed the previous results and clarified the effects of emotion on episodic memory performance by showing that the emotional valence of the recall cue globally favors all memory stages. The most pleasant and unpleasant stimuli, compared to the most neutral ones, were associated with better memory performance. In addition, the pronounced effectiveness of odors in evoking episodic recall was associated with participants' individual motivation to resample the stimulus. This study also highlighted the importance of the ecological relevance of the stimuli, with the virtualization of faces suppressing their superiority as a memory cue in comparison to odors and music. The third study, still in progress, confirms the memory strength of odors, when they are pleasant, for recalling the different dimensions of an episode. Preliminary data suggest that musical and olfactory cues in episodic memory activate autobiographical memory networks. In conclusion, our studies reveal an effect of the sensory modality of the recall cue on episodic recall and suggest that this effect is associated with the emotion carried by these cues. Odors appear to be singular recall cues, associated with average recognition performance but favoring accurate recollection of episodic memories. This recollection is driven by the motivation the odors have generated. Music, although very well recognized, leads to less correct recall of associated episodic dimensions. Finally, visual stimuli seem to differ according to their ecological relevance, with more efficient cueing and more complete memory being associated with more ecologically relevant stimuli.
... Even unmasked, correctly identifying unfamiliar faces is surprisingly difficult (Bruce et al., 1999; Kemp et al., 1997). When asked to decide whether two simultaneously presented faces show the same person or two different people, the average observer makes errors on approximately 20% of trials under the most ideal circumstances, such as when the two photographs are taken on the same day in controlled studio settings (Burton et al., 2010). However, even slight differences in lighting (Hill & Bruce, 1996), viewpoint (Estudillo & Bindemann, 2014), or the distance between the camera and the model (Noyes & Jenkins, 2017), further impair unfamiliar face matching performance (Fysh & Bindemann, 2017b), as does the amount of time that has passed between capturing the two photographs (Megreya et al., 2013), or whether the images are shown in colour or greyscale (Bobak et al., 2019). ...
To slow the spread of COVID-19, many people now wear face masks in public. Face masks impair our ability to identify faces, which can cause problems for professional staff who identify offenders or members of the public. Here, we investigate whether performance on a masked face matching task can be improved by training participants to compare diagnostic facial features (the ears and facial marks)—a validated training method that improves matching performance for unmasked faces. We show this brief diagnostic feature training, which takes less than two minutes to complete, improves matching performance for masked faces by approximately 5%. A control training course, which was unrelated to face identification, had no effect on matching performance. Our findings demonstrate that comparing the ears and facial marks is an effective means of improving face matching performance for masked faces. These findings have implications for professions that regularly perform face identification.
... For instance, Freud et al. (2020) focused on understanding how a mask disrupts holistic processing, and Dhamecha et al. (2014) and Carragher and Hancock (2020) showed that masks impaired perceptual mechanisms that support the identification of faces in a matching task. However, although performance in a matching task is usually correlated with recognition memory for faces (Burton et al., 2010; Megreya & Burton, 2006), this task minimizes the contribution of memory mechanisms and increases reliance on perceptual mechanisms (Estudillo & Bindemann, 2014; Megreya & Burton, 2008). We believe that to better understand the processes involved in face recognition and identification, we should also attempt to understand how contexts in which a mask usually appears can modulate our memory. ...
Previous research has mostly approached face recognition and target identification by focusing on face perception mechanisms, but memory mechanisms also appear to play a role. Here, we examined how the presence of a mask interferes with the memory mechanisms involved in face recognition, focusing on the dynamic interplay between encoding and recognition processes. We approach two known memory effects: (a) matching study and test conditions effects (i.e., by presenting masked and/or unmasked faces) and (b) testing expectation effects (i.e., knowing in advance that a mask could be put on or taken off). Across three experiments using a yes/no recognition paradigm, the presence of a mask was orthogonally manipulated at the study and the test phases. All data showed no evidence of matching effects. In Experiment 1, the presence of masks either at study or test impaired the correct identification of a target. But in Experiments 2 and 3, in which the presence of masks at study or test was manipulated within participants, only masks presented at test-only impaired face identification. In these conditions, test expectations led participants to use similar encoding strategies to process masked and unmasked faces. Across all studies, participants were more liberal (i.e., used a more lenient criterion) when identifying masked faces presented at the test. We discuss these results and propose that to better understand how people may identify a face wearing a mask, researchers should take into account that memory is an active process of discrimination, in which expectations regarding test conditions may induce an encoding strategy that enables overcoming perceptual deficits.
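The "more liberal (i.e., more lenient) criterion" reported above is a standard signal detection measure. As a minimal sketch, with entirely hypothetical counts rather than data from the study, sensitivity (d') and criterion (c) can be computed from yes/no recognition counts like this:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute signal detection sensitivity (d') and criterion (c) from
    yes/no recognition counts, using a log-linear correction (adding 0.5
    to each cell) so rates of 0 or 1 never produce infinite z-scores."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts for illustration: masked test faces attract more
# "yes" responses overall (higher hit AND false-alarm rates).
d_unmasked, c_unmasked = sdt_measures(hits=35, misses=15,
                                      false_alarms=10, correct_rejections=40)
d_masked, c_masked = sdt_measures(hits=38, misses=12,
                                  false_alarms=20, correct_rejections=30)
print(f"unmasked: d'={d_unmasked:.2f}, c={c_unmasked:.2f}")
print(f"masked:   d'={d_masked:.2f}, c={c_masked:.2f}")
```

A negative c indicates a liberal bias (a general tendency to respond "yes"), so a lower c for masked test faces corresponds to the more lenient responding the abstract describes.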
Following traumatic brain injury in adulthood, Pierrette Sapey (PS) became suddenly unable to recognize the identity of people from their faces. Thanks to her remarkable recovery of general brain function, liveliness, and willingness to be tested, PS's case of prosopagnosia has been extensively studied for more than 20 years. This investigation includes hundreds of hours of behavioral data collection that provide information about the nature of human face identity recognition (FIR). Here a theory-driven extensive review of behavioral and eye movement recording studies performed with PS is presented (part I). The specificity of PS's recognition disorder to the category of faces, i.e., with preserved visual object (identity) recognition, is emphasized, arguing that isolating this impairment is necessary to define prosopagnosia, offering a unique window to understand the nature of human FIR. Studies performed with both unfamiliar and experimentally or naturally familiar faces show that PS, while being able to perceive both detailed diagnostic facial parts and a coarse global facial shape, can no longer build a relatively fine-grained holistic visual representation of a face, preventing its efficient individuation. Her mandatory part-by-part analytic behavior during FIR causes increased difficulties at extracting diagnostic cues from the crowded eye region of the face, but also from relative distances between facial parts and from 3D shape more than from surface cues. PS's impairment is interpreted here for the first time in terms of defective (access to) cortical memories of faces following brain damage, causing her impaired holistic perception of face individuality. Implications for revising standard neurofunctional models of human face recognition and evaluation of this function in neurotypical individuals are derived.
The ability to recognize someone's voice spans a broad spectrum, with phonagnosia at the low end and super-recognition at the high end. Yet there is no standardized test measuring an individual's ability to learn and recognize new voices from samples with speech-like phonetic variability. We have developed the Jena Voice Learning and Memory Test (JVLMT), a 22-min test based on item response theory and applicable across languages. The JVLMT consists of three phases in which participants (1) become familiarized with eight speakers, (2) revise the learned voices, and (3) perform a 3AFC recognition task, using pseudo-sentences devoid of semantic content. Acoustic (dis)similarity analyses were used to create items with various levels of difficulty. Test scores are based on 22 items which were selected and validated in two online studies with 232 and 454 participants, respectively. Mean accuracy in the JVLMT is 0.51 (SD = .18), with an empirical (marginal) reliability of 0.66. Correlational analyses showed high and moderate convergent validity with the Bangor Voice Matching Test (BVMT) and Glasgow Voice Memory Test (GVMT), respectively, and high discriminant validity with a digit span test. Four participants with potential super-recognition abilities and seven with potential phonagnosia were identified, who performed at least 2 SDs above or below the mean, respectively. The JVLMT is a promising research and diagnostic screening tool to detect both impairments in voice recognition and super-recognition abilities.
A wealth of studies have shown that humans are remarkably poor at determining whether two face images show the same person or not (face matching). Given the prevalence of photo-ID, and the fact that people employed to check photo-ID are typically unfamiliar with the person pictured, there is a need to improve unfamiliar face matching accuracy. One method of improvement is to have participants complete the task in a pair, which results in subsequent improvements in the low performer (“the pairs training effect”). Here, we sought to replicate the original finding, to test the longevity of the pairs training effect, and to shed light on the potential underlying mechanisms. In two experiments, we replicated the pairs training effect and showed it is maintained after a delay (Experiment 1). We found no differences between high and low performers in confidence (Experiment 1) or response times (Experiment 2), and the content of the pairs’ discussions (Experiment 2) did not explain the results. The pairs training effect in unfamiliar face matching is robust, but the mechanisms underlying the effects remain as yet unexplained.
In this thesis, a multimodal biometric passport authentication method for national security applications is proposed, combining securely encrypted data with an encrypted biometric encoded into a QR code. Firstly, facial mark recognition is achieved using Extended Profile Local Binary Patterns (EP-LBP), a Canny edge detector, and the Scale Invariant Feature Transform (SIFT) algorithm with an Image File Information (IMFINFO) process. Secondly, hand geometry recognition is achieved by combining the Active Shape Model (ASM) with the Active Appearance Model (AAM) to track the hand and fuse hand geometry characteristics for verification and identification. Thirdly, the publicly accessible encrypted biometric passport information is encoded into a QR code and inserted into the electronic passport to improve protection. Further, personal information and biometric data are encrypted using the Advanced Encryption Standard (AES) and the Secure Hash Algorithm (SHA) 256. This enhances the security of the biometric passport system.
Face perception is crucial to social interactions, yet people vary in how easily they can recognize their friends, verify an identification document or notice someone’s smile. There are widespread differences in people’s ability to recognize faces, and research has particularly focused on exceptionally good or poor recognition performance. In this Review, we synthesize the literature on individual differences in face processing across various tasks including identification and estimates of emotional state and social attributes. The individual differences approach has considerable untapped potential for theoretical progress in understanding the perceptual and cognitive organization of face processing. This approach also has practical consequences — for example, in determining who is best suited to check passports. We also discuss the underlying structural and anatomical predictors of face perception ability. Furthermore, we highlight problems of measurement that pose challenges for the effective study of individual differences. Finally, we note that research in individual differences rarely addresses perception of familiar faces. Despite people’s everyday experience of being ‘good’ or ‘bad’ with faces, a theory of how people recognize their friends remains elusive. The ability to recognize identity, emotion and other attributes from faces varies across individuals. In this Review, White and Burton synthesize research on individual differences in face processing and the implications of variability in face processing ability for theory and applied settings.
Over a century of laboratory research has explored the mechanisms of memory using a variety of paradigms and stimuli. In addition, many researchers have taken up Neisser’s (1978) challenge to examine memory under real-world conditions, most prominently including the eyewitness identification problem. Unfortunately, these “high road” and “low road” perspectives rarely communicate with one another, with the eyewitness field largely adopting an approach that focuses on methodological adherence to conditions that mimic real-world situations. In the current paper we advocate for a “middle road” approach that includes a focus on theory development, an emphasis on the interaction between field and laboratory research, and the implementation of convergent approaches to investigating eyewitness identification. We argue that the field would be invigorated by such an approach, with benefits accruing to our understanding of eyewitness identification and to the development of procedures that will ultimately improve eyewitness accuracy.
One hundred college student eyewitnesses of a staged vandalism received varying lineup instructions under conditions in which the offender was present or absent. Biased instructions implied that witnesses were to choose someone, whereas unbiased instructions provided a "no choice" option. Witnesses viewed corporeal lineups on one of three evenings following the vandalism. A high rate of choosing occurred under biased instructions, and the lowest rate occurred under unbiased instructions with the vandal absent. Identification errors were highest under biased instructions with the vandal absent. With the vandal present under biased instructions, all errors were false identifications, whereas under unbiased instructions all errors were false rejections of the lineup. Confidence ratings were obtained following the identification decision. Witnesses making a choice had high confidence scores, whereas those rejecting the lineup had low confidence scores. Unbiased instructions reduced choosing and false identifications without decreasing correct identifications. Both identifications and nonidentifications had greater "diagnosticity" under unbiased than under biased instructions.
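The "diagnosticity" of a lineup outcome is conventionally the ratio of how often that outcome occurs when the culprit is present to how often it occurs when the culprit is absent. A small sketch with hypothetical rates, not the study's actual data, shows why reducing indiscriminate choosing raises diagnosticity:

```python
def diagnosticity(rate_culprit_present, rate_culprit_absent):
    """Diagnosticity ratio of a lineup outcome: how much more likely the
    outcome is when the culprit is present than when absent. Higher
    ratios mean the outcome is stronger evidence about guilt."""
    return rate_culprit_present / rate_culprit_absent

# Hypothetical suspect-identification rates for illustration:
# unbiased instructions suppress guessing in culprit-absent lineups.
unbiased = diagnosticity(rate_culprit_present=0.60, rate_culprit_absent=0.10)
biased = diagnosticity(rate_culprit_present=0.65, rate_culprit_absent=0.35)
print(f"unbiased: {unbiased:.2f}")  # ~6.0: an ID is strong evidence
print(f"biased:   {biased:.2f}")    # ~1.9: an ID is much weaker evidence
```

Because biased instructions inflate choosing in culprit-absent lineups far more than they add correct identifications, the ratio collapses, which is the pattern the abstract reports.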
People can be inaccurate at matching unfamiliar faces shown in high-quality video images, even when viewpoint and facial expressions are closely matched. However, identification of highly familiar faces appears good, even when video quality is poor. Experiment 1 reported a direct comparison between familiar and unfamiliar faces. Participants who were personally familiar with target items appearing on video were highly accurate at a verification task. Unfamiliar participants doing the same task performed very inaccurately. Familiarity affected discriminability, but not bias. Experiments 2 and 3 showed that brief periods of familiarization have little beneficial effect unless "deep" or "social" processing is encouraged. The results show that video evidence can be used effectively as a probe to identity when the faces shown are highly familiar to observers, but caution should be used where images of unfamiliar people are being compared.
Each of 130 children was given visual-matching problems involving designs and pictures and reading-recognition tests at the end of the first and second grade. Children with fast response times and high error scores on the visual-matching tests (impulsive children), in contrast to children with long decision times and low error scores (reflective children), made more errors in reading English words on both occasions.
An experiment is reported which explores a method of assessing familiarity that does not rely on the overt recognition or identification of faces. Earlier findings (Clutterbuck & Johnston, 2002; Young, Hay, McWeeny, Flude, & Ellis, 1985) have shown that familiar faces can be matched faster on their internal features than unfamiliar faces. This study examines whether familiarization in the form of repeated exposure to novel faces over a 2 day period can facilitate internal feature match performance. Participants viewed each of a set of unfamiliar faces for 1 min in total. At test on the second day previously familiar (famous) faces were matched faster than unfamiliar and familiarized faces. However the familiarized faces were matched faster than the unfamiliar faces. We discuss the use of this task as a means of accessing a measure of familiarity formation and as a means of tracking how faces become familiar.
Two experiments examine a novel method of assessing face familiarity that does not require explicit identification of presented faces. Earlier research (Clutterbuck & Johnston, 2002; Young, Hay, McWeeny, Flude, & Ellis, 1985) has shown that different views of the same face can be matched more quickly for familiar than for unfamiliar faces. This study examines whether exposure to previously novel faces allows the speed with which they can be matched to be increased, thus allowing a means of assessing how faces become familiar. In Experiment 1, participants viewed two sets of unfamiliar faces presented for either many, short intervals or for few, long intervals. At test, previously familiar (famous) faces were matched more quickly than novel faces or learned faces. In addition, learned faces seen on many, brief occasions were matched more quickly than the novel faces or faces seen on fewer, longer occasions. However, this was only observed when participants performed “different” decision matches. In Experiment 2, the similarity between face pairs was controlled more strictly. Once again, matches were performed on familiar faces more quickly than on unfamiliar or learned items. However, matches made to learned faces were significantly faster than those made to completely novel faces. This was now observed for both same and different match decisions. The use of this matching task as a means of tracking how unfamiliar faces become familiar is discussed.
This paper studies the anticipatory nature of perception in relation to subjects' expertise in basketball. The two experiments conducted showed that experts encode game situations by automatically building a representation of the next‐likely state of the observed scene. In Experiment 1, subjects had to quickly compare pairs of configurations presented in succession. The results indicated that expert subjects differentiated the second configuration from the first less accurately and more slowly when the second configuration was the next‐likely state of the first, than when it was a possible previous state. In Experiment 2, subjects had to study game configurations and then perform a recognition task. The results showed that experts more often falsely recognized new configurations when they were the next‐likely state of an already‐encoded configuration than when they represented a possible previous state. Based on these results, we discuss the nature of expert knowledge, the integration of anticipatory components in perceptual processes, and the impact of expert knowledge on visual‐scene encoding and memorization.