Holistic and featural processing's link to face recognition varies by individual and task

Bryan Qi Zheng Leong1,2*, Alejandro J. Estudillo1,2* & Ahamed Miah Hussain Ismail1

1School of Psychology, University of Nottingham Malaysia, Semenyih, Malaysia. 2Department of Psychology, Bournemouth University, Poole House Talbot Campus, Poole BH12 5BB, UK. *email: bleongqizheng@bournemouth.ac.uk; aestudillo@bournemouth.ac.uk
While it is generally accepted that holistic processing facilitates face recognition, recent studies suggest that poor recognition might also arise from imprecise perception of local features in the face. This study aimed to examine to what extent holistic and featural processing relate to individual differences in face recognition ability (FRA) during face learning (Experiment 1) and face recognition (Experiment 2). Participants performed two tasks: (1) the "Cambridge Face Memory Test-Chinese", which measured participants' FRAs, and (2) an "old/new recognition memory test" encompassing whole faces (preserving holistic and featural processing) and faces revealed through a dynamic aperture (impairing holistic processing but preserving featural processing). Our results showed that participants recognised faces more accurately when holistic information was preserved than when it was impaired. We also show that better use of holistic processing during face learning and face recognition was associated with better FRAs. However, enhanced featural processing during recognition, but not during learning, was related to better FRAs. Together, our findings demonstrate that good face recognition depends on distinct roles played by holistic and featural processing at different stages of face recognition.
Recognising the identity of an individual by perceiving their face is a fundamental social skill. Most human faces adhere to a standard template and configuration of facial features such as the eyes, nose, and mouth. While the isolated processing of different facial features is known as "featural processing", the combination of these facial features and their configuration into a whole is referred to as "holistic processing"1. Although both processes are believed to contribute to face recognition, the popular view is that holistic processing is relatively more crucial2–4. However, the contribution of holistic and featural processing to different stages of the face recognition process (i.e., learning vs. recognition) and their relationship with individual differences in face recognition are largely unknown. This study aims to shed light on these questions.
In typical adults, the face inversion, composite face and part-whole tasks are conventionally used to demonstrate the dominance of holistic processing in face recognition5,6. In the inversion effect, recognition is more accurate for upright faces (experimental condition) than for inverted faces (control condition), since inversion impairs holistic processing3,7,8. In the composite effect4,9,10, when the top half of one identity's face is spatially aligned with the bottom half of another identity (experimental condition), the two halves are fused to create an illusory identity, and this impairs recognising the source identity of each half. However, this impairment disappears when the two halves are misaligned (control condition) and holistic processing is disrupted. In the part-whole effect11–13, recognising an individual part (e.g., a nose) of a previously learnt face is more accurate when it is presented in the context of a whole face (experimental condition) rather than as an isolated part (control condition). Face parts are believed to be encoded by engaging holistic processes that integrate them into a whole, and therefore part recognition is best when the same processes can be engaged during recognition (i.e., the whole condition). Interestingly, some studies have reported positive correlations between these indexes and face identification14–16, pointing to holistic processing as the underlying mechanism explaining individual differences in face recognition (but see Konar et al.17; Verhallen et al.18).
However, there is also emerging evidence suggesting that featural processing is important for face identification too. For instance, Cabeza and Kato19 found that participants were equally prone to falsely recognise novel faces (what they called "prototype faces") that had only holistic information or only featural information preserved from previously learnt faces. This reflects that both holistic and featural information were encoded and stored, and that they may be equally important in face recognition. More recently, DeGutis et al.14 used
the part-whole and composite tasks to demonstrate that both holistic and featural processing contribute independently and significantly to face recognition ability (FRA). First, they obtained an independent measure of recognition based on featural processing by calculating accuracy for the control conditions (e.g., the part condition in the part-whole task), where holistic information is disrupted. Second, they regressed the variance of the control conditions from the experimental conditions (e.g., the whole condition in the part-whole task) to obtain an independent score of holistic processing. They found significant positive correlations between these independent estimates of holistic as well as featural processing and their measure of FRA (scores in the Cambridge Face Memory Test; CFMT20). Furthermore, it has also been suggested that featural processing is more important for the recognition of unfamiliar faces than familiar faces21. For example, Lobmaier and Mast22 found that matching two sequentially presented faces is relatively more impaired when the two faces are blurred (i.e., to disrupt featural processing) than when they are scrambled (i.e., to disrupt holistic processing), but this disadvantage for blurred faces was more pronounced for novel faces than for previously learnt faces.
With conventional measures of holistic processing (i.e., the composite, part-whole and inversion effects), the assumption is that their experimental manipulations (e.g., misaligning faces in the composite task) disrupt holistic processing. However, these measures are not free of criticism, as secondary factors could drive the same effects23. For example, in the part-whole task faces are always encoded as wholes, so the part-whole effect could instead be driven by encoding specificity24. Further, the experimental condition generally contains more facial information than the control condition. Here, the so-called holistic advantage measured by the part-whole effect could reflect differences in the amount of featural information contained in the two conditions. Recent studies have also criticised the functional significance of the composite face task6,25. For instance, Fitousi25 showed that aligned composite faces (which are often used to demonstrate interference from holistic processing) did not produce Garner interference. In other words, participants were perfectly capable of selectively attending to target facial features even when other, irrelevant features were manipulated, casting doubt on the idea that holistic processing interferes with perception in aligned composites. To control for such secondary cognitive factors, studies have often combined these two holistic measures with the inversion effect. Following this argument, the pure contribution of holistic processing would be observed when the part-whole and composite effects are present only with upright faces and disappear for inverted faces23.
With regard to the inversion task, the most common interpretation is that the upright condition facilitates holistic processing3,8. If that is the case, when observers are forced to view both upright and inverted faces in a featural manner, the inversion effect should be reduced or disappear. Murphy and Cook26 used the fixed-trajectory aperture paradigm (FTAP) to examine this hypothesis. This paradigm has two conditions: (1) the "whole" condition, in which the entire face is visible to the observer, and (2) the "aperture" condition, in which a transparent, rectangular window moves smoothly from the top of the face to the bottom, revealing parts of the face in sequential order. Murphy and Cook26 found that faces are recognised better in the whole condition than in the aperture condition (i.e., the "aperture effect"), suggesting that the dynamic aperture successfully disrupts holistic processing. Interestingly, the magnitude of the inversion effect (i.e., the difference between the upright condition and the inverted condition) was comparable in both the whole and aperture conditions (see also Murphy and Cook27). This is in stark contrast with holistic accounts of the face inversion effect, which predict that an inversion effect should only be observed when the entire face is fully visible.
Therefore, Murphy and Cook's findings challenge the view that inversion disrupts only holistic processing, while at the same time providing a paradigm that systematically disrupts or facilitates holistic processing. Interestingly, the FTAP is also a good paradigm for measuring individual differences in holistic and featural processing. For example, Tsantani et al.28 showed that Developmental Prosopagnosics (DPs) are less accurate at recognising upright faces in both the whole and the aperture conditions, compared to typical adults without face recognition deficits. However, the magnitude of the holistic advantage (i.e., higher accuracy in the whole compared to the aperture condition) was similar between DPs and typical adults. This shows that DPs are impaired at processing faces featurally but not holistically.
Learning and recognition
Recognising the identity of an unfamiliar face is a product of at least two exposures to the same face. In the simplest case, the first exposure results in the observer learning the identity of the face, and during the second exposure the observer recognises a face they have learnt before. The distinction between these two stages is supported by neuroimaging evidence showing that different brain regions are involved during the learning and recognition of faces29. Interestingly, most studies attempting to examine the contribution of holistic and featural processing to face identification do not specifically address the role of these processes in the learning and recognition of faces.
Some studies have used oculomotor behaviour to index the processes involved during visual sampling30. Measuring fixations, Henderson et al.31 found that face recognition is better if observers are allowed to fixate freely on the face during learning, rather than being forced to learn faces with just a single fixation. Further, eye movement patterns during recognition were comparable between conditions in which participants learnt faces by freely fixating them and by means of a single fixation. These findings suggest that, although recognition ability depends on how observers sampled facial information during learning, the information sampling strategy employed during recognition is independent of how faces were learnt. Henderson et al.31 also reported that when observers freely fixated on faces during learning and recognition, fixations were largely directed at internal facial features. Although these fixations were attributed to the processing of holistic information, we could also assume that they served the simpler purpose of separately encoding individual features at high resolution, in other words, featural processing32,33. Lastly, Henderson et al. also reported that when observers were allowed to freely explore faces, fixations during recognition were much more restricted than those during learning.
This could suggest greater reliance on featural processing during learning and/or greater reliance on holistic processing during recognition. While both interpretations are possible, there is no way to be certain of the purpose of fixations, as they can serve, at best, as indirect measures of these processes33.
A recent study by Dunn et al.34, using a gaze-contingent paradigm, further examined the contributions of both holistic and featural processing to face recognition at the learning and recognition stages. Faces were viewed either in full view or through circular apertures varying in size. When observers were allowed to sample faces freely during face learning and face recognition, super-recognizers (SRs) had a broader gaze distribution and more exploratory fixations than control participants. Most importantly, SRs were consistently better than control participants regardless of the aperture size. This indicates that the underlying perceptual processes contributing to superior face recognition can be explained by featural processing. Interestingly, these differences were more evident during face learning than during face recognition. In line with Henderson et al.31, these findings suggest that broader exploration of the face during face learning facilitates face recognition and could quantitatively explain individual differences in face recognition.
The present study
To explore the contribution of holistic and featural processing at learning and recognition, and their relationship with individual differences in FRA, the present study uses the FTAP in each stage separately. In Experiment 1, to isolate the contribution of holistic and featural processing during learning, faces were learned either through an aperture or in full view; during the recognition stage, all faces were viewed in full view. In Experiment 2, all faces were viewed in their entirety during learning; during the recognition stage, some faces were viewed through an aperture while others were viewed in their entirety. This allowed us to isolate the contribution of holistic and featural processing during the recognition stage to FRA. In addition, to measure individual differences in face recognition, observers performed the Cambridge Face Memory Test-Chinese (CFMT-Chi)35, a highly reliable and valid measure of individual differences in face recognition skills36.
Results
Holistic and featural processing abilities were assessed with the FTAP in an old/new recognition memory task (RMT). The task involved two stages, as shown in Fig. 1: a "learning" stage, where participants learn a series of faces, and a "recognition" stage, where participants attempt to recognise the learnt faces among a set of faces that also contains new faces. In Experiment 1, faces in the learning stage were presented in their entirety ("whole condition") or through the fixed-trajectory aperture ("aperture condition"). Faces in the recognition stage were always presented in their entirety for both conditions; thus, scores here were always computed from the recognition of full faces. This manipulation was reversed in Experiment 2: learning-stage faces were always presented in their entirety, whereas recognition-stage faces were shown in their entirety or through the aperture. Hence, scores here were computed from the recognition of either full or aperture faces. Briefly, recognition performance in the aperture condition of the RMT informs us how good our participants are with featural processing. The improvement in performance in the whole condition relative to the aperture condition of the RMT is a measure of the magnitude of the holistic advantage experienced by participants, i.e., how good they were with holistic processing. To obtain a standardised measure of FRA, we used the CFMT-Chi35. Correlating the aperture condition accuracy and the holistic advantage calculated from the old/new recognition task with the CFMT-Chi tells us to what extent featural and holistic processing, respectively, relate to FRA.

Figure 1. Chronological procedure and examples of stimuli in the old/new recognition memory task used in Experiment 1. In the aperture condition (centre right), a dynamic window moves smoothly across the face image from top to bottom (images from left to right).
The maximum achievable score (i.e., the sum of correct responses) on the CFMT-Chi is 72; our sample had a mean score of 57.98 (SD = 8.93) in Experiment 1 and 58.28 (SD = 8.53) in Experiment 2. As revealed by a two-tailed independent-samples t-test, the mean CFMT-Chi scores for the two experiments were not significantly different from each other, t(171) = − 0.23, p = 0.820, ηp2 = − 0.035. This shows that our participants' FRAs are largely similar between the two experiments, as well as to those of previous studies35–39. Mean accuracy scores in the RMT were calculated separately for each of the two viewing conditions: "whole" and "aperture" (Fig. 2). Two-tailed paired-samples t-tests were conducted to compare accuracy scores between the two conditions of the RMT. In Experiment 1, there was a significant difference in mean scores between the conditions, t(86) = 5.67, p < 0.001, ηp2 = 0.607, in which mean accuracy in the whole condition (M = 0.672, SD = 0.117) was significantly higher than that of the aperture condition (M = 0.590, SD = 0.104). Similarly, in Experiment 2, there was a significant difference in accuracy between the two conditions, t(85) = 11.21, p < 0.001, ηp2 = 1.209, in which mean accuracy in the whole condition (M = 0.759, SD = 0.116) was higher than that of the aperture condition (M = 0.586, SD = 0.120). In both experiments, one-sample t-tests revealed that accuracy in the aperture conditions was significantly better than chance (accuracy above 0.5) at the group level: t(86) = 7.978, p < 0.001 (Experiment 1) and t(85) = 6.638, p < 0.001 (Experiment 2). A further independent-samples t-test confirmed that these mean accuracies are comparable between experiments, t(172) = 0.222, p = 0.824.

Figure 2. Mean accuracies for the whole (blue) and aperture (green) conditions from (a) Experiment 1 and (b) Experiment 2. Black-filled circles represent accuracy scores from individual participants.
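For readers who want to reproduce this kind of comparison, the two tests reported above can be run in a few lines of Python. This is only an illustrative sketch with placeholder data, not the authors' analysis script, and the variable names are ours.

```python
# Illustrative sketch (placeholder data, not the study data): paired comparison of
# whole- vs aperture-condition accuracy and a one-sample test against chance (0.5).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
whole_acc = rng.normal(0.67, 0.12, size=87).clip(0, 1)     # hypothetical per-participant accuracies
aperture_acc = rng.normal(0.59, 0.10, size=87).clip(0, 1)  # hypothetical per-participant accuracies

paired = stats.ttest_rel(whole_acc, aperture_acc)                              # whole vs aperture
chance = stats.ttest_1samp(aperture_acc, popmean=0.5, alternative="greater")   # aperture vs chance

print(f"paired t({len(whole_acc) - 1}) = {paired.statistic:.2f}, p = {paired.pvalue:.3g}")
print(f"vs chance t({len(aperture_acc) - 1}) = {chance.statistic:.2f}, p = {chance.pvalue:.3g}")
```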
Traditionally, the holistic advantage has been calculated using subtraction methods6. In the case of the FTAP, this method would involve subtracting the mean accuracy in the aperture condition from the mean accuracy in the whole condition. However, subtraction methods can be difficult to interpret14, as a lower value for the aperture effect can indicate close-to-ceiling performance in the aperture condition, close-to-floor performance in the whole condition, or both. Thus, in the present study, we used the "regression" method6,14 to calculate the holistic advantage experienced by participants in the whole condition, after accounting for the variation in performance that the whole condition shares with the aperture condition. Using the equation of the line of best fit for the overall scores, each participant's expected score in the whole condition was calculated based on their performance in the aperture condition. Accuracy in the aperture condition was thus regressed out of the whole-condition scores to compute residuals, which we termed "residuals of the aperture effect" (RAE). A higher RAE score indicates stronger holistic processing.
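As an illustration of the regression method, the RAE can be computed as the residuals of a simple linear regression of whole-condition accuracy on aperture-condition accuracy. The sketch below uses placeholder arrays and is our paraphrase of the procedure, not the authors' code.

```python
# Sketch of the "regression method": regress whole-condition accuracy on
# aperture-condition accuracy across participants and keep each participant's
# residual (observed minus predicted whole accuracy) as the RAE.
import numpy as np
from scipy import stats

def residuals_of_aperture_effect(whole_acc, aperture_acc):
    whole_acc, aperture_acc = np.asarray(whole_acc), np.asarray(aperture_acc)
    fit = stats.linregress(aperture_acc, whole_acc)           # line of best fit
    predicted_whole = fit.intercept + fit.slope * aperture_acc
    return whole_acc - predicted_whole                        # higher = stronger holistic advantage

# Placeholder data; in the study these would be per-participant RMT accuracies.
rng = np.random.default_rng(1)
aperture_acc = rng.uniform(0.4, 0.8, size=87)
whole_acc = (aperture_acc + rng.normal(0.08, 0.08, size=87)).clip(0, 1)
rae = residuals_of_aperture_effect(whole_acc, aperture_acc)
# rae can then be correlated with CFMT-Chi scores, e.g. stats.pearsonr(rae, cfmt_chi)
```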
Next, we ran a number of Pearson's product-moment correlation tests on the data from Experiment 1. First, to explore whether both tasks measure similar constructs, we correlated accuracy in the whole condition of the RMT with the CFMT-Chi scores. The test showed a significant positive correlation between the two tasks, r(85) = 0.334, p = 0.002. Second, to explore the relationship between featural processing ability and FRA, we correlated accuracy in the aperture condition with the CFMT-Chi scores; the test showed no significant correlation between the two, r(85) = − 0.002, p = 0.986 (Fig. 3a). Third, to explore the relationship between holistic processing ability and FRA, we correlated the measure of holistic advantage with CFMT-Chi scores. There was a significant positive correlation between the RAE scores and CFMT-Chi scores (Fig. 3b), r(85) = 0.347, p < 0.001. For Experiment 2, we found a positive correlation between accuracy in the whole condition and the respective CFMT-Chi scores, r(84) = 0.489, p < 0.001. There was also a strong positive correlation between accuracy in the aperture condition and the respective CFMT-Chi scores (Fig. 3c), r(84) = 0.570, p < 0.001. In particular, the higher the participants' FRA, the more accurate they were in the "aperture" condition. Additionally, there was a significant positive correlation between the RAE and CFMT-Chi scores (Fig. 3d), r(84) = 0.354, p < 0.001. Since some participants were performing at (or close to) chance level in our experimental conditions (especially in the aperture conditions of both experiments), it is possible that floor effects account for some correlations (or the lack thereof). To address this, we also correlated whole accuracy, aperture accuracy and RAE scores with CFMT-Chi scores after excluding participants who did not score above chance, as identified by binomial probability tests. Importantly, the pattern of results remained the same (see Online Supplementary Material).
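The above-chance screening can be illustrated with an exact binomial test. The sketch below assumes 24 recognition trials per viewing condition (two blocks of 12 test faces each, as described in the Methods); the authors' exact criterion may differ.

```python
# Hedged sketch of a per-participant above-chance check with an exact binomial test.
from scipy.stats import binomtest

def above_chance(n_correct, n_trials=24, alpha=0.05):
    # One-sided test of H0: p = 0.5 (guessing) against H1: p > 0.5
    return binomtest(n_correct, n_trials, p=0.5, alternative="greater").pvalue < alpha

print(above_chance(17))  # 17/24 correct: exact one-sided p ≈ .032, counted as above chance
print(above_chance(14))  # 14/24 correct: p ≈ .27, not distinguishable from guessing
```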
Figure 3. Correlation analyses from Experiments 1 (black) and 2 (grey). Black circles and grey annuli represent scores from individual participants in Experiment 1 and Experiment 2, respectively. Black solid lines and grey dashed lines are least-squares regression fits to individual data from Experiments 1 and 2, respectively.

To compare the strengths of correlations between the two experiments and between the whole and aperture conditions within each experiment, we transformed the Pearson's correlation coefficient values into z scores (i.e., Fisher's r-to-z transformation)40. We found a significant difference in coefficients between Experiments 1 and 2 for the correlations between aperture accuracy and CFMT-Chi (z = − 4.197, p < 0.001), but not for the correlations between RAE and CFMT-Chi (z = − 0.052, p = 0.479). Specifically, the correlation coefficient between aperture accuracy and CFMT-Chi was larger in Experiment 2 than in Experiment 1. Additionally, the correlation coefficients of aperture accuracy with CFMT-Chi and of RAE with CFMT-Chi were significantly different in Experiment 1 (z = − 2.359, p = 0.009) and Experiment 2 (z = 1.788, p = 0.037). In particular, the correlation with CFMT-Chi was stronger for RAE (i.e., holistic processing) in Experiment 1, but stronger for aperture accuracy (i.e., featural processing) in Experiment 2. Lastly, for the correlations between whole-condition accuracy and CFMT-Chi scores, the coefficients were comparable between Experiments 1 and 2 (z = − 1.211, p = 0.113).
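For illustration, the between-experiment comparison can be reproduced with the standard Fisher r-to-z formula for two independent correlations. The within-experiment comparisons involve correlations that share a sample, so treat this sketch as an approximation of the reported procedure rather than the authors' exact code.

```python
# Fisher r-to-z comparison of two correlations from independent samples.
# With r1 = -.002 (n = 87) and r2 = .570 (n = 86) this reproduces the reported z = -4.197.
import numpy as np
from scipy.stats import norm

def compare_independent_correlations(r1, n1, r2, n2):
    z1, z2 = np.arctanh(r1), np.arctanh(r2)          # Fisher r-to-z transform
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))        # standard error of the difference
    z = (z1 - z2) / se
    return z, norm.sf(abs(z))                        # z and one-tailed p

z, p = compare_independent_correlations(-0.002, 87, 0.570, 86)
print(f"z = {z:.3f}, one-tailed p = {p:.2g}")        # z ≈ -4.197, p < .001
```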
Discussion
The purpose of the present study was to examine the role of holistic and featural processing in face recognition ability (FRA). Both experiments showed that forcing observers to rely on featural processing with a small aperture significantly reduced recognition accuracy. This impairment was observed irrespective of whether the aperture was applied during face learning or recognition. One unique characteristic of our study is that we measured separately to what extent featural and holistic processing can explain FRA at different stages of face recognition. In Experiment 1, we found that accuracy for recognising faces learnt through featural processing was uniform, albeit poor, across the whole spectrum of FRA. To our knowledge, no past study had systematically restricted participants along the FRA spectrum to featural processing during face learning. Accordingly, our findings are novel in isolating the contribution of featural processing during face learning to face recognition ability. Based on our findings, featural processing during face learning does not account for individual differences in face recognition abilities.
In Experiment 2, we found that individuals with better FRA were also better at using featural processing during recognition than individuals with poor FRA. This suggests that featural processing during face recognition contributes to identifying learnt faces, and it supports past findings showing that good recognisers make good use of featural processing when attempting to recognise a learnt face. These past studies have used various tasks to assess the contribution of featural processing (e.g., the part-whole task, familiar face recognition tests) to recognising famous faces as well as recently learnt unfamiliar faces28,41,42. Nonetheless, there are also some exceptions34,43. One could argue that the lack of correlation between featural processing ability and FRA in Experiment 1 is a result of floor effects. However, accuracies were comparable across both experiments and above chance. In addition, individual differences in the aperture condition were related to FRA in Experiment 2, but not in Experiment 1. Therefore, floor effects are unlikely to explain the lack of correlation in Experiment 1.
In line with Dunn et al.34, we found that featural processing is positively associated with FRA. However, we only found this correlation during face recognition and not during face learning (i.e., Experiment 1). These disparities could be the result of our viewing manipulations. Dunn et al. allowed observers to actively explore the faces, whereas the FTAP constrains all observers to learn faces in a similar fashion, which could interfere with the unique perceptual encoding strategies used by good recognisers. For instance, Dunn et al. found that SRs had broader gaze distributions and more fixations than typical observers, but these differences were more apparent during face learning. In contrast, Abudarham et al.43 showed that Developmental Prosopagnosics (DPs) and SRs are similarly good at featural processing. However, DPs tend to be heterogeneous in their deficits, with some cases showing featural processing deficits and some not44,45, and their deficits can be qualitatively different from those of neurotypical individuals with poor FRA (e.g., atypical sampling of faces)46–48.
Additionally, our RAE scores showed that people's ability to process faces holistically (but not featurally) during face learning could be a strong determinant of their FRA. The relationship found in Experiment 2 further supports previous findings showing that higher face recognition abilities are associated with stronger holistic processing14–16. Why, then, would good recognisers rely more on holistic than on featural representations of a face during face learning, as we found? We encounter a large number of faces in everyday life. Obviously, the more faces we can store in our memory, the better our social interactions would be. However, storing the individual features of every single face we encounter would be very taxing for human memory. Holistic representations provide a way of reducing this memory load, by allowing us to store more identities in the form of a simplified gist (see Curby and Gauthier49; Pertzov et al.50). Moreover, holistic information about faces is more stable in memory than featural information4,7,51. For example, Peters and Kemner52 showed that long-term memory for faces is better when face identities were learnt from their low spatial frequencies, conveying holistic information, than from their high spatial frequencies, conveying fine details of features. Given that holistic representations allow us to use memory efficiently and form stable traces over time, it would be expected that good recognisers make better use of holistic processing than featural processing during face learning.
Why would good recognisers rely on both holistic and featural processing during recognition, but not during face learning? Some studies have demonstrated that when we attempt to recognise a face, we follow a coarse-to-fine strategy53–55. Here, a holistic representation of the to-be-recognised face is initially matched to face representations in our memory to narrow down the most likely candidate representations53. Next, features of the to-be-recognised face are compared with those selected representations in memory, whereby identity-specific, distinctive features can help to distinguish a learnt identity from other, similar-looking faces. Extending this explanation to our case, it appears important that we compare a to-be-recognised face to memory representations at both the holistic and featural levels, and good recognisers might be adept at doing both.
We would also like to highlight an interesting finding of our study. In the aperture condition of Experiment 1, when participants' face learning was restricted to featural processing, even good recognisers failed to use this information. However, in Experiment 2, when participants learned faces freely (i.e., without restricting the processing), good recognisers were able to recognise these faces better even when holistic processing was largely disrupted during recognition by the aperture. As the FRA of participants decreased, this advantage with featural information diminished. Based on this, we can claim that forming a holistic representation when learning a face is also important for good recognisers to effectively use featural information during recognition. If that is the case, a weak holistic representation formed by poor recognisers during learning may have led to poor use of both holistic and featural information during recognition (as shown in Experiment 2; Fig. 3).
However, our study is not without limitations. First, we did not account for congruency effects between face learning and face recognition. Previous research has shown the importance of congruency in face identification56–58. For example, faces learned with a ski mask are better recognised when they are also presented with a ski mask than when presented in full view57. In our study, there is an incongruence between learning and recognition, as the aperture was applied only during learning (Experiment 1) or only during recognition (Experiment 2). However, as all our participants were given the same tasks, it is unlikely that incongruence between learning and recognition explains any observed relationships between face recognition skills and the different conditions of the FTAP.
Second, it could be argued that the FTAP also disrupts featural processing. For example, the FTAP might impair the encoding of featural information at the learning stage, which would explain why aperture accuracy was not associated with FRA in Experiment 1. However, this also seems unlikely. Research has shown that holistic processing is mostly engaged by the presence of a whole and intact face12,59,60. Importantly, this whole and intact face percept is precisely what the aperture removes. To ensure the serial processing of each facial feature, the aperture used in this study was made large enough to reveal the entire eye and mouth regions, and approximately 75% of the nose. Therefore, although the serial presentation of features through the aperture might also impair some featural processing, it seems implausible that this disruption is comparable to the disruption of holistic processing. In fact, if the two disruptions were comparable, observers would be able to use neither featural nor holistic processing, and performance in the aperture conditions should be at chance levels61. However, as our results showed, participants' performance in the aperture condition was above chance in both experiments.
Third, we applied the regression method to compute participants' holistic advantage. While this approach does control for variance in the aperture condition14, one important limitation of the regression method is its assumption of a linear relationship between the whole and aperture conditions. In fact, as suggested by the weak correlations, it is possible that a non-linear model could better explain the relationship between the whole and aperture conditions.
In conclusion, we show that poor FRA arises from poor encoding of holistic and featural information during face recognition. We also show that enhanced holistic (but not featural) processing during face learning contributes to better FRA. In addition, our findings raise the intriguing possibility that good recognisers' ability to effectively use featural information during recognition may depend on the extent to which faces are processed holistically during learning. We demonstrate this using the FTAP, which deals with several limitations of other paradigms (i.e., the inversion, composite and part-whole tasks). Moreover, the FRA of our sample is broad, to the extent of capturing individuals with FRAs (according to CFMT scores) similar to the DPs and SRs identified in past studies, as well as those in between. Therefore, we provide reliable insight into the contribution of holistic and featural processing during face learning and face recognition.
Methods
Participants
An a-priori power analysis using G*Power62 estimated that a sample size of 82 is required to detect a moderate effect size of 0.3 with a statistical power of 80% (α = 0.05) for a Pearson's correlation test between FRA and the conditions of the RMT. In Experiment 1, we recruited 87 Malaysian Chinese participants (44 females) with no known clinical diagnosis of a mental health disorder, with ages ranging from 18 to 54 years (M = 25.00 years, SD = 5.29). For Experiment 2, we recruited 86 healthy typical Malaysian Chinese participants (70 females), with ages ranging from 18 to 47 years (M = 22.34 years, SD = 5.10). Participants were paid 5 Malaysian Ringgits as compensation for their time. All participants reported normal or corrected-to-normal vision. Digital informed consent was obtained prior to participation. All experimental procedures were approved by the Science and Engineering Research Ethics Committee of the University of Nottingham Malaysia (approval code: BLQZ210421). We confirm that all experiments were performed in accordance with relevant guidelines and regulations.
Apparatus
This study was conducted using the online experimental platform Testable (www.testable.org)63. The study comprised two tasks: the CFMT-Chi35 and an old/new recognition memory task (RMT) with two viewing conditions (whole or aperture viewing). Participants used their own computers (laptops or desktops) to complete the two tasks online in a web browser. To minimise differences in the visible size of stimuli across different computer screens, participants were required to adjust the length of a horizontal yellow line that appeared on the screen to match the size of a debit/credit card they possessed. From this, the testing platform calculates how many pixels correspond to one centimetre, and all stimuli in the study were rescaled using this mapping to the required dimensions in centimetres. All face stimuli were edited and cropped using Adobe Photoshop CS6, while the dynamic aperture was created in MATLAB R2019b (MathWorks).
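The card-based calibration boils down to a single ratio; the sketch below shows the arithmetic under the assumption that the matched line corresponds to a standard ID-1 card width of 85.6 mm (Testable's internal implementation is not documented here).

```python
# Screen calibration sketch: the participant stretches an on-screen line to match a
# debit/credit card, and the matched pixel length maps to the card's physical width.
CARD_WIDTH_CM = 8.56  # ISO/IEC 7810 ID-1 card width (assumed reference dimension)

def pixels_per_cm(matched_line_px: float) -> float:
    """Pixels per centimetre implied by the participant-adjusted line length."""
    return matched_line_px / CARD_WIDTH_CM

# Example: a 337-px line implies ~39.4 px/cm, so the 4-cm-wide RMT background
# would be drawn at roughly 157 px on that particular screen.
print(round(pixels_per_cm(337) * 4))
```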
Stimuli and procedure
Cambridge face memory test-Chinese (CFMT-Chi)
We used the validated Chinese version of the Cambridge Face Memory Test (i.e., the CFMT-Chi); all faces and procedures were the same as those used in the original paper by McKone et al.35. Face images were those of men in their 20s and early 30s with neutral expressions, and each individual was photographed in the same range of poses and lighting conditions. For this task, six unique target identities and 46 unique distractor identities were used. For each identity, three face images from three different viewpoints (one left 1/3 profile, one full-frontal and one right 1/3 profile) were used. As in the original version, only male faces were used because sex differences in observers have been reported for recognising female but not male faces64. These faces did not contain external features such as hair, and no facial blemishes were visible. They were greyscale faces (approximately 160 pixels (px) in width and 195 px in height; assuming participants had a seating distance of 57 cm, the faces subtended approximately 3.2° and 4° in width and height, respectively) embedded in the centre of a uniformly grey background 200 px wide and 240 px tall (4 × 4.8 cm; see McKone et al.35 for further details).

The CFMT-Chi was presented using the standard procedure, which consists of a total of 72 trials presented over three different stages (18 in the Learning stage, 30 in the Novel stage and 24 in the Noise stage). In all trials that test face memory, three faces were presented simultaneously (one learned target and two distractors) and participants were required to select which of them was the learnt face by pressing the key "1" for the left, "2" for the middle, or "3" for the right image.
Old/new recognition memory task (RMT)
Face images were those of Malaysian Chinese males in their early or mid-20s with neutral expressions. All individuals were photographed in the same range of poses and lighting conditions in the Face Laboratory at the University of Nottingham Malaysia, where informed consent to publish identifying information/images was obtained. For each identity, only frontal-view face images were used. All external features in the faces were removed. The faces were then resized to approximately 160 px in width and exactly 195 px in height (subtending approximately 3.2° × 4° at a viewing distance of 57 cm), converted to greyscale and embedded in the centre of a uniformly black background of 200 × 250 px (4 × 5 cm).

In Experiment 1, the RMT consisted of four blocks (two whole and two aperture conditions). The order of the four blocks was randomised across participants. Each block started with an initial "learning" stage, followed by a filler task and finally a "recognition" stage. In any given block, the learning stage showed participants the faces of six unique identities. The recognition stage sequentially presented the same six identities ("old") randomly intermixed with six new and unique identities that the participants had not seen before ("new"), leading to a total of 12 test faces. This led to a total of 48 unique faces (i.e., 24 old and 24 new unique identities) used throughout the entire experiment. In the learning stage of the "whole" condition, each trial started with a white central fixation cross (22 × 22 px; 0.4 × 0.4 cm) shown for 500 ms, followed by a fully visible unique face stimulus presented in the centre of the screen for 1000 ms (Fig. 1). Old faces presented in the "whole" condition during the recognition stage were exactly the same as those in the learning stage. In contrast, in the learning stage of the "aperture" condition, the face image was shown through a dynamic window that moved smoothly from the top of the face to the bottom, revealing the features of the face in sequential order (Fig. 1). The dynamic window started and ended with a fully black display. The height of the aperture that moved from top to bottom was 12% (i.e., 30 px) of the overall height of the face, and the aperture took approximately 6200 ms to move across the entire face (i.e., black-to-black display). The sequential display and frame rate generated a smooth aperture motion (~ 11 frames per second). All sequences were constructed from a series of bitmap images and saved as .GIF files. In both conditions, six such trials were presented in the learning stage, and participants were asked to learn and memorise all six faces for a subsequent recognition stage.
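To make the aperture manipulation concrete, the sketch below shows one way such a fixed-trajectory sequence could be generated. It is our reconstruction in Python from the parameters reported above (a band 12% of the face height, a ~6200-ms black-to-black sweep at ~11 frames per second), not the MATLAB code actually used.

```python
# Reconstruction sketch of a fixed-trajectory aperture sequence: a horizontal band
# slides from above the image to below it, so the animation starts and ends black.
import numpy as np

def aperture_frames(face, band_frac=0.12, n_frames=68):
    """Return frames in which only a moving horizontal band of `face` is visible."""
    h = face.shape[0]
    band = int(round(band_frac * h))
    frames = []
    for top in np.linspace(-band, h, n_frames).round().astype(int):
        frame = np.zeros_like(face)
        lo, hi = max(top, 0), min(top + band, h)
        if lo < hi:
            frame[lo:hi] = face[lo:hi]
        frames.append(frame)
    return frames

# ~68 frames at ~11 frames per second approximates the 6200-ms sweep reported above.
face_img = np.random.randint(0, 256, size=(195, 160), dtype=np.uint8)  # placeholder face
frames = aperture_frames(face_img)
```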
Following the learning stage, in both conditions, participants were given a short filler task that involved mathematical calculations (e.g., "5 6/2 + 10 = ?"), which took less than a minute to complete. This was followed by the recognition stage. During this stage, the 12 test faces were presented sequentially over 12 trials. Each trial began with a 500-ms presentation of a white central fixation cross. This was followed by the presentation of a fully visible face that remained in the centre of the screen until a response was recorded. Participants were required to indicate whether they had previously seen the face in the learning stage by pressing the key "Q" on the keyboard if they had seen it and the key "P" if they had not. Participants were instructed to respond as quickly and accurately as possible. In both stages, the presentation timing was adopted from previous studies using the FTAP26,27. In the whole condition, old faces presented during recognition and learning were both fully visible. However, in the aperture condition, old faces shown during learning were viewed through an aperture and, when the same identities were shown during recognition, they were fully visible.
In Experiment 2, the experimental procedures and stimuli were similar to Experiment 1, except for the following changes in the old/new RMT. Irrespective of the experimental condition (whole or aperture), participants were always shown a white central fixation cross, followed by fully visible faces for 1000 ms, in the learning stage. A total of six unique faces (i.e., old faces) were shown in each block. During the recognition stage, participants were shown the 12 test faces either in full view (in the "whole" condition) or through an aperture (in the "aperture" condition). Faces to be recognised stayed on screen for the same duration of 6200 ms in both conditions, and this was followed by a black screen that remained until a response was recorded. Responses could be provided either while the faces were shown or after the faces were removed from the screen, and either response terminated the trial. As in Experiment 1, participants pressed the key "Q" or "P" to indicate whether or not they had seen each test face in the learning stage.
Data availability
The datasets generated during and/or analysed during the current study are available in the Open Science Framework repository, https://osf.io/a7evh/?view_only=456b591ea31c472f8118bbdeddb1b5de.
Received: 6 January 2023; Accepted: 3 October 2023
References
1. Piepers, D. W. & Robbins, R. A. A review and clarification of the terms "holistic", "configural", and "relational" in the face perception literature. Front. Psychol. 3, 559 (2012).
2. Jacques, C. & Rossion, B. Misaligning face halves increases and delays the N170 specifically for upright faces: Implications for the nature of early face representations. Brain Res. 1318, 96–109 (2010).
3. Rossion, B. Picture-plane inversion leads to qualitative changes of face perception. Acta Psychol. 128, 274–289 (2008).
4. Rossion, B. The composite face illusion: A whole window into our understanding of holistic face perception. Vis. Cogn. 21, 139–253 (2013).
5. Lee, J. K., Janssen, S. M. & Estudillo, A. J. A featural account for own-face processing? Looking for support from face inversion, composite face, and part-whole tasks. i-Perception 13, 20416695221111410 (2022).
6. Rezlescu, C., Susilo, T., Wilmer, J. B. & Caramazza, A. The inversion, part-whole, and composite effects reflect distinct perceptual mechanisms with varied relationships to face recognition. J. Exp. Psychol. Hum. Percept. Perform. 43, 1961 (2017).
7. Tanaka, J. W., Heptonstall, B. & Campbell, A. Part and whole face representations in immediate and long-term memory. Vis. Res. 164, 53–61 (2019).
8. Yin, R. K. Looking at upside-down faces. J. Exp. Psychol. 81, 141 (1969).
9. Hole, G. J. Configurational factors in the perception of unfamiliar faces. Perception 23, 65–74 (1994).
10. Young, A. W., Hellawell, D. & Hay, D. C. Configurational information in face perception. Perception 16, 747–759 (1987).
11. Estudillo, A. J., Zheng, B. L. Q. & Wong, H. K. Navon-induced processing biases fail to affect the recognition of whole faces and isolated facial features. J. Cogn. Psychol. 34, 744–754 (2022).
12. Tanaka, J. W. & Farah, M. J. Parts and wholes in face recognition. Q. J. Exp. Psychol. 46, 225–245 (1993).
13. Tanaka, J. W. & Simonyi, D. The "parts and wholes" of face recognition: A review of the literature. Q. J. Exp. Psychol. 69, 1876–1889 (2016).
14. DeGutis, J., Wilmer, J., Mercado, R. J. & Cohan, S. Using regression to measure holistic face processing reveals a strong link with face recognition ability. Cognition 126, 87–100 (2013).
15. Richler, J. J., Cheung, O. S. & Gauthier, I. Holistic processing predicts face recognition. Psychol. Sci. 22, 464–471 (2011).
16. Wang, R., Li, J., Fang, H., Tian, M. & Liu, J. Individual differences in holistic processing predict face recognition ability. Psychol. Sci. 23, 169–177 (2012).
17. Konar, Y., Bennett, P. J. & Sekuler, A. B. Holistic processing is not correlated with face-identification accuracy. Psychol. Sci. 21, 38–43 (2010).
18. Verhallen, R. J. et al. General and specific factors in the processing of faces. Vis. Res. 141, 217–227 (2017).
19. Cabeza, R. & Kato, T. Features are also important: Contributions of featural and configural processing to face recognition. Psychol. Sci. 11, 429–433 (2000).
20. Duchaine, B. & Nakayama, K. The Cambridge Face Memory Test: Results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia 44, 576–585 (2006).
21. Johnston, R. A. & Edmonds, A. J. Familiar and unfamiliar face recognition: A review. Memory 17, 577–596 (2009).
22. Lobmaier, J. S. & Mast, F. W. Perception of novel faces: The parts have it!. Perception 36, 1660–1673 (2007).
23. McKone, E. et al. Importance of the inverted control in measuring holistic face processing with the composite effect and part-whole effect. Front. Psychol. 4, 33 (2013).
24. Leder, H. & Carbon, C. C. When context hinders! Learn–test compatibility in face recognition. Q. J. Exp. Psychol. Sect. A 58, 235–250 (2005).
25. Fitousi, D. Composite faces are not processed holistically: Evidence from the Garner and redundant target paradigms. Atten. Percept. Psychophys. 77, 2037–2060 (2015).
26. Murphy, J. & Cook, R. Revealing the mechanisms of human face perception using dynamic apertures. Cognition 169, 25–35 (2017).
27. Murphy, J., Gray, K. L. & Cook, R. Inverted faces benefit from whole-face processing. Cognition 194, 104105 (2020).
28. Tsantani, M., Gray, K. L. & Cook, R. Holistic processing of facial identity in developmental prosopagnosia. Cortex 130, 318–326 (2020).
29. Haxby, J. V. et al. Face encoding and recognition in the human brain. Proc. Natl. Acad. Sci. 93, 922–927 (1996).
30. Holmqvist, K. & Andersson, R. Eye-tracking: A comprehensive guide to methods, paradigms and measures (2017) (ISBN-13, 978-1979484893).
31. Henderson, J. M., Williams, C. C. & Falk, R. J. Eye movements are functional during face learning. Mem. Cogn. 33, 98–106 (2005).
32. Hills, P. J. Children process the self face using configural and featural encoding: Evidence from eye tracking. Cogn. Dev. 48, 82–93 (2018).
33. Lee, J. K., Janssen, S. M. & Estudillo, A. J. A more featural based processing for the self-face: An eye-tracking study. Conscious. Cogn. 105, 103400 (2022).
34. Dunn, J. D. et al. Face-information sampling in super-recognizers. Psychol. Sci. 33, 1615–1630 (2022).
35. McKone, E. et al. A robust method of measuring other-race and other-ethnicity effects: The Cambridge Face Memory Test format. PLoS One 7, e47956 (2012).
36. Estudillo, A. J. Self-reported face recognition abilities for own and other-race faces. J. Crim. Psychol. 11, 105–115 (2021).
37. Estudillo, A. J. & Wong, H. K. Associations between self-reported and objective face recognition abilities are only evident in above- and below-average recognisers. PeerJ 9, e10629 (2021).
38. Estudillo, A. J., Lee, J. K. W., Mennie, N. & Burns, E. No evidence of other-race effect for Chinese faces in Malaysian non-Chinese population. Appl. Cogn. Psychol. 34, 270–276 (2020).
39. McKone, E., Wan, L., Robbins, R., Crookes, K. & Liu, J. Diagnosing prosopagnosia in East Asian individuals: Norms for the Cambridge Face Memory Test-Chinese. Cogn. Neuropsychol. 34, 253–268 (2017).
40. Hinkle, D. E., Wiersma, W. & Jurs, S. G. Solutions Manual: Applied Statistics for the Behavioral Sciences (Houghton Mifflin, 1988).
41. DeGutis, J., Cohan, S., Mercado, R. J., Wilmer, J. & Nakayama, K. Holistic processing of the mouth but not the eyes in developmental prosopagnosia. Cogn. Neuropsychol. 29, 419–446 (2012).
42. Tardif, J. et al. Use of face information varies systematically from developmental prosopagnosics to super-recognizers. Psychol. Sci. 30, 300–308 (2019).
43. Abudarham, N., Bate, S., Duchaine, B. & Yovel, G. Developmental prosopagnosics and super recognizers rely on the same facial features used by individuals with normal face recognition abilities for face identification. Neuropsychologia 160, 107963 (2021).
44. Bennetts, R. J. et al. Face specific inversion effects provide evidence for two subtypes of developmental prosopagnosia. Neuropsychologia 174, 108332 (2022).
45. Corrow, S. L., Dalrymple, K. A. & Barton, J. J. Prosopagnosia: Current perspectives. Eye Brain 8, 165 (2016).
46. Barton, J. J. Objects and faces, faces and objects…. Cogn. Neuropsychol. 35, 90–93 (2018).
47. Bobak, A. K., Parris, B. A., Gregory, N. J., Bennetts, R. J. & Bate, S. Eye-movement strategies in developmental prosopagnosia and "super" face recognition. Q. J. Exp. Psychol. 70, 201–217 (2017).
48. Tian, X. et al. Multi-item discriminability pattern to faces in developmental prosopagnosia reveals distinct mechanisms of face processing. Cereb. Cortex 30, 2986–2996 (2020).
49. Curby, K. M. & Gauthier, I. A visual short-term memory advantage for faces. Psychon. Bull. Rev. 14, 620–628 (2007).
50. Pertzov, Y., Krill, D., Weiss, N., Lesinger, K. & Avidan, G. Rapid forgetting of faces in congenital prosopagnosia. Cortex 129, 119–132 (2020).
51. Richler, J., Palmeri, T. J. & Gauthier, I. Meanings, mechanisms, and measures of holistic processing. Front. Psychol. 3, 553 (2012).
52. Peters, J. C. & Kemner, C. Proficient use of low spatial frequencies facilitates face memory but shows protracted maturation throughout adolescence. Acta Psychol. 179, 61–67 (2017).
53. Gerlach, C. & Starrfelt, R. Global precedence effects account for individual differences in both face and object recognition performance. Psychon. Bull. Rev. 25, 1365–1372 (2018).
54. McKone, E., Kanwisher, N. & Duchaine, B. C. Can generic expertise explain special processing for faces?. Trends Cogn. Sci. 11, 8–15 (2007).
55. Peters, J. C., Goebel, R. & Goffaux, V. From coarse to fine: Interactive feature processing precedes local feature analysis in human face perception. Biol. Psychol. 138, 1–10 (2018).
56. Estudillo, A. J. & Wong, H. K. Two face masks are better than one: Congruency effects in face matching. Cogn. Res. Princ. Implic. 7, 1–8 (2022).
57. Manley, K. D., Chan, J. C. & Wells, G. L. Do masked-face lineups facilitate eyewitness identification of a masked individual?. J. Exp. Psychol. Appl. 25, 396 (2019).
58. Toseeb, U., Bryant, E. J. & Keeble, D. R. The Muslim headscarf and face perception: "They all look the same, don't they?". PLoS One 9, e84754 (2014).
59. Farah, M. J., Wilson, K. D., Drain, M. & Tanaka, J. N. What is "special" about face perception?. Psychol. Rev. 105, 482 (1998).
60. McKone, E. & Yovel, G. Why does picture-plane inversion sometimes dissociate perception of features and spacing in faces, and sometimes not? Toward a new theory of holistic processing. Psychon. Bull. Rev. 16, 778–797 (2009).
61. Schwaninger, A., Lobmaier, J. S. & Collishaw, S. M. Role of featural and configural information in familiar and unfamiliar face recognition. Science Direct Working Paper No. S1574-034X(04)70212-4 (2002).
62. Faul, F., Erdfelder, E., Lang, A. G. & Buchner, A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 39, 175–191 (2007).
63. Rezlescu, C., Danaila, I., Miron, A. & Amariei, C. More time for science: Using Testable to create and share behavioral experiments faster, recruit better participants, and engage students in hands-on research. Prog. Brain Res. 253, 243–262 (2020).
64. Lewin, C. & Herlitz, A. Sex differences in face recognition—Women's faces make the difference. Brain Cogn. 50, 121–128 (2002).
Author contributions
Study design and overall theoretical approach: B.Q.Z.L., A.J.E. and A.M.H.I. Experimental design, testing, data
collection, data analysis: B.Q.Z.L. (under the guidance and supervision of A.M.H.I. and A.J.E.). Writing the
manuscript: B.Q.Z.L., A.J.E. and A.M.H.I.
Competing interests
The authors declare no competing interests.
Additional information
Supplementary Information The online version contains supplementary material available at https://doi.org/10.1038/s41598-023-44164-w.
Correspondence and requests for materials should be addressed to B.Q.Z.L. or A.J.E.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

© The Author(s) 2023