Examples of stimuli presented in the Model Matching Test across test phases. Images are reproduced from Bate et al. (2018) under a Creative Commons Licence (http://creativecommons.org/licenses/by/4.0/).

Source publication
Studies of facial identity processing typically assess perception (via matching) and/or memory (via recognition), with experimental designs differing with respect to one important aspect: Target Prevalence. Some designs include “target absent” (TA) among “target present” (TP) trials. In visual search tasks, TA trials shift an observer’s decisional...

Contexts in source publication

Context 1
... phases differ in the similarity between initially learned target images and the potential matching target stimulus. Similarity can be high, with only minor changes between the learned image of a given identity and its matching probe, or low, entailing greater changes (see Figure 1). Additionally, the MMT includes Target Absent trials at a constant rate of 50% throughout, and as such cannot assess the effect of varying their prevalence on hit rates. ...
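Since such designs summarize performance as hit rates on TP trials and false alarms on TA trials, a brief signal-detection sketch may help make the role of TA trials concrete. The snippet below is purely illustrative and not from the paper; the trial counts and the sdt_measures helper are hypothetical, and the log-linear correction is one common convention among several.

# Hedged sketch: deriving signal-detection measures from TP/TA trial
# outcomes. Counts and names are hypothetical, for illustration only.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return hit rate, false-alarm rate, sensitivity d', and criterion c.

    A log-linear correction keeps rates of exactly 0 or 1 from
    producing infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa             # sensitivity
    criterion = -0.5 * (z_hit + z_fa)  # decisional criterion
    return hit_rate, fa_rate, d_prime, criterion

# Example: 50% TA prevalence, 40 TP and 40 TA trials.
print(sdt_measures(hits=32, misses=8, false_alarms=6, correct_rejections=34))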
Context 2
... MMT exploits this effect of image changes to systematically manipulate Target-To-Match Similarity. That is, across the recognition phases, changes to the targets' facial appearance are less pronounced in Phase 1 and more pronounced in Phase 2 (see Figure 1). Unfortunately, the parallel implementation of Target Absent trials, a second novel feature of the original MMT, is undesirable. ...
Context 3
... phases differed in terms of Target-To-Match Similarity (i.e., the similarity between the learned targets and the probes presented during recognition phases). As demonstrated in Figure 1, Phases 1 and 2 involved lesser vs. greater changes (a change of lighting or viewpoint vs. a change of hairstyle, the addition of a beard, glasses, etc.), respectively. ...
Context 4
... the previous comparison, the favored model accounted only for Context, not for the interaction between Context and Target-to-Match Similarity. This reflected a main effect of Context, with Group 2 exhibiting significantly better performance than Group 3. Although Group 2's observers showed higher Hit Rates than Group 3's in both phases, we cannot attribute a specific contextual effect to Phase 1 or Phase 2, because the model including the Context by Target-to-Match Similarity interaction (Figure 4b; model 1b) was not favored. Consequently, we can only speak of a general contextual effect of group on the Hit Rate. ...
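To make the model comparison concrete, here is a minimal, hypothetical sketch of testing a main-effects mixed model against one that adds the Context by Target-to-Match Similarity interaction via a likelihood-ratio test. The column names (hit, context, similarity, subject) and the data file are assumptions, and the original analysis may have used a different comparison criterion.

# Hedged sketch: likelihood-ratio comparison of mixed models with and
# without a Context x Similarity interaction. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

data = pd.read_csv("hit_rates.csv")  # hypothetical long-format data

# Main-effects model vs. a model adding the interaction (cf. model 1b).
m_main = smf.mixedlm("hit ~ context + similarity", data,
                     groups=data["subject"]).fit(reml=False)
m_inter = smf.mixedlm("hit ~ context * similarity", data,
                      groups=data["subject"]).fit(reml=False)

# Twice the log-likelihood difference is approximately chi-squared,
# with df equal to the number of extra fixed-effect parameters.
lr = 2 * (m_inter.llf - m_main.llf)
df = len(m_inter.fe_params) - len(m_main.fe_params)
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df):.3f}")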

Citations

There is growing interest in how data-driven approaches can help understand individual differences in face identity processing (FIP). However, researchers employ various FIP tests interchangeably, and it is unclear whether these tests (1) measure the same underlying ability/ies and processes (e.g., confirmation or elimination of an identity match), (2) are reliable, and (3) provide consistent performance for individuals across tests online and in the laboratory. Together these factors would influence the outcomes of data-driven analyses. Here, we asked 211 participants to perform eight tests frequently reported in the literature. We used Principal Component Analysis and Agglomerative Clustering to determine factors underpinning performance. Importantly, we examined the reliability of these tests, the relationships between them, and quantified participant consistency across tests. Our findings show that participants' performance can be split into two factors (called here confirmation and elimination of an identity match) and that participants cluster according to whether they are strong on one of the factors or equally on both. We found that the reliability of these tests is at best moderate, the correlations between them are weak, and the consistency in participant performance across tests is low. Developing reliable and valid measures of FIP and consistently scrutinising existing ones will be key for drawing meaningful conclusions from data-driven studies.
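As a rough, hypothetical illustration of the pipeline this abstract describes (not the authors' actual code), scores from several FIP tests could be standardized, reduced with Principal Component Analysis, and then grouped with agglomerative clustering; the array shapes, random seed, and cluster count below are all assumptions.

# Hedged sketch: PCA followed by agglomerative clustering of test scores.
# Shapes and names are hypothetical; this is not the study's actual code.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
scores = rng.normal(size=(211, 8))  # 211 participants x 8 FIP tests

# Standardize, then extract two components (cf. the two reported factors).
z = StandardScaler().fit_transform(scores)
components = PCA(n_components=2).fit_transform(z)

# Group participants by their component profiles.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(components)
print(np.bincount(labels))  # cluster sizes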