Figure 4 - available via license: Creative Commons Attribution 4.0 International
Groups' performance and multi-level model results. a. Mean hit rates per group and Target Recognition Phase. Multi-level model results for b. Group 1 vs. Group 3, and c. Group 2 vs. Group 3. Numbers below each model name represent (in order) the Bayesian information criterion (BIC), the Bayes Factor (BF), and the BF's base-10 logarithm (log10 BF). Black font indicates models providing better evidence of explaining the variance among participants than the model at the level below. The highest model shown in black is carried forward for further analyses.
Source publication
Studies of facial identity processing typically assess perception (via matching) and/or memory (via recognition), with experimental designs differing with respect to one important aspect: Target Prevalence. Some designs include “target absent” (TA) among “target present” (TP) trials. In visual search tasks, TA trials shift an observer’s decisional...
Contexts in source publication
Context 1
... conducted separate model comparisons between the cohorts who completed modified versions of the MMT (Groups 1 and 2) and our original MMT cohort (Group 3) (for details, see Statistical Analyses). Figure 4a displays the groups' mean hit rates for Target Recognition Phases 1 and 2 (across which Target-to-Match Similarity decreased); Figure 4b displays the results of the multi-level models detailed below. Specifically, here we sought to determine whether the effect of Context on hit rate depends on the presence of TA trials in Phase 1 or Phase 2. ...
Context 2
... begin with, we considered the scenario where Phase 1 was identical between groups, changing only during Phase 2 for Group 1. Having confirmed a general effect of target similarity (Figure 3), we treated this as our zero-order model and compared it against models including a main effect of Context (Figure 4b; Model 1a) and a Context by Similarity interaction (Figure 4b; Model 1b). While Model 1a provides no better explanation than Model 0, we find decisive evidence favoring Model 1b over Model 0. Overall, this suggests that the interaction between Target Prevalence and Target-to-Match Similarity best explains observers' pattern of hit rates. ...
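The model-comparison logic above (and the BIC, BF, and log10 BF triplets in Figure 4) can be sketched in a few lines. This is a minimal illustration using the standard BIC-based approximation to the Bayes Factor, BF10 ≈ exp((BIC0 − BIC1) / 2); the BIC values below are hypothetical, not taken from the study.

```python
import math

def bic_to_bayes_factor(bic_model0: float, bic_model1: float):
    """Approximate the Bayes Factor favoring model 1 over model 0
    from their BIC values, via BF10 ~= exp((BIC0 - BIC1) / 2).
    Returns the BF and its base-10 logarithm."""
    bf10 = math.exp((bic_model0 - bic_model1) / 2.0)
    return bf10, math.log10(bf10)

# Hypothetical BICs: a zero-order model vs. an interaction model
# that fits better (lower BIC). A BIC drop of 20 yields
# BF10 = e^10 ~ 22026, log10(BF) ~ 4.34 -- "decisive" evidence
# on the conventional Jeffreys scale (log10 BF > 2).
bf, log_bf = bic_to_bayes_factor(1250.0, 1230.0)
```

On this convention, the black-font models in Figure 4 would be those whose BF against the lower-level model crosses the evidential threshold, with the highest such model retained for further analyses.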
Citations
There is growing interest in how data-driven approaches can help understand individual differences in face identity processing (FIP). However, researchers employ various FIP tests interchangeably, and it is unclear whether these tests 1) measure the same underlying ability/ies and processes (e.g., confirmation of identity match or elimination of identity match), 2) are reliable, and 3) provide consistent performance for individuals across tests online and in the laboratory. Together these factors would influence the outcomes of data-driven analyses. Here, we asked 211 participants to perform eight tests frequently reported in the literature. We used Principal Component Analysis and Agglomerative Clustering to determine factors underpinning performance. Importantly, we examined the reliability of these tests and the relationships between them, and quantified participant consistency across tests. Our findings show that participants' performance can be split into two factors (called here confirmation and elimination of an identity match) and that participants cluster according to whether they are strong on one of the factors or equally on both. We found that the reliability of these tests is at best moderate, the correlations between them are weak, and the consistency in participant performance across tests is low. Developing reliable and valid measures of FIP and consistently scrutinising existing ones will be key for drawing meaningful conclusions from data-driven studies.