Daniel J. Simons’s research while affiliated with University of Illinois, Urbana-Champaign and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (27)


Estimated consensus for the subjective evidence subscale. The black points show the posterior medians (plus 95% credible interval) of the consensus, including the category thresholds. Items followed by an asterisk reflect items that have been reverse-coded and their labels have been changed for interpretability. The white marker at the bottom reflects the overall median assessment (plus 95% CI) of the subjective evidence subscale.
Estimated consensus for the methodological appropriateness subscale. The black points show the posterior medians (plus 95% credible interval) of the consensus, including the category thresholds. The white marker at the bottom reflects the overall median assessment (plus 95% CI) of the methodological appropriateness subscale.
Prior and final beliefs about the plausibility of the hypothesis. The left side of the figure shows the change in beliefs for each analysis team. Forty-five per cent of the teams considered the hypothesis more likely after having analysed the data than prior to seeing the data, 10% considered it less likely after having analysed the data, and 45% did not change their beliefs. Plausibility was measured on a 4-point Likert scale ranging from ‘strongly disagree’ to ‘strongly agree’. Points are jittered to enhance visibility. The right side of the figure shows the distribution of the Likert response options before and after having conducted the analyses. The number at the top of each data bar (in green) indicates the percentage of teams that agreed that the hypothesis was plausible, and the number at the bottom (in brown/orange) indicates the percentage that disagreed.
Reported effect sizes (beta coefficients) and subjective beliefs about the likelihood of the hypothesis. Panel (a) shows the relation between effect size and prior beliefs for the research question. Panel (b) shows the relation between effect size and final beliefs for the research question and panel (c) shows the relation between effect size and the analysis teams’ level of scepticism regarding the evidence. In (a,b), points are jittered on the x-axis to enhance visibility. The dashed line represents an effect size of 0. Histograms/density plots at the top represent the distribution of subjective beliefs and the density plots on the right represent the distribution of reported effect sizes.
Subjective evidence evaluation survey for many-analysts studies
  • Article
  • Full-text available

July 2024 · 273 Reads · 2 Citations
Many-analysts studies explore how well an empirical claim withstands plausible alternative analyses of the same dataset by multiple, independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g. effect size) provided by each analysis team. Although informative about the range of plausible effects in a dataset, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices are related to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item subjective evidence evaluation survey (SEES) to evaluate how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing richer evidence assessment with pilot data from a previous many-analysts study.


Reasons why it is challenging to determine whether failures to notice resulted from inattentional blindness
Inattentional blindness in medicine

March 2024 · 94 Reads · 5 Citations
Cognitive Research: Principles and Implications

People often fail to notice unexpected stimuli when their attention is directed elsewhere. Most studies of this “inattentional blindness” have been conducted using laboratory tasks with little connection to real-world performance. Medical case reports document examples of missed findings in radiographs and CT images, unintentionally retained guidewires following surgery, and additional conditions being overlooked after making initial diagnoses. These cases suggest that inattentional blindness might contribute to medical errors, but relatively few studies have directly examined inattentional blindness in realistic medical contexts. We review the existing literature, much of which focuses on the use of augmented reality aids or inspection of medical images. Although these studies suggest a role for inattentional blindness in errors, most of the studies do not provide clear evidence that these errors result from inattentional blindness as opposed to other mechanisms. We discuss the design, analysis, and reporting practices that can make the contributions of inattentional blindness unclear, and we describe guidelines for future research in medicine and similar contexts that could provide clearer evidence for the role of inattentional blindness.


Are Familiar Objects More Likely to Be Noticed in an Inattentional Blindness Task?

February 2024 · 70 Reads

Journal of Cognition

People often fail to notice the presence of unexpected objects when their attention is engaged elsewhere. In dichotic listening tasks, for example, people often fail to notice unexpected content in the ignored speech stream even though they occasionally do notice highly familiar stimuli like their own name (the “cocktail party” effect). Some of the first studies of inattentional blindness were designed as a visual analog of such dichotic listening studies, but relatively few inattentional blindness studies have examined how familiarity affects noticing. We conducted four preregistered inattentional blindness experiments (total N = 1700) to examine whether people are more likely to notice a familiar unexpected object than an unfamiliar one. Experiment 1 replicated evidence for greater noticing of upright schematic faces than inverted or scrambled ones. Experiments 2–4 tested whether participants from different pairs of countries would be more likely to notice their own nation’s flag or petrol company logo than those of another country. These experiments repeatedly found little or no evidence that familiarity affects noticing rates for unexpected objects. Frequently encountered and highly familiar stimuli do not appear to overcome inattentional blindness.


Subjective Evidence Evaluation Survey For Many-Analysts Studies

January 2024 · 275 Reads

Many-analysts studies explore how well an empirical claim withstands plausible alternative analyses of the same data set by multiple, independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g., effect size) provided by each analysis team. Although informative about the range of plausible effects in a data set, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices are related to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item Subjective Evidence Evaluation Survey (SEES) to evaluate how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing richer evidence assessment with pilot data from a previous many-analysts study.


Individual differences in inattentional blindness

January 2024 · 97 Reads · 1 Citation

Psychonomic Bulletin & Review

People often fail to notice unexpected objects and events when they are performing an attention-demanding task, a phenomenon known as inattentional blindness. We might expect individual differences in cognitive ability or personality to predict who will and will not notice unexpected objects given that people vary in their ability to perform attention-demanding tasks. We conducted a comprehensive literature search for empirical inattentional blindness reports and identified 38 records that included individual difference measures and met our inclusion criteria. From those, we extracted individual difference effect sizes for 31 records which included a total of 74 distinct, between-groups samples with at least one codable individual difference measure. We conducted separate meta-analyses of the relationship between noticing/missing an unexpected object and scores on each of the 14 cognitive and 19 personality measures in this dataset. We also aggregated across personality measures reflecting positive/negative affectivity or openness/absorption and cognitive measures of interference, attention breadth, and memory. Collectively, these meta-analyses provided little evidence that individual differences in ability or personality predict noticing of an unexpected object. A robustness analysis that excluded samples with extremely low numbers of people who noticed or missed produced similar results. For most measures, the number of samples and the total sample sizes were small, and larger studies are needed to examine individual differences in inattentional blindness more systematically. However, the results are consistent with the idea that noticing of unexpected objects or events differs from deliberate attentional control tasks in that it is not reliably predicted by individual differences in cognitive ability.


Similarity of an unexpected object to the attended and ignored objects affects noticing in a sustained inattentional blindness task

October 2023 · 37 Reads · 3 Citations

Attention, Perception, & Psychophysics

When focusing attention on some objects and ignoring others, people often fail to notice the presence of an additional, unexpected object (inattentional blindness). In general, people are more likely to notice when the unexpected object is similar to the attended items and dissimilar from the ignored ones. Perhaps surprisingly, current evidence suggests that this similarity effect results almost entirely from dissimilarity to the ignored items, and it remains unclear whether similarity to the attended items affects noticing. Other aspects of similarity have not been examined at all, including whether the similarity of the attended and ignored items to each other affects noticing of a distinct unexpected object. We used a sustained inattentional blindness task to examine all three aspects of similarity. Experiment 1 (n = 813) found no evidence that increasing the similarity of the attended and ignored items to each other affected noticing of an unexpected object. Experiment 2 (n = 610) provided some of the first compelling evidence that similarity to the attended items – in addition to the ignored items – affects noticing. Experiment 3 (n = 1,044) replicated that pattern and showed that noticing rates varied with the degree of similarity to the ignored shapes but not to the attended shapes, suggesting that suppression of ignored items functions differently from the enhancement of attended items.


Precision of Memory for Attended and Ignored Colors

September 2023 · 82 Reads

Collabra: Psychology

Selective attention can enhance some aspects of our visual world while filtering others from awareness. Given our limited cognitive resources, such filtering is essential when viewing complex scenes, but it also applies to simple scenes. Eitam, Yeshurun, and Hassan (2013) observed better performance for the attended color than the ignored color in a simple, two-colored object even though both colors were salient and the complexity of the display did not tax the capacity of visual memory. Our goal was to replicate this finding while addressing a potential task demand that could have contributed to the results. Specifically, participants might have misread the instructions and mistakenly reported the attended color when asked to report the ignored color first. Experiment 1 (n=67) replicated Eitam et al.’s (2013) finding while measuring memory precision. We found that people had worse memory for the ignored than the attended feature of a single, simple object. Experiment 2 (n=69) replicated the pattern while again addressing the potential task demand, although the effect was smaller. Experiment 3 (n=186) provided visual feedback to eliminate any remaining risk of response error and again replicated the original finding. Attended information was stored with greater precision than unattended information, even for a simple object.


Consensus-based guidance for conducting and reporting multi-analyst studies

November 2021 · 310 Reads · 40 Citations

eLife

Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.


A reproducible systematic map of research on the illusory truth effect

October 2021 · 200 Reads · 36 Citations

Psychonomic Bulletin & Review

People believe information more if they have encountered it before, a finding known as the illusory truth effect. But what is the evidence for the generality and pervasiveness of the illusory truth effect? Our preregistered systematic map describes the existing knowledge base and objectively assesses the quality, completeness and interpretability of the evidence provided by empirical studies in the literature. A systematic search of 16 bibliographic and grey literature databases identified 93 reports with a total of 181 eligible studies. All studies were conducted at Western universities, and most used convenience samples. Most studies used verbatim repetition of trivia statements in a single testing session with a minimal delay between exposure and test. The exposure tasks, filler tasks and truth measures varied substantially across studies, with no standardisation of materials or procedures. Systematic mapping resulted in a searchable database of illusory truth effect studies (https://osf.io/37xma/). Many reports lacked transparency, both in terms of open science practices and reporting of descriptive statistics and exclusions. Key limitations of the current literature include the need for greater diversity of materials as stimuli (e.g., political or health content), more participants from non-Western countries, studies examining effects of multiple repetitions and longer intersession intervals, and closer examination of the dependency of effects on the choice of exposure task and truth measure. These gaps could be investigated using carefully designed multi-lab studies. With a lack of external replications, preregistrations, data and code, verifying replicability and robustness is only possible for a small number of studies.


Figure 2 Effect of repetition across interval, cell means (black points, line) plotted against participant means (top row) and stimulus means (bottom row).
Figure 4 Distribution of participants showing an overall effect of the illusory truth effect.
Figure 5 Illusory truth effect by category judgment accuracy.
Planned Comparisons of the Simple Effect of Repetition at Each Interval, with Holm-Bonferroni Correction.
The Trajectory of Truth: A Longitudinal Study of the Illusory Truth Effect

June 2021 · 262 Reads · 39 Citations

Journal of Cognition

Repeated statements are rated as subjectively truer than comparable new statements, even though repetition alone provides no new, probative information (the illusory truth effect). Contrary to some theoretical predictions, the illusory truth effect seems to be similar in magnitude for repetitions occurring after minutes or weeks. This Registered Report describes a longitudinal investigation of the illusory truth effect (n = 608, n = 567 analysed) in which we systematically manipulated intersession interval (immediately, one day, one week, and one month) in order to test whether the illusory truth effect is immune to time. Both our hypotheses were supported: We observed an illusory truth effect at all four intervals (overall effect: χ²(1) = 169.91; M_repeated = 4.52, M_new = 4.14; H1), with the effect diminishing as delay increased (H2). False information repeated over short timescales might have a greater effect on truth judgements than repetitions over longer timescales. Researchers should consider the implications of the choice of intersession interval when designing future illusory truth effect research.


Citations (19)


... As emphasized in the discussion of our estimates of analytical heterogeneity, the question of how to handle "outlier analysis paths" plays a crucial role in estimating meta-effects and heterogeneity in multianalyst settings. It is essential to have methods and procedures in place to ensure quality control of the analysis paths chosen by analysts and to determine whether or not these paths qualify as legitimate to address the hypothesis in question (80). We recommend that multianalyst studies report estimates of the extent of analytical heterogeneity in terms of H (or I²) obtained via a random-effects meta-analysis and in terms of the ratio between the SD of effect size estimates across analysts and the mean SE as introduced by Huntington-Klein et al. (63). ...

Reference:

Heterogeneity in effect size estimates
Subjective evidence evaluation survey for many-analysts studies
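The snippet above recommends reporting analytical heterogeneity as H (or I²) from a meta-analysis of the teams' estimates, plus the ratio of the SD of effect sizes to the mean SE. A minimal sketch of how those summaries can be computed from each team's reported coefficient and standard error (the function name and example numbers are illustrative; H and I² are derived here from a fixed-effect Cochran's Q, the conventional basis for both statistics):

```python
import math

def heterogeneity_summary(estimates, std_errors):
    """Heterogeneity summary for a multi-analyst set of effect sizes.

    Computes Cochran's Q with inverse-variance weights, then
    H = sqrt(Q / (k - 1)) and I² = max(0, (Q - (k - 1)) / Q), plus the
    ratio of the SD of estimates to the mean SE (Huntington-Klein et al.).
    """
    k = len(estimates)
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    q = sum(w * (b - pooled) ** 2 for w, b in zip(weights, estimates))

    h = math.sqrt(q / (k - 1))
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0

    # Descriptive alternative: spread of estimates relative to typical SE.
    mean = sum(estimates) / k
    sd = math.sqrt(sum((b - mean) ** 2 for b in estimates) / (k - 1))
    sd_to_se = sd / (sum(std_errors) / k)

    return {"Q": q, "H": h, "I2": i2, "sd_to_mean_se": sd_to_se}
```

For example, four hypothetical teams reporting betas of 0.10, 0.25, −0.05 and 0.30 with SEs of 0.08, 0.10, 0.07 and 0.12 give H ≈ 1.8 and I² ≈ 0.69, attributing most of the observed spread to analytical heterogeneity rather than sampling error.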

... Collectively, phenomena related to LBFTS can be a key source of medical error [11]. LBFTS-related phenomena have been studied extensively in the aviation industry [12], in the military [13], and in road safety [14]. IB has grown in interest because these industries believe that visual failure rather than mechanical failure can dictate the outcome of an event. ...

Inattentional blindness in medicine

Cognitive Research: Principles and Implications

... In fact, noticing under conditions of inattentional blindness might not be an ability at all. Measures of cognitive ability reliably predict individual differences in the sorts of attentional control mechanisms involved in divided or selective attention task performance, but those same measures do not seem to predict noticing of unexpected objects in inattentional blindness tasks (Simons et al., 2024). For example, measures of attentional control and working memory such as OSPAN predict performance on a wide range of attentional control tasks, including the attentional blink (Willems & Martens, 2016), negative priming (Conway et al., 1999), and attention capture (Unsworth et al., 2004). ...

Individual differences in inattentional blindness
  • Citing Article
  • January 2024

Psychonomic Bulletin & Review

... For example, when attending to white or black objects in a sustained inattentional blindness task, about 30% of participants missed an unexpected red cross (Most et al., 2001). Unexpected objects that are more similar to the attended items and more distinct from the ignored ones tend to be noticed more frequently (e.g., Ding et al., 2023; Goldstein & Beck, 2016; Most et al., 2001, 2005; Simons & Chabris, 1999; Wood & Simons, 2017). For instance, when attending to black objects and ignoring white ones, an unexpected dark gray object, closer in luminance to the attended black objects, was noticed more than an unexpected light gray object, which was more similar to the ignored white objects (Most et al., 2001). ...

Similarity of an unexpected object to the attended and ignored objects affects noticing in a sustained inattentional blindness task
  • Citing Article
  • October 2023

Attention, Perception, & Psychophysics

... The rapidly growing field of artificial intelligence is unfortunately not immune to irreproducibility issues [25]. Multi-analyst approaches are known to strengthen the robustness of results and conclusions obtained from analysis of datasets [26] and to show that analytical flexibility can have substantial effects on scientific conclusions [20]. Thus, results obtained by the three teams, featuring a number of common aspects but also differences, allow us to formulate more reliable results than a single analysis would. ...

Consensus-based guidance for conducting and reporting multi-analyst studies

eLife

... Building on this assumption, we test a straightforward implication. It is well established that repeating information increases the information's perceived truth (Unkelbach et al., 2019; Henderson et al., 2022). Consequently, a source that states repeated information should appear more credible than a source that states unrepeated information. ...

A reproducible systematic map of research on the illusory truth effect

Psychonomic Bulletin & Review

... Even though people figure out correct answers when asked to pay more attention to previously exposed misinformation, their trust in it still increases in the absence of such a notice (Fazio et al., 2015). The reason is that the cognitive fluency caused by familiarity can further reinforce the perceived truthfulness of the message, as human beings tend to mistakenly attribute processing fluency to reliable information rather than to familiarity triggered by repeated exposure (Fragale & Heath, 2004; Henderson et al., 2021). Individuals, as a result, prefer not to change previous knowledge and thus form cognitive bias (Kahneman, 2011). ...

The Trajectory of Truth: A Longitudinal Study of the Illusory Truth Effect

Journal of Cognition

... This selection process, based on intuition, experience, and assumptions, is used by many when designing interventions to change health behaviors, including clinicians (28), and is known as the "It Seemed Like A Good Idea At The Time" (ISLAGIATT) principle (18). This is not to say this method is ineffective, but it certainly is not transparent or replicable, as advocated for in research (29). Together, this could indicate a need for researchers to use different methods of selecting strategies for enhancing adherence depending on the complexity of the behavior in question. ...

A consensus-based transparency checklist

Nature Human Behaviour