Erin Michelle Buchanan’s research while affiliated with Harrisburg University of Science and Technology and other places


Publications (63)


Measuring the Semantic Priming Effect Across Many Languages
  • Preprint

March 2025 · 169 Reads

Erin Michelle Buchanan · [...] · Jordan Suchow

Semantic priming has been studied for nearly 50 years across various experimental manipulations and theoretical frameworks. Although previous studies provide insight into the cognitive underpinnings of semantic representations, they have suffered from small sample sizes and a lack of linguistic and cultural diversity. In this Registered Report, we measured the size and the variability of the semantic priming effect across 19 languages (N = 25,163 participants analyzed) by creating the largest available database of semantic priming values based on an adaptive sampling procedure. We found evidence for semantic priming in terms of differences in response latencies between related word-pair conditions and unrelated word-pair conditions. Model comparisons showed that inclusion of a random intercept for language improved model fit, providing support for variability in semantic priming across languages. This study highlights the robustness and variability of semantic priming across languages and provides a rich, linguistically diverse dataset for further analysis.
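The model comparison described above can be illustrated with a minimal R sketch (not the authors' analysis script), assuming a hypothetical data frame priming with columns rt (response latency), relatedness (related vs. unrelated word pairs), participant, and language:

    # Fit the priming model with and without a by-language random intercept
    library(lme4)

    m_base <- lmer(rt ~ relatedness + (1 | participant),
                   data = priming, REML = FALSE)
    m_lang <- lmer(rt ~ relatedness + (1 | participant) + (1 | language),
                   data = priming, REML = FALSE)

    # A likelihood-ratio test of the nested fits mirrors the reported comparison:
    # better fit for m_lang indicates cross-language variability in priming
    anova(m_base, m_lang)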


[Figure previews: Fig. 1, overview of the tool development process (SK = S. Kerschbaumer; UST = U. S. Tran; EM = E. McGorray; DS = D. Sewell; EMB = E. M. Buchanan); Fig. 2, PRISMA flowchart of the literature review; Fig. 6, overview of the results; Fig. 7, VALID checklist steps; table of literature search keywords and combinations]

VALID: A Checklist-Based Approach for Improving Validity in Psychological Research

February 2025 · 403 Reads · 1 Citation

Advances in Methods and Practices in Psychological Science

In response to the replication and confidence crisis across various empirical disciplines, ensuring the validity of research has gained attention. High validity is crucial for obtaining replicable and robust study outcomes when both exploring new questions and replicating previous findings. In this study, we aimed to address this issue by developing a comprehensive checklist to assist researchers in enhancing and monitoring the validity of their research. After systematically analyzing previous findings on validity, a comprehensive list of potential checklist items was compiled. Over the course of three rounds, more than 30 interdisciplinary and psychological-science experts participated in a Delphi study. Experts rated items on their importance and were given the opportunity to propose novel items as well as improve existing ones. This process resulted in a final set of 91 items, organized according to common stages of a research project. The VALID checklist is accessible online (https://www.validchecklist.com/) and provides researchers with an adaptable, versatile tool to monitor and improve the validity of their research, tailored to their specific needs. By focusing on adaptiveness during its development, VALID encompasses 331 unique checklist versions, making it a one-stop solution suitable for a wide range of projects, designs, and requirements.


Mapping and Increasing Error Correction Behaviour in a Culturally Diverse Sample

January 2025 · 291 Reads

Intuition often guides our thinking effectively, but it can also lead to consequential reasoning errors, underpinning poor decisions and biased judgments. Little is known about how people globally self-correct such intuitive reasoning errors and what enhances their correction. Defying prevailing models of reasoning, recent research suggests that people spontaneously correct only a few errors during deliberation; however, enhancing error monitoring and motivating further effort should increase error correction. Here, we study whether these mechanisms apply to reasoning across individualistic and collectivistic cultures (expected N = 33,000 participants from 67 regions). Participants will solve problems that elicit incorrect intuitions twice: first intuitively and then reflectively, allowing them to correct initial errors, in a 2 (feedback: absent vs present) × 2 (answer justification: absent vs present) between-participants design. The study will shed more light on the nature, generalisability, and promotion of corrective behaviour, crucial for understanding and improving reasoning worldwide.
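One plausible way to model the planned 2 × 2 design is sketched below in R; the data frame reasoning and the columns corrected (whether an initial intuitive error was fixed on the reflective attempt), feedback, justification, and region are assumed names, and this is an illustration rather than the registered analysis plan:

    # Mixed logistic regression for error correction in the
    # 2 (feedback) x 2 (answer justification) between-participants design
    library(lme4)

    m_correction <- glmer(corrected ~ feedback * justification + (1 | region),
                          data = reasoning, family = binomial)
    summary(m_correction)  # main effects and interaction of the two manipulations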


[Figure previews: Fig. 1, directed acyclic graph of the effect of a factor (X) on an outcome (Y); Fig. 3, donut chart computed from d = 0.50 using visualize_effects(); Fig. 4, sensitivity plot for the effect of child neglect on adult internalising problems; Fig. S2, Shiny app interface with navigation tabs; table of mean outcome values reported in Table 1 of Kisely et al. (2018)]

How large must an associational mean difference be to support a causal effect?

December 2024 · 8 Reads

Methodology: European Journal of Research Methods for the Behavioral and Social Sciences

An observational study might support a causal claim if the association found cannot be explained by bias due to unconsidered confounders. This bias depends on how strongly the common predisposition, a summary of unconsidered confounders, is related to the factor and the outcome. For a positive effect to be supported, the product of these two relations must be smaller than the left boundary of the confidence interval for, e.g., a standardised mean difference (d). We suggest means to derive heuristics for how large this product must be to serve as a confirmatory threshold. We also provide non-technical, visual means to express researchers’ assumptions on the two relations to assess whether a finding on d is explainable by omitted confounders. The ViSe tool, available as an R package and Shiny application, allows users to choose between various effect sizes and apply it to their own data or published summary results.
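The decision rule described in the abstract can be written out directly; the numeric values below are hypothetical, and the ViSe package itself additionally handles conversions between effect-size metrics:

    # Is the observed association larger than the bias a common predisposition could produce?
    d_lower   <- 0.20   # left boundary of the confidence interval for d (hypothetical)
    r_factor  <- 0.30   # assumed relation of the predisposition to the factor
    r_outcome <- 0.40   # assumed relation of the predisposition to the outcome

    bias_bound <- r_factor * r_outcome   # bias explainable by unconsidered confounders

    # The positive effect is supported only if the plausible bias stays below the boundary
    bias_bound < d_lower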


Predicting the replicability of social and behavioural science claims in COVID-19 preprints

December 2024 · 225 Reads

Nature Human Behaviour

Replications are important for assessing the reliability of published findings. However, they are costly, and it is infeasible to replicate everything. Accurate, fast, lower-cost alternatives such as eliciting predictions could accelerate assessment for rapid policy implementation in a crisis and help guide a more efficient allocation of scarce replication resources. We elicited judgements from participants on 100 claims from preprints about an emerging area of research (COVID-19 pandemic) using an interactive structured elicitation protocol, and we conducted 29 new high-powered replications. After interacting with their peers, participant groups with lower task expertise (‘beginners’) updated their estimates and confidence in their judgements significantly more than groups with greater task expertise (‘experienced’). For experienced individuals, the average accuracy was 0.57 (95% CI: [0.53, 0.61]) after interaction, and they correctly classified 61% of claims; beginners’ average accuracy was 0.58 (95% CI: [0.54, 0.62]), correctly classifying 69% of claims. The difference in accuracy between groups was not statistically significant and their judgements on the full set of claims were correlated (r(98) = 0.48, P < 0.001). These results suggest that both beginners and more-experienced participants using a structured process have some ability to make better-than-chance predictions about the reliability of ‘fast science’ under conditions of high uncertainty. However, given the importance of such assessments for making evidence-based critical decisions in a crisis, more research is required to understand who the right experts in forecasting replicability are and how their judgements ought to be elicited.


Visualizemi: Visualization, Effect Size, and Replication of Measurement Invariance for Registered Reports

October 2024 · 1 Read

Latent variable modeling as a lens for psychometric theory is a popular tool for social scientists to examine measurement of constructs. Journals, such as Assessment, regularly publish articles supporting measures of latent constructs wherein a measurement model is established. Confirmatory factor analysis can be used to investigate the replicability and generalizability of the measurement model in new samples, while multigroup confirmatory factor analysis is used to examine the measurement model across groups within samples. With the rise of the replication crisis and “psychology’s renaissance,” interest in divergence in measurement has increased, often focused on small parameter differences within the latent model. This article presents visualizemi, an R package that provides functionality to calculate multigroup models, partial invariance, visualizations for (non)-invariance, effect sizes for models and parameters, and potential replication rates compared with random models. Readers will learn how to interpret the impact and size of the proposed non-invariance in models with a focus on potential replication and how to plan for registered reports.
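The multigroup comparison that visualizemi builds on can be sketched generically in lavaan (this is not the visualizemi API); the one-factor model, indicator names x1-x4, data frame dat, and grouping variable group are placeholders:

    # Configural vs. metric (equal loadings) multigroup CFA
    library(lavaan)

    model <- 'f =~ x1 + x2 + x3 + x4'

    fit_configural <- cfa(model, data = dat, group = "group")
    fit_metric     <- cfa(model, data = dat, group = "group",
                          group.equal = "loadings")

    # Nested model comparison; partial invariance, effect sizes, and plots of
    # the divergent parameters are the layer visualizemi adds on top
    anova(fit_configural, fit_metric)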


Registered Replication Report: A Large Multilab Cross-Cultural Conceptual Replication of Turri et al. (2015)

October 2024 · 284 Reads · 4 Citations

Advances in Methods and Practices in Psychological Science

According to the justified true belief (JTB) account of knowledge, people can truly know something only if they have a belief that is both justified and true (i.e., knowledge is JTB). This account was challenged by Gettier, who argued that JTB does not explain knowledge attributions in certain situations, later called “Gettier-type cases,” wherein protagonists are justified in believing something to be true, but their belief was correct only because of luck. Laypeople may not attribute knowledge to protagonists with justified but only luckily true beliefs. Although some research has found evidence for these so-called Gettier intuitions, Turri et al. found no evidence that participants attributed knowledge in a counterfeit-object Gettier-type case differently than in a matched case of JTB. In a large-scale, cross-cultural conceptual replication of Turri and colleagues’ Experiment 1 (N = 4,724) using a within-participants design and three vignettes across 19 geopolitical regions, we did find evidence for Gettier intuitions; participants were 1.86 times more likely to attribute knowledge to protagonists in standard cases of JTB than to protagonists in Gettier-type cases. These results suggest that Gettier intuitions may be detectable across different scenarios and cultural contexts. However, the size of the Gettier intuition effect did vary by vignette, and the Turri et al. vignette produced the smallest effect, which was similar in size to that observed in the original study. Differences across vignettes suggest that epistemic intuitions may also depend on contextual factors unrelated to the criteria of knowledge, such as the characteristics of the protagonist being evaluated.
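An odds ratio of this kind can be obtained from a mixed logistic model of knowledge attributions; the sketch below uses assumed names (attributions, know, case, vignette, region, participant) and is not the registered analysis script:

    # Within-participants knowledge attributions across vignettes and regions
    library(lme4)

    m_know <- glmer(know ~ case + (1 | participant) + (1 | region) + (1 | vignette),
                    data = attributions, family = binomial)

    # Exponentiating the case coefficient gives the odds of attributing knowledge
    # in standard JTB cases relative to Gettier-type cases (reported as 1.86)
    exp(fixef(m_know))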


The Advantage of Big Team Science: Lessons Learned from Cognitive Science

October 2024 · 129 Reads · 2 Citations

The replication crisis in psychology and related sciences contributed to the adoption of large-scale research initiatives known as Big Team Science (BTS). BTS has made significant advances in addressing issues of replication, statistical power, and diversity through the use of larger samples and more representative cross-cultural data. However, while these collaborations hold great potential, they also introduce unique challenges related to their scale. Drawing on experiences from successful BTS projects, we identified and outlined key strategies for overcoming diversity, volunteering, and capacity challenges. We emphasize the need for the implementation of strong organizational practices and the distribution of responsibility to prevent common pitfalls. More fundamentally, BTS requires a shift in mindset toward prioritizing collaborative effort, diversity, transparency, and inclusivity. Ultimately, we call for reflection on the strengths and limitations of BTS to enhance the quality, generalizability, and impact of research across disciplines.


Citations (21)


... For instance, Machery et al. (2017b) define 'Gettier intuition' as an overall tendency of study participants to deny knowledge in Gettier cases without referring to a tendency to attribute knowledge in other relevant cases (e.g., CMCs). Hall et al. (2024) use this term as relating to both Gettier and control cases: Gettier intuitions are obtained if subjects are significantly more likely to deny knowledge in the Gettier case than in the control case (this allows them to distinguish between 'small' and 'large' Gettier effects). A similar approach was adopted in studies by Nagel et al. (2013), Colaço et al. (2014), and Ziółkowski (2016, 2021), who used the term 'Gettierization effect.' ...

Reference:

Depressurizing Gettier
Registered Replication Report: A Large Multilab Cross-Cultural Conceptual Replication of Turri et al. (2015)

Advances in Methods and Practices in Psychological Science

... Duration of singing/speaking. One methodological challenge is that we cannot guarantee that all participants will speak for at least 20 s in the group conversation condition as Ozaki [...] Hebrew). Since the number of acoustic units is the limiting factor for our three proposed features, we simulated the effects of using different numbers of acoustic units (ranging from 2-50 syllables/notes) from Ozaki et al.'s data in order to optimise the amount of annotation needed for reliable results (Fig. 3). Simulation analysis of Ozaki et al.'s data (Fig. 2) suggests that effect size estimates from fewer than 10 acoustic units each of singing/speaking are not reliable, but that using more than 30 acoustic units each does not substantially increase reliability. ...

The Advantage of Big Team Science: Lessons Learned from Cognitive Science
  • Citing Preprint
  • October 2024

... However, aspects such as data quality, merging data from different sources, creating reproducible processes, and data provenance are equally important. Regarding preprocessing of data, many fields already offer established standards (e.g., for reaction-time data, see Loenneker et al., 2024). ...

We Don’t Know What You Did Last Summer. On the Importance of Transparent Reporting of Reaction Time Data Pre-processing

Cortex

... With these considerations in mind, we also hope that the MFTE will not only make a significant contribution to multivariable corpus linguistics research, but also stimulate ongoing methodological discussions on the transparency, validity, and reliability of the tools and methods used in corpus linguistics research. Ultimately, we hope that, in the near future, making research materials, data, and code available alongside linguistics publications will no longer be the exception (Wieling et al. 2018; Bochynska et al. 2023), but the norm. ...

Reproducible research practices and transparency across linguistics
  • Citing Article
  • November 2023

Glossa Psycholinguistics

... Authors must set aside a proportion of their research projects (in terms of time, money, and resources) to Big Team Science projects and international collaborations across multiple countries. Examples of this are plenty in psychology, including social psychology (see Bago et al., 2022; Klein et al., 2018; Moshontz et al., 2018; Pownall et al., 2021; van Bavel et al., 2022), cognitive psychology (Chen et al., 2023), linguistics (Coretta et al., 2022) and economics (Delios et al., 2022; Tierney et al., 2020, 2021). Although such studies are very time- and resource-intensive, depending on the study characteristics and the role of the author, this is one benefit to ensure that our findings are universal and generalizable. ...

Investigating Object Orientation Effects Across 18 Languages

... Various factors contribute to students' poor performance in mathematics (Egara & Mosimege, 2023; Okeke et al., 2025; Osakwe et al., 2023), among which mathematics anxiety emerges as a significant psychological barrier (Mosimege et al., 2024; Sarfo et al., 2020, 2022; Sule, 2017; Terry et al., 2023). Mathematics anxiety, characterized by feelings of panic, helplessness, and tension during mathematical activities, hampers students' learning experiences and academic achievement (Alam & Halder, 2018). ...

Data from an International Multi-Centre Study of Statistics and Mathematics Anxieties and Related Variables in University Students (the SMARVUS Dataset)

Journal of Open Psychology Data

... In one seminal study (Kahan, 2013), participants who scored high on the Cognitive Reflection Test, a measure of the ability to suppress intuitive but incorrect answers in favour of more deliberate reasoning (Frederick, 2005), were also more prone to motivated reasoning when the information they received conflicted with their beliefs about climate change. A similar pattern of higher motivated reasoning on different political issues (e.g., effects of gun control, CO2 emissions, immigration) was found for participants high in numerical ability (Nurse & Grant, 2020; Sumner et al., 2023). ...

The role of personality, authoritarianism and cognition in the United Kingdom’s 2016 referendum on European Union membership

... Although our statistical methods differ-using more conservative approaches (e.g., generalized linear mixed-effects models and LASSO regression) instead of ANOVA-this reduces the likelihood of false positives, as fewer statistical tests are conducted, making any replicated findings more robust and unified while maintaining the same intent of the original analyses. Because we were provided the original audio stimuli from Sulpizio and McQueen (2012), our replication also serves as an opportunity to reanalyze the acoustics and confirm their results, i.e., increase researcher degrees of freedom (Coretta et al., 2023). We make materials, data, and code available as part of our replication in the interest of promoting open and transparent science. ...

Multidimensional Signals and Analytic Flexibility: Estimating Degrees of Freedom in Human-Speech Analyses

Advances in Methods and Practices in Psychological Science

... Simple surveys and prediction markets provide similar estimates, but survey predictions tend to be less extreme and, therefore, perform less well, when predictions are reasonably good to begin with. What is more, even laypeople (those without a PhD or other equivalent training in research methods) have an above-chance prediction accuracy [17,18]. ...

Predicting the replicability of social and behavioural science claims from the COVID-19 Preprint Replication Project with structured expert and novice groups
  • Citing Preprint
  • February 2023