
The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings

Abstract

What can be done to combat political misinformation? One prominent intervention involves attaching warnings to headlines of news stories that have been disputed by third-party fact-checkers. Here we demonstrate a hitherto unappreciated potential consequence of such a warning: an implied truth effect, whereby false headlines that fail to get tagged are considered validated and thus are seen as more accurate. With a formal model, we demonstrate that Bayesian belief updating can lead to such an implied truth effect. In Study 1 (n = 5,271 MTurkers), we find that although warnings do lead to a modest reduction in perceived accuracy of false headlines relative to a control condition (particularly for politically concordant headlines), we also observed the hypothesized implied truth effect: the presence of warnings caused untagged headlines to be seen as more accurate than in the control. In Study 2 (n = 1,568 MTurkers), we find the same effects in the context of decisions about which headlines to consider sharing on social media. We also find that attaching verifications to some true headlines—which removes the ambiguity about whether untagged headlines have not been checked or have been verified—eliminates, and in fact slightly reverses, the implied truth effect. Together these results contest theories of motivated reasoning while identifying a potential challenge for the policy of using warning tags to fight misinformation—a challenge that is particularly concerning given that it is much easier to produce misinformation than it is to debunk it. This paper was accepted by Elke Weber, judgment and decision making.
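The abstract's claim that Bayesian belief updating can produce an implied truth effect can be made concrete with a short numerical sketch. The Python snippet below is a minimal illustration under assumed parameter values (the reader's prior, the fraction of false headlines that get tagged, and the assumption that true headlines are never tagged are all illustrative choices, not figures from the paper): conditioning on the absence of a warning lowers the probability that an untagged headline is false.

```python
# Minimal sketch of how Bayesian updating can produce an "implied truth" effect.
# All parameter values are illustrative assumptions, not taken from the paper.

prior_false = 0.5          # reader's prior that a given headline is false
p_tag_given_false = 0.4    # assumed fraction of false headlines that get a warning tag
p_tag_given_true = 0.0     # assume true headlines are never tagged

# Probability that a headline carries no warning tag
p_untagged = (1 - p_tag_given_false) * prior_false + (1 - p_tag_given_true) * (1 - prior_false)

# Posterior probability that an untagged headline is false (Bayes' rule)
posterior_false_untagged = (1 - p_tag_given_false) * prior_false / p_untagged

print(f"P(false) before considering tags: {prior_false:.2f}")
print(f"P(false | untagged):              {posterior_false_untagged:.3f}")
# With these numbers the probability drops from 0.50 to about 0.38: once readers
# know that some headlines get warnings, untagged headlines look more accurate,
# which is the implied truth effect described above.
```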
... In false news reports, misinformation similarly triggers emotional responses even in contexts where it should not. When misinformation is labelled as potentially inaccurate and contested (e.g., Facebook stating on a page that independent fact-checkers have said the information is false, or Twitter warning users that claims about election fraud are disputed), readers often still believe it and share the misleading claims [115,162–166]. In fact, even when readers notice the warnings, they often ignore them, so long as the warning does not interrupt the readers' actions [163,167,168]. ...
... Magical effects are incredibly robust: they work even though audiences know that they are being tricked. Similarly, people often accept and disperse misinformation despite warnings that the facts are disputed and potentially false [115,162–166]. Thus, increasing the awareness of scientific facts has proven ineffective in countering the flow of misinformation [175,176]. ...
Article
Full-text available
When we believe misinformation, we have succumbed to an illusion: our perception or interpretation of the world does not match reality. We often trust misinformation for reasons that are unrelated to an objective, critical interpretation of the available data: Key facts go unnoticed or unreported. Overwhelming information prevents the formulation of alternative explanations. Statements become more believable every time they are repeated. Events are reframed or given “spin” to mislead audiences. In magic shows, illusionists apply similar techniques to convince spectators that false and even seemingly impossible events have happened. Yet, many magicians are “honest liars,” asking audiences to suspend their disbelief only during the performance, for the sole purpose of entertainment. Magic misdirection has been studied in the lab for over a century. Psychological research has sought to understand magic from a scientific perspective and to apply the tools of magic to the understanding of cognitive and perceptual processes. More recently, neuroscientific investigations have also explored the relationship between magic illusions and their underlying brain mechanisms. We propose that the insights gained from such studies can be applied to understanding the prevalence and success of misinformation. Here, we review some of the common factors in how people experience magic during a performance and are subject to misinformation in their daily lives. Considering these factors will be important in reducing misinformation and encouraging critical thinking in society.
... Perceived accuracy of COVID-19 misinformation was measured by asking the respondents to rate their level of perceived accuracy (1 [not at all accurate] to 5 [extremely accurate]) for the 5 claims in the news headlines. The scale is based on previous research on the perceived accuracy of news/misinformation headlines [54,55]. The participants were asked to rate the accuracy of the claims that (1) coconut is effective in reducing COVID-19 symptoms; (2) the pH miracle lifestyle healing program of alkaline diet, exercise, and healing foods can cure COVID-19; (3) COVID vaccines are dangerous and ineffective against the Omicron variant; (4) mRNA COVID-19 vaccinations cause magnetism by introducing graphene oxide into the blood; and (5) there is no evidence of the COVID-19 virus and no one has isolated and sequenced SARS-CoV-2 from any patient sample. ...
... However, further probing suggested that the effects of personality traits on sharing intentions are driven mainly by social media news users with low rather than high cognitive ability. These results are in line with recent findings where cognitive ability was found to be positively associated with better truth discernment [54,55], weaker belief in false content [17,18,66], and reduced sharing intention of misinformation [56]. In addition, a higher cognitive ability allows individuals to make better risk assessments and filter what information is relevant when placing their trust [67]. ...
Article
Full-text available
Background: Social media is widely used as a source of news and information regarding COVID-19. However, the abundance of misinformation on social media platforms has raised concerns regarding the spreading infodemic. Accordingly, many have questioned the utility and impact of social media news use on users' engagement with (mis)information. Objective: This study offers a conceptual framework for how social media news use influences COVID-19 misinformation engagement. More specifically, we examine how news consumption on social media leads to COVID-19 misinformation sharing by inducing belief in such misinformation. We further explore whether the effects of social media news use on COVID-19 misinformation engagement depend on individual differences in cognition and personality traits. Methods: We use data from an online survey panel administered by a survey agency (Qualtrics) in Singapore. The survey was conducted in March 2022, and 500 respondents answered the survey. All participants were above 21 years of age and provided consent before taking part in the study. We use linear regression, mediation, and moderated mediation analyses to explore the proposed relationships between social media news use, cognitive ability, personality traits, and COVID-19 misinformation belief and sharing intentions. Results: The results suggest that those who frequently use social media for news consumption are more likely to believe COVID-19 misinformation and share it on social media. Further probing the mechanism suggests that social media news use translates into sharing intent via the perceived accuracy of misinformation. Simply put, social media news users share COVID-19 misinformation because they believe it to be accurate. We also find that those with high levels of extraversion, compared with those with low levels, are more likely to perceive the misinformation to be accurate and to share it. Those with high (rather than low) levels of neuroticism and openness are also more likely to perceive the misinformation to be accurate. Finally, personality traits do not significantly influence misinformation sharing among users with higher cognitive ability; misinformation sharing across personality traits is driven largely by users with low cognitive ability. Conclusions: Reliance on social media platforms for news consumption during the COVID-19 pandemic has amplified, with dire consequences for misinformation sharing. This study shows that increased social media news consumption is associated with believing and sharing COVID-19 misinformation, with users of low cognitive ability being the most vulnerable. We offer recommendations to newsmakers, social media moderators, and policymakers toward limiting the propagation of COVID-19 misinformation and safeguarding citizens.
... Literature review: belief in fake news depends on message characteristics and individual factors (Pennycook and Rand, 2020). Fake news is defined as information that mimics the output of the news media in form, but not in organizational process or intent; a subgenre of the broader category of misinformation, that is, of incorrect information about the state of the world. ...
... articles containing disputed content) as a potential remedy. Relying on dual-process theory, Kahneman and Egan (2011) investigated how people's beliefs in disputed news articles are affected by interventions focusing on either System 1 (automatic evaluation triggered by a stop-sign icon) or System 2 (deliberate evaluation triggered by a text warning). Pennycook, Bear, Collins, and Rand (2020) explored the psychological effects of attaching warnings or ratings to the article source. In a similar vein, other work examined three types of ratings, namely: expert ratings (an expert judges the source), user source ratings (users judge the source), and user article ratings (users judge individual articles). The authors found that expert ratings were the most ...
Article
Full-text available
So far, fake news has been mostly associated with fabricated content that intends to manipulate or shape opinions. In this manuscript, we aim to establish that the perception of information as fake news is influenced by not only fabricated content but also by the rhetorical device used (i.e., how news authors phrase the message). Based on argumentation theory, we advance that fallacies – a subset of well-known deceptive rhetorical devices – share a conceptual overlap with fake news and are therefore suitable for shedding light on the issue’s grey areas. In a first two-by-two, between-subject, best-worst scaling experiment (case 1), we empirically test whether fallacies are related to the perception of information as fake news and to what extent a reader can identify them. In a second two-by-two experiment, we presume that a reader believes that some of a sender’s messages contain fake news and investigate recipients’ subsequent reactions. We find that users distinguish nuances based on the applied fallacies; however, they will not immediately recognise some fallacies as fake news while overemphasising others. Regarding users’ reactions, we observe a more severe reaction when the message identified as fake news comes from a company instead of an acquaintance.
... By contrast, the shorter the statement, the more likely the rumor was false. Our finding about the length of the headline is consistent with prior studies (25, 66–69). However, our finding regarding the length of the statement contradicts previous studies (70–72), which revealed that deceptive statements were longer than truthful ones because deceivers attempt to provide more information to increase the perceived credibility of the information. ...
Article
Full-text available
Rumors regarding COVID-19 have been prevalent on the Internet and affect the control of the COVID-19 pandemic. Using 1,296 COVID-19 rumors collected from an online platform (piyao.org.cn) in China, we found measurable differences in the content characteristics between true and false rumors. We revealed that the length of a rumor's headline is negatively related to the probability of a rumor being true [odds ratio (OR) = 0.37, 95% CI (0.30, 0.44)]. In contrast, the length of a rumor's statement is positively related to this probability [OR = 1.11, 95% CI (1.09, 1.13)]. In addition, we found that a rumor is more likely to be true if it contains concrete places [OR = 20.83, 95% CI (9.60, 48.98)] and it specifies the date or time of events [OR = 22.31, 95% CI (9.63, 57.92)]. The rumor is also likely to be true when it does not evoke positive or negative emotions [OR = 0.15, 95% CI (0.08, 0.29)] and does not include a call for action [OR = 0.06, 95% CI (0.02, 0.12)]. By contrast, the presence of source cues [OR = 0.64, 95% CI (0.31, 1.28)] and visuals [OR = 1.41, 95% CI (0.53, 3.73)] is related to this probability with limited significance. Our findings provide some clues for identifying COVID-19 rumors using their content characteristics.
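For readers unfamiliar with how odds ratios like those above are typically estimated, the sketch below shows a generic logistic-regression workflow in Python with statsmodels. The feature names and simulated data are assumptions for illustration only; this is not the study's dataset or analysis code.

```python
# Illustrative sketch: estimating odds ratios for rumor veracity from content features.
# Toy, simulated data; not the study's dataset or code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "headline_len": rng.integers(5, 40, n),   # words in the headline
    "has_place": rng.integers(0, 2, n),       # mentions a concrete place
    "has_datetime": rng.integers(0, 2, n),    # specifies a date or time of events
})
# Simulate veracity so that longer headlines are less likely to be true
logit = -0.08 * df["headline_len"] + 1.5 * df["has_place"] + 1.5 * df["has_datetime"]
df["is_true"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["headline_len", "has_place", "has_datetime"]])
model = sm.Logit(df["is_true"], X).fit(disp=False)

# Exponentiated coefficients are the odds ratios reported in studies like this one;
# exponentiating the confidence-interval bounds gives the 95% CIs.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```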
... Given this, the insights our designers get from social simulacra could have blind spots that are salient but not generated and thus not observed. This is analogous to the implied truth effect [56], where a tool to help identify possible issues falsely increases confidence that there are not other issues. On the other hand, owing to the breadth encoded in the training data of the large language model, prototyping via social simulacra is likely to produce more breadth than what any small collectives of test users would be able to provide. ...
Preprint
Full-text available
Social computing prototypes probe the social behaviors that may arise in an envisioned system design. This prototyping practice is currently limited to recruiting small groups of people. Unfortunately, many challenges do not arise until a system is populated at a larger scale. Can a designer understand how a social system might behave when populated, and make adjustments to the design before the system falls prey to such challenges? We introduce social simulacra, a prototyping technique that generates a breadth of realistic social interactions that may emerge when a social computing system is populated. Social simulacra take as input the designer's description of a community's design -- goal, rules, and member personas -- and produce as output an instance of that design with simulated behavior, including posts, replies, and anti-social behaviors. We demonstrate that social simulacra shift the behaviors that they generate appropriately in response to design changes, and that they enable exploration of "what if?" scenarios where community members or moderators intervene. To power social simulacra, we contribute techniques for prompting a large language model to generate thousands of distinct community members and their social interactions with each other; these techniques are enabled by the observation that large language models' training data already includes a wide variety of positive and negative behavior on social media platforms. In evaluations, we show that participants are often unable to distinguish social simulacra from actual community behavior and that social computing designers successfully refine their social computing designs when using social simulacra.
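As a rough illustration of the prompting approach this abstract describes, the sketch below generates simulated member personas and a thread of posts. The `complete()` helper is a hypothetical stand-in for whatever large-language-model API is used, and the prompt wording is an assumption for illustration, not the authors' actual prompts or implementation.

```python
# Hedged sketch of a social-simulacra-style generation loop.
# `complete(prompt)` is a hypothetical placeholder for an LLM text-completion call;
# the prompts below are illustrative, not the authors' actual prompts.

def complete(prompt: str) -> str:
    """Placeholder: wire this to a large language model of your choice."""
    raise NotImplementedError

def generate_personas(community_description: str, k: int) -> list[str]:
    """Ask the model for k distinct community members, one at a time."""
    personas = []
    for i in range(k):
        prompt = (
            f"Community: {community_description}\n"
            f"Describe member #{i + 1} of this community in one sentence "
            "(name, background, and what they tend to post about)."
        )
        personas.append(complete(prompt))
    return personas

def generate_thread(community_description: str, personas: list[str]) -> list[str]:
    """Have each persona add a post or reply, conditioned on the thread so far."""
    posts = []
    for persona in personas:
        prompt = (
            f"Community: {community_description}\n"
            f"Persona: {persona}\n"
            "Existing thread:\n" + "\n".join(posts) + "\n"
            "Write this persona's next post or reply, including any realistic "
            "off-topic or antisocial behavior they might plausibly exhibit."
        )
        posts.append(complete(prompt))
    return posts
```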
Chapter
In a so-called post-truth era, research on the subject of the spread of mis- and disinformation is being widely explored across academic disciplines in order to further understand the phenomenon of how information is disseminated by not only humans but also the technology humans have created (Tandoc, Sociol Compass 13(9), 2019). As technology advances rapidly, it is more important than ever to reflect on the effects of the spread of both mis- and disinformation on individuals and wider society, as well as how the impacts can be mitigated to create a more secure online environment. This chapter aims to analyse the current literature surrounding the topic of artificial intelligence (AI) and the spread of mis- and disinformation, beginning with a look through the lens of the meaning of these terms, as well as the meaning of truth in a post-truth world. In particular, the use of software robots (bots) online is discussed to demonstrate the manipulation of information and common malicious intent beneath the surface of everyday technologies. Moreover, this chapter discusses why social media platforms are an ideal breeding ground for malicious technologies, the strategies employed by both human users and bots to further the spread of falsehoods within their own networks, and how human users further the reach of mis- and disinformation. It is hoped that the overview of both the threats caused by and the solutions achievable by AI technology and human users alike will further highlight the requirement for more progress in the area at a time when the spread of falsehoods online continues to be a source of deep concern for many. This chapter also calls into question the use of AI to combat issues arising from the use of advanced Machine Learning (ML) methods. Furthermore, this chapter offers a set of recommendations to help mitigate the risks, seeking to explore the role technology plays in a wider scenario in which ethical foundations of communities and democracies are increasingly being threatened.
Article
The spread of misinformation about COVID-19 vaccines threatens to prolong the pandemic, with prior evidence indicating that exposure to misinformation has negative effects on intent to be vaccinated. We describe results from randomized experiments in the United States (n = 5,075) that allow us to measure the effects of factual corrections on false beliefs about the vaccine and vaccination intent. Our evidence makes clear that corrections eliminate the effects of misinformation on beliefs about the vaccine, but that neither misinformation nor corrections affect vaccination intention. These effects are robust to formatting changes in the presentation of the corrections. Indeed, corrections without any formatting modifications whatsoever prove effective at reducing false beliefs, with formatting variations playing a very minor role. Despite the politicization of the pandemic, misperceptions about COVID-19 vaccines can be consistently rebutted across party lines.
Article
Purpose Counter-knowledge is knowledge learned from unverified sources and can be classified as good (i.e. harmless, for instance, funny jokes) or bad (for example, lies to manipulate others’ decisions). The purpose of this study is to analyse the relationship between these two elements and the possible reactions they can induce in people and institutions. Design/methodology/approach The relationships between good and bad counter-knowledge and the induced reactions – namely, evasive knowledge hiding and defensive reasoning – are analysed through an empirical study among 151 Spanish citizens belonging to a knowledge-intensive organization during the COVID-19 pandemic. A two-step procedure has been established to assess a causal model with SmartPLS 3.2.9. Findings Results show that good counter-knowledge can lead to bad counter-knowledge. In addition, counter-knowledge can trigger evasive knowledge hiding, which, in turn, fosters defensive reasoning, in a vicious circle that can negatively affect decision-making and also cause distrust in public institutions. This was evidenced during the COVID-19 pandemic in relation to the measures taken by governments. Originality/value This study raises awareness that counter-knowledge is a complex phenomenon, especially in a situation of serious crisis like a pandemic. In particular, it highlights that even good counter-knowledge can turn bad and affect people’s decisional capability negatively. In addition, it signals that not all reactions by public institutions to the proliferation of counter-knowledge are positive. For instance, censorship and lack of transparency (i.e. evasive knowledge hiding) can trigger defensive reasoning, which can, in turn, affect people’s decisions and attitudes negatively.
Article
Purpose Recent research has demonstrated that people are more likely to engage with fatty food content online. One way health advocates might facilitate engagement with healthier, calorie-light foods is to alter how people process food media. This research paper aims to investigate the moderating role of viewer mindset on consumer responses to digital food media. Design/methodology/approach Two experiments were conducted by manipulating the caloric density of food media content and/or one’s mindset before viewing. Findings Results show that the relationship between nutrition and engagement is moderated by consumer mindset, where activating a more calculative mindset before exposure can elevate social media engagement for calorie-light food media content. Research limitations/implications These findings contribute to the domain of obesogenic digital environments and the role of nutrition in consuming food media. By examining how mindsets interact with affective evaluations, this work demonstrates that a default mindset based on instinct can be shifted and thus alter subsequent behavioral intentions. Practical implications This work provides insight into what can boost the visibility and engagement of healthy food content on social media. Marketers can help promote healthier food media by cueing consumers to think more deliberately before exposure. Originality/value This research builds on recent work by demonstrating how to boost engagement with healthy foods on social media by cueing a more thoughtful mindset.
Article
Full-text available
Objective Fake news represents a particularly egregious and direct avenue by which inaccurate beliefs have been propagated via social media. We investigate the psychological profile of individuals who fall prey to fake news. Method We recruited 1,606 participants from Amazon's Mechanical Turk for three online surveys. Results The tendency to ascribe profundity to randomly generated sentences – pseudo‐profound bullshit receptivity – correlates positively with perceptions of fake news accuracy, and negatively with the ability to differentiate between fake and real news (media truth discernment). Relatedly, individuals who overclaim their level of knowledge also judge fake news to be more accurate. We also extend previous research indicating that analytic thinking correlates negatively with perceived accuracy by showing that this relationship is not moderated by the presence/absence of the headline's source (which has no effect on accuracy), or by familiarity with the headlines (which correlates positively with perceived accuracy of fake and real news). Conclusion Our results suggest that belief in fake news may be driven, to some extent, by a general tendency to be overly accepting of weak claims. This tendency, which we refer to as reflexive open‐mindedness, may be partly responsible for the prevalence of epistemically suspect beliefs writ large.
Preprint
Full-text available
There is an increasing imperative for psychologists and other behavioral scientists to understand how people behave on social media. However, it is often very difficult to execute experimental research on actual social media platforms, or to link survey responses to online behavior in order to perform correlational analyses. Thus, there is a natural desire to use self-reported behavioral intentions in standard survey studies to gain insight into online behavior. But are such hypothetical responses hopelessly disconnected from actual sharing decisions? Or are online survey samples via sources such as Amazon Mechanical Turk (MTurk) so different from the average social media user that the survey responses of one group give little insight into the on-platform behavior of the other? Here we investigate these issues by examining 67 pieces of political news content. We evaluate whether there is a meaningful relationship between (i) the level of sharing (tweets and retweets) of a given piece of content on Twitter, and (ii) the extent to which individuals (total N = 993) in online surveys on MTurk reported being willing to share that same piece of content. We found that the same news headlines that were more likely to be hypothetically shared on MTurk were actually shared more frequently by Twitter users, r = .44. For example, across the observed range of MTurk sharing fractions, a 20 percentage point increase in the fraction of MTurk participants who reported being willing to share a news headline on social media was associated with 10x as many actual shares on Twitter. This finding suggests that self-reported sharing intentions collected in online surveys are likely to provide some meaningful insight into what participants would actually share on social media.
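The headline-level relationship described above can be illustrated with a short sketch that correlates survey sharing fractions with (log-scaled) on-platform share counts. The numbers below are toy values chosen for illustration, not the study's data.

```python
# Illustrative sketch of the item-level correlation described above. Toy numbers only.
import numpy as np
from scipy.stats import pearsonr

# For each of several headlines: fraction of MTurk respondents willing to share it,
# and the number of times it was actually shared on Twitter.
mturk_share_fraction = np.array([0.05, 0.12, 0.20, 0.31, 0.44, 0.58])
twitter_shares = np.array([120, 450, 1_100, 4_800, 21_000, 95_000])

# Share counts are heavy-tailed, so the relationship is easier to see on a log scale.
r, p = pearsonr(mturk_share_fraction, np.log10(twitter_shares))
print(f"r = {r:.2f}, p = {p:.3f}")

# A linear fit on log10(shares) implies a multiplicative relationship: the slope tells
# you how large an increase in the survey sharing fraction corresponds to 10x as many
# actual shares, analogous to the "20 percentage points -> 10x" pattern quoted above.
slope, intercept = np.polyfit(mturk_share_fraction, np.log10(twitter_shares), 1)
print(f"10x as many shares per {1 / slope:.2f} increase in sharing fraction")
```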
Article
Full-text available
Social media has increasingly enabled “fake news” to circulate widely, most notably during the 2016 U.S. presidential campaign. These intentionally false or misleading stories threaten the democratic goal of a well-informed electorate. This study evaluates the effectiveness of strategies that could be used by Facebook and other social media to counter false stories. Results from a pre-registered experiment indicate that false headlines are perceived as less accurate when people receive a general warning about misleading information on social media or when specific headlines are accompanied by a “Disputed” or “Rated false” tag. Though the magnitudes of these effects are relatively modest, they generally do not vary by whether headlines were congenial to respondents’ political views. In addition, we find that adding a “Rated false” tag to an article headline lowers its perceived accuracy more than adding a “Disputed” tag (Facebook’s original approach) relative to a control condition. Finally, though exposure to the “Disputed” or “Rated false” tags did not affect the perceived accuracy of unlabeled false or true headlines, exposure to a general warning decreased belief in the accuracy of true headlines, suggesting the need for further research into how to most effectively counter false news without distorting belief in true information.
Article
Full-text available
Reducing the spread of misinformation, especially on social media, is a major challenge. We investigate one potential approach: having social media platform algorithms preferentially display content from news sources that users rate as trustworthy. To do so, we ask whether crowdsourced trust ratings can effectively differentiate more versus less reliable sources. We ran two preregistered experiments ( n = 1,010 from Mechanical Turk and n = 970 from Lucid) where individuals rated familiarity with, and trust in, 60 news sources from three categories: ( i ) mainstream media outlets, ( ii ) hyperpartisan websites, and ( iii ) websites that produce blatantly false content (“fake news”). Despite substantial partisan differences, we find that laypeople across the political spectrum rated mainstream sources as far more trustworthy than either hyperpartisan or fake news sources. Although this difference was larger for Democrats than Republicans—mostly due to distrust of mainstream sources by Republicans—every mainstream source (with one exception) was rated as more trustworthy than every hyperpartisan or fake news source across both studies when equally weighting ratings of Democrats and Republicans. Furthermore, politically balanced layperson ratings were strongly correlated ( r = 0.90) with ratings provided by professional fact-checkers. We also found that, particularly among liberals, individuals higher in cognitive reflection were better able to discern between low- and high-quality sources. Finally, we found that excluding ratings from participants who were not familiar with a given news source dramatically reduced the effectiveness of the crowd. Our findings indicate that having algorithms up-rank content from trusted media outlets may be a promising approach for fighting the spread of misinformation on social media.
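The "politically balanced layperson ratings" mentioned above simply give each party's mean rating equal weight before comparing the crowd with professional fact-checkers. The sketch below shows that computation on toy numbers; the ratings are invented for illustration and are not the study's data.

```python
# Sketch of politically balanced crowd trust ratings vs. fact-checker ratings.
# Toy numbers; not the study's data.
import numpy as np
from scipy.stats import pearsonr

# Mean trust rating of each news source among Democratic and Republican respondents,
# plus a professional fact-checker rating for the same six sources.
dem_ratings = np.array([4.5, 4.2, 3.9, 1.8, 1.5, 1.2])
rep_ratings = np.array([3.1, 2.8, 3.3, 2.0, 1.6, 1.3])
factchecker = np.array([4.8, 4.5, 4.1, 1.9, 1.4, 1.1])

# "Politically balanced" crowd rating: equal weight to each party's mean,
# regardless of how many respondents from each party rated the source.
balanced = (dem_ratings + rep_ratings) / 2

r, _ = pearsonr(balanced, factchecker)
print(f"Correlation between balanced crowd ratings and fact-checker ratings: r = {r:.2f}")
```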
Article
Full-text available
Are citizens willing to accept journalistic fact-checks of misleading claims from candidates they support and to update their attitudes about those candidates? Previous studies have reached conflicting conclusions about the effects of exposure to counter-attitudinal information. As fact-checking has become more prominent, it is therefore worth examining how people respond to fact-checks of politicians—a question with important implications for understanding the effects of this journalistic format on elections. We present results from two experiments conducted during the 2016 campaign that test the effects of exposure to realistic journalistic fact-checks of claims made by Donald Trump during his convention speech and a general election debate. These messages improved the accuracy of respondents’ factual beliefs, even among his supporters, but had no measurable effect on attitudes toward Trump. These results suggest that journalistic fact-checks can reduce misperceptions but often have minimal effects on candidate evaluations or vote choice.
Article
Full-text available
Delusion-prone individuals may be more likely to accept even delusion-irrelevant implausible ideas because of their tendency to engage in less analytic and less actively open-minded thinking. Consistent with this suggestion, two online studies with over 900 participants demonstrated that although delusion-prone individuals were no more likely to believe true news headlines, they displayed an increased belief in “fake news” headlines, which often feature implausible content. Mediation analyses suggest that analytic cognitive style may partially explain these individuals’ increased willingness to believe fake news. Exploratory analyses showed that dogmatic individuals and religious fundamentalists were also more likely to believe false (but not true) news, and that these relationships may be fully explained by analytic cognitive style. Our findings suggest that existing interventions that increase analytic and actively open-minded thinking might be leveraged to help reduce belief in fake news.
Article
Full-text available
Can citizens heed factual information, even when such information challenges their partisan and ideological attachments? The “backfire effect,” described by Nyhan and Reifler (Polit Behav 32(2):303–330. https://doi.org/10.1007/s11109-010-9112-2, 2010), says no: rather than simply ignoring factual information, presenting respondents with facts can compound their ignorance. In their study, conservatives presented with factual information about the absence of Weapons of Mass Destruction in Iraq became more convinced that such weapons had been found. The present paper presents results from five experiments in which we enrolled more than 10,100 subjects and tested 52 issues of potential backfire. Across all experiments, we found no corrections capable of triggering backfire, despite testing precisely the kinds of polarized issues where backfire should be expected. Evidence of factual backfire is far more tenuous than prior research suggests. By and large, citizens heed factual information, even when such information challenges their ideological commitments.
Article
Why do people believe blatantly inaccurate news headlines ("fake news")? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news - even for headlines that align with individuals' political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant's ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one's political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se - a finding that opens potential avenues for fighting fake news.
Article
To what extent do survey experimental treatment effect estimates generalize to other populations and contexts? Survey experiments conducted on convenience samples have often been criticized on the grounds that subjects are sufficiently different from the public at large to render the results of such experiments uninformative more broadly. In the presence of moderate treatment effect heterogeneity, however, such concerns may be allayed. I provide evidence from a series of 15 replication experiments that results derived from convenience samples like Amazon’s Mechanical Turk are similar to those obtained from national samples. Either the treatments deployed in these experiments cause similar responses for many subject types or convenience and national samples do not differ much with respect to treatment effect moderators. Using evidence of limited within-experiment heterogeneity, I show that the former is likely to be the case. Despite a wide diversity of background characteristics across samples, the effects uncovered in these experiments appear to be relatively homogeneous.