Cameron Martel’s research while affiliated with Massachusetts Institute of Technology and other places


Publications (42)


Psychological Underpinnings of Partisan Bias in Tie Formation on Social Media
  • Article
  • Publisher preview available

October 2024 · 24 Reads · Journal of Experimental Psychology: General

Cameron Martel · David G. Rand

Individuals preferentially reciprocate connections with copartisans versus counter-partisans online. However, the mechanisms underlying this partisan bias remain unclear. Do individuals simply prefer viewing politically congenial content, or do they additionally prefer socially connecting with copartisans? Is this driven by preference for in-party ties or distaste for out-party ties? In a Twitter (now called X) field experiment, we created bot accounts varying by partisanship and whether they identified as bots or humans. We randomly assigned Twitter users (N = 3,013) to be followed by one of these accounts. We found evidence for social motivation—users were much more likely to reciprocate links to copartisan relative to counter-partisan accounts when the accounts identified as humans versus bots. We also found evidence for both in-party preference and out-party dispreference—compared with politically neutral accounts, users were more likely to follow back copartisan accounts and similarly less likely to follow back counter-partisan accounts. A follow-up survey experiment (N = 990) provides further evidence for distinct roles of issue polarization, out-party animosity, and in-party affinity in moderating follow-back decisions online.


Experimental stimuli
a,b, An example false news headline, shown without warning label as in the control condition (a) or with ‘False Information’ warning label in the treatment condition (b). The warning label used in almost all experiments was the ‘False Information’ label, adapted from Facebook’s misinformation label adopted in 2019⁶⁰. For each headline like the one shown above, participants in accuracy experiments were asked whether the headline was accurate (typically 1 = extremely inaccurate, 6 = extremely accurate), whereas participants in sharing experiments were asked how likely they would be to share each headline (typically 1 = extremely unlikely, 6 = extremely likely). Note: the original image (but not the headline) has been replaced with a stock voting image (under the Pexels license) for copyright purposes.
Relationship between TFC, partisanship and individual differences
a, Violin plots of TFC binned by partisanship (1 = strongly Democratic, 6 = strongly Republican). Our data show that strong Democrats are most trusting of fact-checkers—notably more so than weaker Democrats. TFC is low for all Republican bins. N = 955 participants who reported both TFC and partisanship. Box plots: partisanship = 1 (min = 0.042, Q1 = 0.625, median = 0.760, Q3 = 0.927, max = 1, 10th percentile = 0.490, 90th percentile = 1); partisanship = 2 (min = 0.167, Q1 = 0.552, median = 0.656, Q3 = 0.75, max = 1, 10th percentile = 0.404, 90th percentile = 0.829); partisanship = 3 (min = 0, Q1 = 0.422, median = 0.562, Q3 = 0.688, max = 1, 10th percentile = 0.333, 90th percentile = 0.763); partisanship = 4 (min = 0, Q1 = 0.273, median = 0.453, Q3 = 0.583, max = 0.896, 10th percentile = 0.082, 90th percentile = 0.708); partisanship = 5 (min = 0, Q1 = 0.333, median = 0.479, Q3 = 0.635, max = 1, 10th percentile = 0.104, 90th percentile = 0.781); partisanship = 6 (min = 0, Q1 = 0.25, median = 0.5, Q3 = 0.75, 10th percentile = 0.0312, 90th percentile = 0.917). b, Correlation between TFC and individual differences (procedural news knowledge, CRT, web-use skill, digital media literacy) by partisanship (centre-split by party lean). Partisanship is binarized here for visualization—but is analysed as a continuous (z-scored) measure in corresponding analyses as pre-registered (Supplementary Tables 4, 5, 7 and 8). Grey bands reflect 95% CI. Measure of centre reflects ordinary least squares (OLS) linear regression estimate.
Warnings reduce perceived accuracy even for participants who strongly distrust fact-checkers
a, Meta-analytic estimate of warning label effect on false headlines, relative to control false headlines. b, Meta-analytic estimate of interaction between warning label effect and TFC. c, Meta-analytic estimate of warning label effect, filtering for lowest quartile TFC participants only. d, Meta-analytic estimate of warning label effect for maximally low TFC participants (without demographic controls). Individual experiment estimates are from models predicting accuracy (scaled 0 to 1) by headline condition, TFC, demographic controls (gender, education, age) and interactions between headline condition and individual differences. Error bars reflect 95% CI. Study-level point estimates reflect effect estimates for each study. Random effects (RE) model point estimates reflect random effects meta-analytic effects. Study-level point estimate box sizes reflect study weights in meta-analyses.
Warning label effects on accuracy by TFC decile
Warning label effects on accuracy perceptions of false headlines by TFC decile. Bolded line reflects overall meta-analytic estimates of warning effect (intercept) and TFC moderation (slope). Dotted lines reflect 95% CI upper and lower bounds for warning effects and moderations. Point estimates reflect warning effects by TFC decile, with 95% CI. Accuracy perceptions scaled 0 to 1; y-axis warning effects may be interpreted as percentage point changes in accuracy perceptions. TFC (x-axis) scaled 0 to 1. Total N = 7,720.
Meta-analyses of warning effect on sharing intentions by TFC
a, Meta-analytic estimate of warning label effect on false headlines, relative to control false headlines. b, Meta-analytic estimate of interaction between warning label effect and TFC. c, Meta-analytic estimate of warning label effect, filtering for lowest quartile TFC participants only. d, Meta-analytic estimate of warning label effect for maximally low TFC participants (without demographic controls). Individual experiment estimates are from models predicting sharing intention (scaled 0 to 1) by headline condition, TFC, gender, education, age and interactions between headline condition and individual differences. Error bars reflect 95% CI. Study-level point estimates reflect effect estimates for each study. RE model point estimates reflect random effects meta-analytic effects. Study-level point estimate box sizes reflect study weights in meta-analyses.


Fact-checker warning labels are effective even for those who distrust fact-checkers

September 2024 · 49 Reads · 6 Citations · Nature Human Behaviour

Warning labels from professional fact-checkers are one of the most widely used interventions against online misinformation. But are fact-checker warning labels effective for those who distrust fact-checkers? Here, in a first correlational study (N = 1,000), we validate a measure of trust in fact-checkers. Next, we conduct meta-analyses across 21 experiments (total N = 14,133) in which participants evaluated true and false news posts and were randomized to either see no warning labels or to see warning labels on a high proportion of the false posts. Warning labels were on average effective at reducing belief in (27.6% reduction), and sharing of (24.7% reduction), false headlines. While warning effects were smaller for participants with less trust in fact-checkers, warning labels nonetheless significantly reduced belief in (12.9% reduction), and sharing of (16.7% reduction), false news even for those most distrusting of fact-checkers. These results suggest that fact-checker warning labels are a broadly effective tool for combatting misinformation.
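The random-effects (RE) meta-analytic estimates referred to in this abstract and in the figure captions above pool per-study warning effects into a single estimate. As a rough, purely illustrative sketch (not the authors' analysis code, and with invented effect sizes and standard errors), a DerSimonian-Laird random-effects pool might look like this:

```python
# Minimal DerSimonian-Laird random-effects pooling of per-study warning effects.
# NOTE: the effect sizes and standard errors below are invented placeholders,
# not data from the paper.
import numpy as np

# Hypothetical per-study warning effects on perceived accuracy (0-1 scale) and SEs.
effects = np.array([-0.08, -0.11, -0.06, -0.09, -0.07])
ses = np.array([0.020, 0.030, 0.020, 0.025, 0.015])
variances = ses ** 2

# Fixed-effect weights and Q statistic for between-study heterogeneity.
w = 1.0 / variances
fixed_mean = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed_mean) ** 2)
k = len(effects)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)  # between-study variance estimate

# Random-effects weights incorporate tau^2; pool and report a 95% CI.
w_re = 1.0 / (variances + tau2)
re_mean = np.sum(w_re * effects) / np.sum(w_re)
re_se = np.sqrt(1.0 / np.sum(w_re))
print(f"RE pooled effect: {re_mean:.3f} "
      f"[{re_mean - 1.96 * re_se:.3f}, {re_mean + 1.96 * re_se:.3f}], tau^2={tau2:.4f}")
```

In the paper's figures, the per-study inputs are instead estimates from regressions of accuracy (or sharing intention) on headline condition, TFC, demographic controls, and their interactions.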


Americans’ attitudes toward advancements in generative artificial intelligence

June 2024 · 7 Reads

Rapid advancements in generative AI may transform many aspects of society. However, little research has examined the public’s attitudes and opinions towards generative AI, which is critical for developing democratically informed policy. In a nationally representative survey of Americans (n = 1,497; July-August 2023), we query participants across three topics relevant to new AI tools: (a) their familiarity with and usage of these technologies, (b) whether they perceived these tools to be positive or negative across a variety of applications, and (c) the extent of their agreement with several important policy positions towards AI. First, we find overall low reported usage of generative AI technologies – 60% of participants reported they have never used any AI tool. Second, we observe substantial differences in the perceived positivity of AI on society across specific applications. Americans were positive towards using AI to predict extreme weather events and write software, but thought that using AI to write news articles, draft legal documents, and classify misinformation would be net negative for society. Democrats and individuals with greater familiarity and usage of AI perceived generative AI impacts as more positive across applications. Third, we find overall low trust in new AI – and also overall support, on average, for policies endorsing transparency and caution when developing new AI tools. Americans’ views towards generative AI vary across potential use cases and with individuals’ familiarity with AI, demographics, and politics – all of which should be considered when crafting new policies for developing and regulating AI.


Partisan consensus and divisions on content moderation of misinformation

June 2024 · 5 Reads

Cameron Martel · Adam J. Berinsky · Paul Resnick · [...] · David Gertler Rand

Debates on how tech companies ought to oversee the circulation of content on their platforms are increasingly pressing. In the U.S., questions surrounding what, if any, action should be taken by social media companies to moderate harmfully misleading content on topics such as vaccine safety and election integrity are now being hashed out from corporate boardrooms to federal courtrooms. But where does the American public stand on these issues? Here we discuss the findings of a recent nationally representative poll of Americans’ views on content moderation of harmfully misleading content.


Understanding Americans’ perceived legitimacy of harmful misinformation moderation by expert and layperson juries

June 2024 · 2 Reads

Content moderation is a critical aspect of platform governance on social media and of particular relevance to addressing the belief in and spread of misinformation. However, current content moderation practices have been criticized as unjust. This raises an important question – who do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative conjoint survey experiment (N=3,000) in which U.S. participants evaluated the legitimacy of hypothetical content moderation juries tasked with evaluating whether online content was harmfully misleading. These moderation juries varied on whether they were described as consisting of experts (e.g., domain experts), laypeople (e.g., social media users), or non-juries (e.g., a computer algorithm). We also randomized features of jury composition (size, necessary qualifications) and whether juries engaged in discussion during content evaluation. Overall, participants evaluated expert juries as more legitimate than layperson juries or a computer algorithm. However, modifying layperson jury features helped increase legitimacy perceptions – nationally representative or politically balanced composition enhanced legitimacy, as did increased size, individual juror knowledge qualifications, and enabling juror discussion. Maximally legitimate layperson juries were comparable in legitimacy to expert panels. Republicans perceived experts as less legitimate compared to Democrats, but still more legitimate than baseline layperson juries. Conversely, larger lay juries with news knowledge qualifications who engaged in discussion were perceived as more legitimate across the political spectrum. Our findings shed light on the foundations of procedural legitimacy in content moderation and have implications for the design of online moderation systems.
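Conjoint experiments of this kind are commonly analysed by estimating average marginal component effects (AMCEs): ratings are regressed on dummy-coded attribute levels with standard errors clustered by respondent. The sketch below shows that general approach on simulated data; the attribute levels, variable names, and effect sizes are hypothetical and do not come from the study.

```python
# Illustrative AMCE-style analysis of a simulated conjoint dataset.
# NOTE: attributes, effect sizes, and variable names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_respondents, tasks = 500, 6
n = n_respondents * tasks

df = pd.DataFrame({
    "respondent": np.repeat(np.arange(n_respondents), tasks),
    "jury_type": rng.choice(["experts", "laypeople", "algorithm"], n),
    "size": rng.choice(["small", "large"], n),
    "discussion": rng.choice(["no", "yes"], n),
})
# Simulated legitimacy rating (1-7): experts and discussion rated higher, plus noise.
df["legitimacy"] = (
    4
    + 0.8 * (df["jury_type"] == "experts")
    + 0.3 * (df["discussion"] == "yes")
    + rng.normal(0, 1, n)
)

# AMCEs via OLS on dummy-coded attributes, with respondent-clustered SEs.
model = smf.ols(
    "legitimacy ~ C(jury_type, Treatment('laypeople')) + C(size) + C(discussion)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent"]})
print(model.params)
```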


Blocking of counter-partisan accounts drives political assortment on Twitter

April 2024 · 5 Reads · 3 Citations · PNAS Nexus

There is strong political assortment of Americans on social media networks. This is typically attributed to preferential tie formation (i.e., homophily) amongst those with shared partisanship. Here we demonstrate an additional factor beyond homophily driving assorted networks: preferential prevention of social ties. In two field experiments on Twitter, we created human-looking bot accounts that identified as Democrats or Republicans, and then randomly assigned users to be followed by one of these accounts. In addition to preferentially following back co-partisans, we found that users were 11 times more likely to block counter-partisan accounts compared to co-partisan accounts in the first experiment, and 5 times more likely to block counter-partisan accounts relative to a neutral account or a co-partisan account in the second experiment. We then replicated these findings in a survey experiment and found evidence of a key motivation for blocking: wanting to avoid seeing any content posted by the blocked user. Additionally, we found that Democrats preferentially blocked counter-partisans more than Republicans did, and that this asymmetry was likely due to blocking accounts who post low-quality or politically slanted content (rather than an asymmetry in identity-based blocking). Our results demonstrate that preferential blocking of counter-partisans is an important phenomenon driving political assortment on social media.


On the Efficacy of Accuracy Prompts Across Partisan Lines: An Adversarial Collaboration

March 2024 · 111 Reads · 8 Citations · Psychological Science

The spread of misinformation is a pressing societal challenge. Prior work shows that shifting attention to accuracy increases the quality of people’s news-sharing decisions. However, researchers disagree on whether accuracy-prompt interventions work for U.S. Republicans/conservatives and whether partisanship moderates the effect. In this preregistered adversarial collaboration, we tested this question using a multiverse meta-analysis (k = 21; N = 27,828). In all 70 models, accuracy prompts improved sharing discernment among Republicans/conservatives. We observed significant partisan moderation for single-headline “evaluation” treatments (a critical test for one research team) such that the effect was stronger among Democrats than Republicans. However, this moderation was not consistently robust across different operationalizations of ideology/partisanship, exclusion criteria, or treatment type. Overall, we observed significant partisan moderation in 50% of specifications (all of which were considered critical for the other team). We discuss the conditions under which moderation is observed and offer interpretations.
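A multiverse (or specification-curve) meta-analysis loops over defensible analytic choices and records the estimate of interest under each specification. The sketch below shows the general structure with simulated data and two hypothetical specification dimensions (how partisanship is operationalized and which exclusion rule is applied); it does not reproduce the 70 models or the data used in the paper.

```python
# Sketch of a multiverse-style specification loop (simulated, hypothetical data).
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),          # accuracy prompt vs. control
    "party7": rng.integers(1, 8, n),           # 7-point party identification
    "ideology7": rng.integers(1, 8, n),        # 7-point ideology
    "attention_pass": rng.integers(0, 2, n),   # attention-check flag
})
df["discernment"] = 0.1 * df["treated"] + rng.normal(0, 1, n)

partisan_ops = {"party": "party7", "ideology": "ideology7"}
exclusions = {"all": lambda d: d,
              "attentive_only": lambda d: d[d["attention_pass"] == 1]}

rows = []
for (op_name, col), (ex_name, ex_fn) in itertools.product(
        partisan_ops.items(), exclusions.items()):
    d = ex_fn(df).copy()
    d["pid_z"] = (d[col] - d[col].mean()) / d[col].std()   # z-score the moderator
    fit = smf.ols("discernment ~ treated * pid_z", data=d).fit()
    rows.append({"spec": f"{op_name}/{ex_name}",
                 "interaction": fit.params["treated:pid_z"],
                 "p": fit.pvalues["treated:pid_z"]})

spec_curve = pd.DataFrame(rows)
print(spec_curve)
print("Share of specifications with significant moderation:",
      (spec_curve["p"] < 0.05).mean())
```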


A New Framework for Understanding and Intervening on False News Sharing

March 2024 · 20 Reads · 2 Citations

False news can manipulate public opinion, stir up fear and hatred, and undermine the credibility of legitimate news sources. Although many studies have examined false news sharing, there has been no comprehensive, comparative, and computational investigation of the interventions that can reduce this harmful behavior. To do so, we introduce a novel experimental method, the Dynamic Semi-Integrative Approach (DSIA). DSIA involves testing multiple interventions, individual- and item-level moderators, and computational choice modeling (drift–diffusion modeling) in a single framework. By applying DSIA to false news, we find that warning labels and media literacy interventions are particularly effective at increasing news sharing accuracy, followed by a social norm intervention; accuracy prompts were least effective. Intervention effects were consistent across individual- and item-level characteristics, such as age, analytical thinking, and the political lean of news items, suggesting wide applicability. The interventions operated via different decision-making processes, suggesting that each intervention engages distinct mental processes to attenuate false news sharing. By developing and applying DSIA, we provide uniquely detailed insight into false news interventions and establish DSIA as a promising, scalable approach for future experimental research.
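Drift-diffusion models describe a binary decision as noisy accumulation of evidence toward one of two bounds, jointly predicting choices and response times. The toy simulation below illustrates that basic mechanism for a share / don't-share decision; the parameter values and the mapping of an intervention onto the drift rate are invented for illustration and are not estimates from the DSIA framework.

```python
# Toy drift-diffusion simulation of a share / don't-share decision.
# NOTE: parameter values are illustrative placeholders, not estimates from the paper.
import numpy as np

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Euler-Maruyama simulation of one diffusion-to-bound trial.

    Returns (choice, rt): choice is +1 (upper bound, 'share') or -1 (lower bound,
    'don't share'); timeouts are counted as lower-bound responses for simplicity.
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= threshold else -1), t

rng = np.random.default_rng(2)
# One way to think about an intervention: it shifts the drift away from 'share'
# for false items (this mapping is an assumption made for the sketch).
for label, drift in [("no intervention", 0.6), ("warning label", -0.2)]:
    sims = [simulate_ddm(drift, rng=rng) for _ in range(2000)]
    p_share = np.mean([c == 1 for c, _ in sims])
    mean_rt = np.mean([t for _, t in sims])
    print(f"{label}: P(share)={p_share:.2f}, mean RT={mean_rt:.2f}s")
```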


On the Efficacy of Accuracy Prompts Across Partisan Lines: An Adversarial Collaboration

November 2023 · 19 Reads · 1 Citation

The spread of misinformation is a pressing societal challenge. Prior work shows that shifting attention to accuracy increases the quality of people’s news sharing decisions. However, researchers disagree whether accuracy prompt interventions work for U.S. Republicans/conservatives and whether partisanship moderates the effect. In this pre-registered adversarial collaboration, we tested this question using a multiverse meta-analysis (k=21; N=27,828). In all 70 models, accuracy prompts improved sharing discernment among Republicans/conservatives. We observed significant partisan moderation for single-headline “evaluation” treatments (a critical test for one research team), such that the effect was stronger among Democrats than Republicans. However, this moderation was not consistently robust across different operationalizations of ideology/partisanship, exclusion criteria, or treatment type. Overall, we observed significant partisan moderation in 50% of specifications (all of which were considered critical for the other team). We discuss the conditions under which moderation is observed and offer interpretations.


Fact-checker warning labels are effective even for those who distrust fact-checkers

November 2023 · 57 Reads · 2 Citations

Warning labels from professional fact-checkers are one of the most widely used interventions against online misinformation. Prior work suggests that such warning labels are effective at reducing the belief and spread of false content on average. However, there is substantial distrust of fact-checkers, particularly among those on the political right. Does this distrust undermine the effectiveness of fact-checks? In the current work, we investigate this question empirically. In a first correlational study (N=1,000), we establish and validate a measure of trust in fact-checkers. We confirm that more Republican-favoring participants are less trusting of fact-checkers, and also find that skill-based traits such as procedural news knowledge and analytic thinking exacerbate this partisan asymmetry. Next, we conduct meta-analyses across 21 experiments (total N=14,133) in which participants evaluated actual true and false news posts and were randomized to either see no warning labels (control) or to see warning labels on a high proportion of the false posts. We find that warning labels were on average effective at reducing belief in (27.6% reduction), and sharing of (24.7% reduction), false headlines. While warning effects were somewhat smaller for participants with less trust in fact-checkers, warning labels nonetheless significantly reduced belief in, and sharing of, false news even for those most distrusting of fact-checkers. Together, these results suggest that fact-checker warning labels are a broadly effective tool for combatting misinformation.


Citations (28)


... or to design experiments that closely mirror the social-media interface [cf. 21,57 ]. ...

Reference:

Question framing affects accurate-inaccurate discrimination in responses to sharing questions, but not in responses to accuracy questions
Examining accuracy-prompt efficacy in combination with using colored borders to differentiate news and social content online
  • Citing Article
  • February 2023

... Second, our model provides insight into the cognitive processes associated with belief change. Experimental and observational studies have demonstrated the significance of a source's credibility for persuasion (53-55) and debunking (52, 56-59). An inverse planning approach allows us to unbundle two components of credibility from an observer's perspective (epistemic motivation and bias) and then demonstrate how these components interact with belief uncertainty to produce nuanced effects of the authority's reputation on belief updating. ...

Fact-checker warning labels are effective even for those who distrust fact-checkers

Nature Human Behaviour

... This may be attributable to partisanship being highly correlated with other factors predicting social tie formation, such as similar interests (Aiello et al., 2012; DellaPosta et al., 2015), or could be due to an actual causal preference to associate with copartisans rather than counter-partisans based on partisanship per se (Huber & Malhotra, 2017). In support of the latter, recent field experiments on Twitter (now called X) have demonstrated that there are indeed strong causal effects of shared partisanship on the formation and prevention of online social ties: politically active Twitter users were substantially more likely to reciprocally follow back copartisans compared to counter-partisans (Ajzenman et al., 2023b; Mosleh et al., 2021), and were more likely to block counter-partisans than copartisans (Ajzenman et al., 2023b; Martel et al., 2024). ...

Blocking of counter-partisan accounts drives political assortment on Twitter
  • Citing Article
  • April 2024

PNAS Nexus

... This underscores the importance of relying on the RoMB version of the t test for most applications as it will substantially increase robustness when outliers are present but come at little cost when they are absent (as in this case, the most weight will be given to the models assuming the absence of outliers). Note that we do not take this example to refute the effectiveness of accuracy nudges in general, as this would require a broader reanalysis of all relevant papers (e.g., Martel et al., 2024;Pennycook et al., 2020) using meta-analytic techniques, which is outside the scope of the current manuscript. ...

On the Efficacy of Accuracy Prompts Across Partisan Lines: An Adversarial Collaboration
  • Citing Article
  • March 2024

Psychological Science

... Including reaction time data in future studies could offer deeper insights into the cognitive mechanisms behind news veracity judgments (e.g., via a drift-diffusion modelling approach; Alvarez-Zuzek et al., 2024;Gollwitzer, Tump et al., 2024). ...

A New Framework for Understanding and Intervening on False News Sharing
  • Citing Preprint
  • March 2024

... Participant P2 followed a simple rule, for example, looking at "how much of a following the original creator has (the more followers, the more influence, potentially the more damage they can do)." The view count and number of followers are also used in both the labeling and the community notes moderation [48,63]. However, the crucial difference lies in the response to the associated misinformation content: the labels and the community notes are less viral [27], but a 'Debunk-It-Yourself' video has no such constraint. ...

Misinformation warning labels are widely effective: A review of warning effects and their moderating features
  • Citing Article
  • October 2023

Current Opinion in Psychology

... News personnel and domain experts can provide informative and authoritative content. However, there are inherent limitations of professional fact-checking, particularly regarding coverage and speed (Martel et al., 2024). In contrast, the feasibility of layperson-based debunking has been preliminarily validated (Bhuiyan et al., 2020; Pennycook and Rand, 2019; Wineburg and McGrew, 2019), implying the promise of organized public engagement as a supplementary strategy. ...

Crowds Can Effectively Identify Misinformation at Scale
  • Citing Article
  • August 2023

Perspectives on Psychological Science

... Deviations in simple patterns tend to be devalued (Gollwitzer, Marshall, Wang, & Bargh, 2017). As deviancy aversion in simple patterns is associated with negative attitudes towards individuals in statistical minorities or social deviancy, it is considered a domain-general mechanism (Gollwitzer et al., 2017, 2022). The uncanny valley has been related to deviations in familiar categories driven by a higher sensitivity to anomalies due to specialized processing (Diel & Lewis, 2022b, 2022c; MacDorman et al., 2009; Matsuda, Okamoto, Ida, Okanoya, & Myowa-Yamakoshi, 2012). ...

Deviancy Aversion and Social Norms

Personality and Social Psychology Bulletin

... To overcome these challenges, researchers have proposed to fact-check and identify misleading posts on social media platforms at scale based on the non-expert fact-checkers from the crowd [2,63,70]. While the individual assessments can have bias and noise [3], it has been shown that the accuracy of aggregated judgments, even from relatively small crowds, is reliable and comparable to the accuracy of expert fact-checkers [8,36,70,80]. Additionally, the crowdsourced assessments are perceived as more trustworthy by the users than fact-checks from experts [2,30,103]. ...

Birds of a Feather Don't Fact-check Each Other: Partisanship and the Evaluation of News in Twitter's Birdwatch Crowdsourced Fact-checking Program
  • Citing Article
  • April 2022

... This could, for example, be achieved in part by nudging users to consider the concept of accuracy while scrolling through their newsfeed (21,24,26) or redesigning how social cues are displayed (20). A related concern is the context collapse of social media (27) whereby many audiences and types of content are flattened into a single context, which could be mitigated by organizing content and audiences thematically to delineate spaces where accuracy is (e.g., news) versus is not (e.g., family photos) central (28). Alternatively, platforms could emphasize the building of connections between content rather than directly sharing content with an audience. ...

Examining accuracy-prompt efficacy in conjunction with visually differentiating news and social online content
  • Citing Preprint
  • October 2022