Adam J. Berinsky’s research while affiliated with Massachusetts Institute of Technology and other places


Publications (117)


Short-term exposure to filter-bubble recommendation systems has limited polarization effects: Naturalistic experiments on YouTube
  • Article

February 2025 · 23 Reads · 1 Citation

Proceedings of the National Academy of Sciences

Naijia Liu · Xinlan Emily Hu · Yasemin Savas · [...] · Brandon M Stewart

An enormous body of literature argues that recommendation algorithms drive political polarization by creating “filter bubbles” and “rabbit holes.” Using four experiments with nearly 9,000 participants, we show that manipulating algorithmic recommendations to create these conditions has limited effects on opinions. Our experiments employ a custom-built video platform with a naturalistic, YouTube-like interface presenting real YouTube videos and recommendations. We experimentally manipulate YouTube’s actual recommendation algorithm to simulate filter bubbles and rabbit holes by presenting ideologically balanced and slanted choices. Our design allows us to intervene in a feedback loop that has confounded the study of algorithmic polarization—the complex interplay between supply of recommendations and user demand for content—to examine downstream effects on policy attitudes. We use over 130,000 experimentally manipulated recommendations and 31,000 platform interactions to estimate how recommendation algorithms alter users’ media consumption decisions and, indirectly, their political attitudes. Our results cast doubt on widely circulating theories of algorithmic polarization by showing that even heavy-handed (although short-term) perturbations of real-world recommendations have limited causal effects on policy attitudes. Given our inability to detect consistent evidence for algorithmic effects, we argue the burden of proof for claims about algorithm-induced polarization has shifted. Our methodology, which captures and modifies the output of real-world recommendation algorithms, offers a path forward for future investigations of black-box artificial intelligence systems. Our findings reveal practical limits to effect sizes that are feasibly detectable in academic experiments.
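The manipulation the abstract describes, capturing real recommendations and re-ranking them into ideologically slanted ("filter bubble") or balanced slates, can be illustrated with a short sketch. This is a hypothetical illustration, not the study's platform code; the ideology score, field names, and function are assumptions made for exposition.

    # Hypothetical sketch: re-rank captured recommendations into a slanted or
    # balanced slate. The `ideology` score in [-1, 1] (negative = left-leaning,
    # positive = right-leaning) is an assumed annotation, not the study's data.
    def build_slate(recommendations, condition, k=10):
        if condition == "slant_left":
            return sorted(recommendations, key=lambda r: r["ideology"])[:k]
        if condition == "slant_right":
            return sorted(recommendations, key=lambda r: -r["ideology"])[:k]
        # Balanced: alternate the most left- and the most right-leaning items.
        left = sorted(recommendations, key=lambda r: r["ideology"])
        right = sorted(recommendations, key=lambda r: -r["ideology"])
        slate, seen = [], set()
        for pair in zip(left, right):
            for rec in pair:
                if rec["id"] not in seen and len(slate) < k:
                    seen.add(rec["id"])
                    slate.append(rec)
        return slate

    videos = [{"id": i, "ideology": s} for i, s in enumerate([-0.8, -0.2, 0.1, 0.6, 0.9])]
    print([v["id"] for v in build_slate(videos, "slant_right", k=3)])  # [4, 3, 2]

In the experiments, slates of this kind replace the platform's own ranking so that downstream viewing choices and attitudes can be compared across conditions.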


Tracking Truth with Liquid Democracy

January 2025 · 7 Reads · 1 Citation

Management Science

The dynamics of random transitive delegations on a graph are of particular interest when viewed through the lens of an emerging voting paradigm: liquid democracy. This paradigm allows voters to choose between directly voting and transitively delegating their votes to other voters so that those selected cast a vote weighted by the number of delegations that they received. In the epistemic setting, where voters decide on a binary issue for which there is a ground truth, previous work showed that a few voters may amass such a large amount of influence that liquid democracy is less likely to identify the ground truth than direct voting. We quantify the amount of permissible concentration of power and examine more realistic delegation models, showing that they behave well by ensuring that (with high probability) there is a permissible limit on the maximum number of delegations received. Our theoretical results demonstrate that the delegation process is similar to well-known processes on random graphs that are sufficiently bounded for our purposes. Along the way, we prove new bounds on the size of the largest component in an infinite Pólya urn process, which may be of independent interest. In addition, we empirically validate the theoretical results, running six experiments (for a total of N = 168 participants, 62 delegation graphs, and over 11,000 votes collected). We find that empirical delegation behaviors meet the conditions for our positive theoretical guarantees. Overall, our work alleviates concerns raised about liquid democracy and bolsters the case for the applicability of this emerging paradigm. This paper was accepted by Martin Bichler, market design, platform, and demand analytics. Funding: This work was supported by the Michael Hammer Fellowship, the Office of Naval Research [2016 Vannevar Bush Faculty Fellowship, 2020 ONR Vannevar Bush Faculty Fellowship, and Grant N00014-20-1-2488], the Office of Secretary of Defense [Grants ARO MURI W911NF-19-0217 and ARO W911NF-17-1-0592], Simons [Investigator Award 622132], the Open Philanthropy Foundation, and the National Science Foundation [Grants IIS-178108, CCF-1733556, CCF-1918421, CCF-2007080, IIS-1703846, and IIS-2024287]. J. Y. Halpern was supported by a grant from the Cooperative AI Foundation [ARO Grant W911NF-22-1-0061]. Supplemental Material: The online appendix and data files are available at https://doi.org/10.1287/mnsc.2023.02470 .
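A compact way to see the mechanic the abstract describes, direct voting versus transitive delegation with delegation-weighted votes, is sketched below. The function names, toy graph, and the convention of dropping votes caught in a delegation cycle are illustrative assumptions, not the paper's model or code.

    from collections import defaultdict

    def resolve_delegations(delegation):
        """Map each direct voter to their weight: themselves plus every voter
        whose delegation chain terminates at them. `delegation` maps a voter to
        the voter they delegate to, or to None if they vote directly. Votes
        trapped in a delegation cycle are dropped (one possible convention)."""
        weights = defaultdict(int)
        for voter in delegation:
            current, seen = voter, set()
            while delegation[current] is not None:   # follow the chain
                if current in seen:                   # cycle detected
                    current = None
                    break
                seen.add(current)
                current = delegation[current]
            if current is not None:
                weights[current] += 1
        return dict(weights)

    # Toy graph: a -> b -> c and e -> d delegate; c and d vote directly.
    delegation = {"a": "b", "b": "c", "c": None, "d": None, "e": "d"}
    weights = resolve_delegations(delegation)          # {"c": 3, "d": 2}
    votes = {"c": True, "d": False}                    # binary issue with a ground truth
    outcome = sum(w if votes[v] else -w for v, w in weights.items()) > 0
    print(weights, outcome)

The paper's concern is exactly the quantity this computes: how concentrated the resulting weights can become before the weighted vote tracks the ground truth less reliably than direct voting.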


Can Biased State Media Change Minds? Null Evidence from Poland

December 2024 · 7 Reads

We investigate the impact of exposure to biased media content on political attitudes, focusing on a specific government-controlled broadcaster in Poland. In an experiment (N = 1,943), we randomly assigned participants to either a treatment condition, where they read excerpts from a representative sample of articles published by this media outlet on divisive political topics, or to a control condition. We do not find evidence that exposure to these excerpts influenced political attitudes (such as distrust in the opposition, perceptions of outsider threat, and meta-perceptions of these attitudes in society). Additionally, there was no evidence that mentioning the source of the information affected the persuasiveness of the news articles, nor that party preferences or political interest moderates any of these effects. These findings raise questions about the causal role of biased media in political polarization during periods of high political discord.


Figures: Experiment 1 ratings of headline manipulativeness (panel a) and trustworthiness (panel b) by condition, headline veracity and emotional manipulativeness (n = 1,030); perceived accuracy ratings by condition, headline veracity and emotional manipulativeness for Experiment 2 (n = 2,033); and perceived accuracy ratings by condition and headline type for Experiments 3 (n = 1,211), 4 (n = 1,211) and 5 (n = 1,804). Each figure shows average adjusted predictions (horizontal lines) with 95% confidence intervals (vertical lines); each dot is one participant's mean rating. Predictions were obtained from OLS regression with robust standard errors clustered on participant and item (two-sided tests without adjustments for multiple comparisons).
Inoculation and accuracy prompting increase accuracy discernment in combination but not alone
  • Article

November 2024 · 157 Reads · 3 Citations

Nature Human Behaviour

Misinformation is a major focus of intervention efforts. Psychological inoculation—an intervention intended to help people identify manipulation techniques—is being adopted at scale around the globe. Yet the efficacy of this approach for increasing belief accuracy remains unclear, as prior work uses synthetic materials that do not contain claims of truth. To address this issue, we conducted five studies with 7,286 online participants using a set of news headlines based on real-world true/false content in which we systematically varied the presence or absence of emotional manipulation. Although an emotional manipulation inoculation did help participants identify emotional manipulation, there was no improvement in participants’ ability to tell truth from falsehood. However, when the inoculation was paired with an intervention that draws people’s attention to accuracy, the combined intervention did successfully improve truth discernment (by increasing belief in true content). These results provide evidence for synergy between popular misinformation interventions.
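The figure captions above report average adjusted predictions from OLS regressions with robust standard errors clustered on participant and item. A minimal sketch of that kind of estimation follows; the data frame and variable names are hypothetical, and it clusters on participant only for brevity, whereas the paper clusters on both participant and item.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per participant x headline rating.
    df = pd.DataFrame({
        "rating":      [4, 2, 3, 2, 5, 1, 4, 3, 5, 2, 3, 1],
        "condition":   ["control"] * 6 + ["inoculation"] * 6,
        "veracity":    ["true", "false"] * 6,
        "participant": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    })

    # The condition x veracity interaction captures whether the intervention
    # improves discernment between true and false headlines.
    model = smf.ols("rating ~ C(condition) * C(veracity)", data=df)
    result = model.fit(cov_type="cluster", cov_kwds={"groups": df["participant"]})
    print(result.summary())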



Labeling AI-Generated Media Online

June 2024 · 53 Reads · 1 Citation

Recent advancements in generative artificial intelligence (AI) have raised widespread concern about the use of this technology to spread audio and visual misinformation. In response, there has been a major push among policymakers and technology companies to label AI-generated media appearing online. It remains unclear, however, what labels are most effective for this purpose. Using two pre-registered survey experiments (total N = 7,579 Americans), we evaluate the consequences of different labeling strategies for viewers' beliefs and behavior. Overall, we find that all the labels we tested significantly decreased participants' belief in the presented claims. When it comes to engagement intentions, however, labels that merely informed participants that content was AI-generated tended to have limited impact, whereas labels emphasizing the content's potential to mislead more strongly influenced self-reported behavior, especially in the first study. Together, these results shed light on the relative advantages and disadvantages of different approaches to labeling AI-generated media.


Partisan consensus and divisions on content moderation of misinformation

June 2024 · 6 Reads

Debates on how tech companies ought to oversee the circulation of content on their platforms are increasingly pressing. In the U.S., questions surrounding what, if any, action should be taken by social media companies to moderate harmfully misleading content on topics such as vaccine safety and election integrity are now being hashed out from corporate boardrooms to federal courtrooms. But where does the American public stand on these issues? Here we discuss the findings of a recent nationally representative poll of Americans’ views on content moderation of harmfully misleading content.


Combating misinformation: A megastudy of nine interventions designed to reduce the sharing of and belief in false and misleading headlines

June 2024 · 227 Reads · 7 Citations

Researchers have tested a variety of interventions to combat misinformation on social media (e.g., accuracy nudges, digital literacy tips, inoculation, debunking). These interventions work via different psychological mechanisms, but all share the goals of increasing recipients’ ability to distinguish between true and false information and/or increasing the veracity of news shared on social media. The current megastudy with 33,233 US-based participants tests nine prominent misinformation interventions in an identical setting using true, false, and misleading health and political news headlines. We find that a wide variety of interventions can improve discernment between true versus false or misleading information during accuracy and sharing judgments. Reducing misinformation belief and sharing is a goal that is accomplishable through multiple strategies targeting different psychological mechanisms.
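The outcome the megastudy targets, discernment, is conventionally the gap between responses to true and to false or misleading headlines. A minimal sketch of that calculation on hypothetical data (column names and values are illustrative, not the study's):

    import pandas as pd

    # Hypothetical accuracy judgments: one row per participant x headline.
    ratings = pd.DataFrame({
        "participant": [1, 1, 1, 1, 2, 2, 2, 2],
        "veracity":    ["true", "false", "true", "false"] * 2,
        "accurate":    [1, 0, 1, 1, 1, 1, 0, 0],   # 1 = headline rated accurate
    })

    # Per-participant discernment: belief in true items minus belief in false items.
    means = (ratings
             .groupby(["participant", "veracity"])["accurate"]
             .mean()
             .unstack("veracity"))
    discernment = means["true"] - means["false"]
    print(discernment.mean())   # average discernment across participants

An intervention that improves discernment raises this gap, whether by increasing belief in true headlines, decreasing belief in false ones, or both.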


Understanding Americans’ perceived legitimacy of harmful misinformation moderation by expert and layperson juries

June 2024 · 3 Reads

Content moderation is a critical aspect of platform governance on social media and of particular relevance to addressing the belief in and spread of misinformation. However, current content moderation practices have been criticized as unjust. This raises an important question: who do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative conjoint survey experiment (N=3,000) in which U.S. participants evaluated the legitimacy of hypothetical content moderation juries tasked with evaluating whether online content was harmfully misleading. These moderation juries varied on whether they were described as consisting of experts (e.g., domain experts), laypeople (e.g., social media users), or non-juries (e.g., a computer algorithm). We also randomized features of jury composition (size, necessary qualifications) and whether juries engaged in discussion during content evaluation. Overall, participants evaluated expert juries as more legitimate than layperson juries or a computer algorithm. However, modifying layperson jury features helped increase legitimacy perceptions: nationally representative or politically balanced composition enhanced legitimacy, as did increased size, individual juror knowledge qualifications, and enabling juror discussion. Maximally legitimate layperson juries were comparably legitimate to expert panels. Republicans perceived expert juries as less legitimate than Democrats did, but still more legitimate than baseline layperson juries. Conversely, larger lay juries with news knowledge qualifications who engaged in discussion were perceived as more legitimate across the political spectrum. Our findings shed light on the foundations of procedural legitimacy in content moderation and have implications for the design of online moderation systems.
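In a conjoint design like the one described here, randomized profile attributes (jury type, size, qualifications, discussion) are typically dummy-coded and regressed on the legitimacy rating, so each coefficient approximates the average marginal component effect of an attribute level. A minimal sketch with hypothetical data and variable names (not the study's code):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical conjoint data: one row per jury profile rated by a participant.
    profiles = pd.DataFrame({
        "legitimacy":  [5, 3, 6, 2, 4, 5, 3, 6, 4, 2, 5, 3],
        "jury_type":   ["expert", "layperson", "expert", "algorithm",
                        "layperson", "expert", "algorithm", "layperson",
                        "expert", "algorithm", "layperson", "expert"],
        "size":        ["small", "large", "large", "small", "large", "small",
                        "small", "large", "large", "small", "small", "large"],
        "discussion":  ["yes", "no", "yes", "no", "yes", "no",
                        "no", "yes", "no", "yes", "yes", "no"],
        "participant": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    })

    # Each coefficient approximates the average marginal component effect (AMCE)
    # of an attribute level relative to its reference level.
    model = smf.ols("legitimacy ~ C(jury_type) + C(size) + C(discussion)",
                    data=profiles)
    result = model.fit(cov_type="cluster",
                       cov_kwds={"groups": profiles["participant"]})
    print(result.params)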


Toolbox of individual-level interventions against online misinformation

May 2024 · 408 Reads · 77 Citations

Nature Human Behaviour

The spread of misinformation through media and social networks threatens many aspects of society, including public health and the state of democracies. One approach to mitigating the effect of misinformation focuses on individual-level interventions, equipping policymakers and the public with essential tools to curb the spread and influence of falsehoods. Here we introduce a toolbox of individual-level interventions for reducing harm from online misinformation. Comprising an up-to-date account of interventions featured in 81 scientific papers from across the globe, the toolbox provides both a conceptual overview of nine main types of interventions, including their target, scope and examples, and a summary of the empirical evidence supporting the interventions, including the methods and experimental paradigms used to test them. The nine types of interventions covered are accuracy prompts, debunking and rebuttals, friction, inoculation, lateral reading and verification strategies, media-literacy tips, social norms, source-credibility labels, and warning and fact-checking labels.


Citations (79)


... te the exposure to one-sided messages and contribute to a drift towards more extreme opinions (Baumann et al., 2020). However, other results question the influence of content curation on the formation of extreme attitudes and emphasize the importance of endogenous information search, where users choose what to engage with themselves (Garrett, 2017; N. Liu et al., 2025). ...

Reference:

Sampling and processing of climate change information and disinformation across three diverse countries
Short-term exposure to filter-bubble recommendation systems has limited polarization effects: Naturalistic experiments on YouTube
  • Citing Article
  • February 2025

Proceedings of the National Academy of Sciences

... However, while these issues represent very real and serious challenges, they do not seem insurmountable, with many scholars suggesting that they will admit of technical and practical solutions (e.g. Berinsky et al., 2025; Blum & Zuber, 2016; Christoff & Grossi, 2017; Geissel, 2022; Halpern et al., 2021; Kahng et al., 2018; Mendoza, 2015; Paulin, 2019; Revel et al., 2022; Valsangiacomo, 2022). ...

Tracking Truth with Liquid Democracy
  • Citing Article
  • January 2025

Management Science

... Therefore, we suggest that it is possible an inoculation that both highlights the risks of engaging with affectively polarised content generated by others and suggests a more active role for users in moderating the content they generate themselves could be more effective in reducing the use of affectively polarised language in user-generated content. This aligns with previous research showing that technique-focused inoculation interventions, while effective for technique detection, need to be supplemented with additional elements to transfer to related tasks like accuracy judgments [79]. It, therefore, remains an open question whether inoculation effects on intentions to engage with affectively polarising content would generalise to social media users' production of text on those platforms. ...

Inoculation and accuracy prompting increase accuracy discernment in combination but not alone

Nature Human Behaviour

... We utilize 40 real news headlines that are related to US politics, balanced in terms of partisanship, believability, and the likelihood of being shared. These headlines were generated for another study (79). Half were true and half false. ...

Combating misinformation: A megastudy of nine interventions designed to reduce the sharing of and belief in false and misleading headlines
  • Citing Preprint
  • June 2024

... Lastly, as we investigate the effect of prebunks and debunks and cannot credibly debunk true statements, we opted to only include misinformation in our set of stimuli, prohibiting us from measuring truth discernment [75]. Thus, we acknowledge the concern that debunking may increase general scepticism (as has been shown for game-based inoculation [76,77]) and that decreases in belief may not be specific to the claim that is being assessed. While we attempt to capture the possibility of a 'chilling effect' of the interventions by also measuring the effect on intention to share misinformation to disagree with it, it does not eliminate this concern. ...

Technique-based inoculation and accuracy prompts must be combined to increase truth discernment online
  • Citing Preprint
  • August 2023

... Boosts can be classified according to the kinds of competences they build or enhance. Digital literacy boosts involve strategies like lateral reading, modelled after the methods used by professional fact checkers to efficiently and effectively assess the credibility of unfamiliar websites, posts or information (Kozyreva et al., 2024). Risk literacy boosts include experienced simulations of risks that help people understand the temporal and cumulative nature of health risks (Wegwarth et al., 2022). ...

Toolbox of individual-level interventions against online misinformation
  • Citing Article
  • May 2024

Nature Human Behaviour

... As aggregators use different panels and do not actually own them, quality will vary significantly from one study to another. In the current research environment, there unfortunately seems to be a trade-off between response quality and participant representativeness (10). ...

Representativeness versus Response Quality: Assessing Nine Opt-In Online Survey Samples
  • Citing Preprint
  • February 2024

... Online surveys are commonly used by researchers to examine public views on a variety of topics. As the use of online surveys has grown, so has the research on survey data quality [1–4]. For example, research on the advantages and drawbacks of using online platforms has led to the development of recommendations [5,6] and a call for the use of reporting standards [7]. ...

Measuring Attentiveness in Self-Administered Surveys
  • Citing Article
  • March 2024

Public Opinion Quarterly

... Digital interventions, which take advantage of advanced technologies and the wide reach of the internet, have emerged as important tools for disseminating accurate information, countering false narratives, and fostering critical thinking among the public (e.g. Horta Ribeiro et al., 2023; Katsaros et al., 2024; Kiili et al., 2024; Lin et al., 2024; Matias, 2019). One type of novel application in the field of digital counter-misinformation campaigns is the online, gamified version of inoculation interventions originally developed in social psychology (McGuire, 1961). ...

Reducing misinformation sharing at scale using digital accuracy prompt ads