Anastasia Kozyreva’s research while affiliated with Max Planck Institute for Human Development and other places


Publications (32)


Real-time assessment of motives for sharing and creating content among highly active Twitter users
  • Preprint

July 2024 · 38 Reads · 1 Citation
Ruben C. Arslan · Anastasia Kozyreva · [...]

What motivates people to share and create content online? In real time, we linked each of N=2,762 individual posts (retweets and newly created content) with the self-reported motives from a sample of N=137 highly active US Twitter users over the course of one week. We also captured their total activity of N=48,419 posts over 10 weeks (March-May 2022). Our results reveal that sharing (retweeting) political content stemmed mostly from motives related to expression and identity. When creating content, participants were more likely to be motivated by the goals of informing and persuading others, for which they used negative language and expressed outrage. In contrast, entertaining content and positive language were used for socializing and attention. Original and political content featuring outrage and anger was more likely to be subsequently retweeted by others. These findings may denote adaptive strategies in the incentive structure of social media that rewards such content.
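
A rough sketch of how such post-to-motive linking could be set up is shown below; the column names, matching rule, and two-hour tolerance are illustrative assumptions, not the authors' pipeline.

```python
# Rough sketch (assumed columns and matching rule, not the authors' pipeline):
# join each post to the same user's nearest-in-time motive report.
import pandas as pd

posts = pd.DataFrame({
    "user_id": [1, 1, 2],
    "posted_at": pd.to_datetime(["2022-03-01 10:00", "2022-03-01 18:00", "2022-03-02 09:00"]),
    "is_retweet": [True, False, False],
})
motives = pd.DataFrame({
    "user_id": [1, 1, 2],
    "reported_at": pd.to_datetime(["2022-03-01 10:05", "2022-03-01 18:30", "2022-03-02 09:10"]),
    "motive": ["expression", "informing", "socializing"],
})

# merge_asof requires both frames sorted by their time keys; match within the
# same user and only to reports made shortly after the post (tolerance is an
# assumption made for this illustration).
linked = pd.merge_asof(
    posts.sort_values("posted_at"),
    motives.sort_values("reported_at"),
    left_on="posted_at",
    right_on="reported_at",
    by="user_id",
    direction="forward",
    tolerance=pd.Timedelta("2h"),
)
print(linked[["user_id", "posted_at", "is_retweet", "motive"]])
```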


Toolbox of individual-level interventions against online misinformation

May 2024 · 409 Reads · 79 Citations

Nature Human Behaviour

The spread of misinformation through media and social networks threatens many aspects of society, including public health and the state of democracies. One approach to mitigating the effect of misinformation focuses on individual-level interventions, equipping policymakers and the public with essential tools to curb the spread and influence of falsehoods. Here we introduce a toolbox of individual-level interventions for reducing harm from online misinformation. Comprising an up-to-date account of interventions featured in 81 scientific papers from across the globe, the toolbox provides both a conceptual overview of nine main types of interventions, including their target, scope and examples, and a summary of the empirical evidence supporting the interventions, including the methods and experimental paradigms used to test them. The nine types of interventions covered are accuracy prompts, debunking and rebuttals, friction, inoculation, lateral reading and verification strategies, media-literacy tips, social norms, source-credibility labels, and warning and fact-checking labels.


The Online Misinformation Engagement Framework
  • Literature Review
  • Full-text available

November 2023 · 228 Reads · 16 Citations

Current Opinion in Psychology

Research on online misinformation has evolved rapidly, but organizing its results and identifying open research questions is difficult without a systematic approach. We present the Online Misinformation Engagement Framework, which classifies people’s engagement with online misinformation into four stages: selecting information sources, choosing what information to consume or ignore, evaluating the accuracy of the information and/or the credibility of the source, and judging whether and how to react to the information (e.g., liking or sharing). We outline entry points for interventions at each stage and pinpoint the two early stages—source and information selection—as relatively neglected processes that should be addressed to further improve people’s ability to contend with misinformation.


Blinding to Circumvent Human Biases: Deliberate Ignorance in Humans, Institutions, and Machines

September 2023 · 58 Reads · 6 Citations

Perspectives on Psychological Science

Inequalities and injustices are thorny issues in liberal societies, manifesting in forms such as the gender-pay gap; sentencing discrepancies among Black, Hispanic, and White defendants; and unequal medical-resource distribution across ethnicities. One cause of these inequalities is implicit social bias: unconsciously formed associations between social groups and attributions such as "nurturing," "lazy," or "uneducated." One strategy to counteract implicit and explicit human biases is delegating crucial decisions, such as how to allocate benefits, resources, or opportunities, to algorithms. Algorithms, however, are not necessarily impartial and objective. Although they can detect and mitigate human biases, they can also perpetuate and even amplify existing inequalities and injustices. We explore how a philosophical thought experiment, Rawls's "veil of ignorance," and a psychological phenomenon, deliberate ignorance, can help shield individuals, institutions, and algorithms from biases. We discuss the benefits and drawbacks of methods for shielding human and artificial decision makers from potentially biasing information. We then broaden our discussion beyond the issues of bias and fairness and turn to a research agenda aimed at improving human judgment accuracy with the assistance of algorithms that conceal information that has the potential to undermine performance. Finally, we propose interdisciplinary research questions.
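
One way to picture the blinding strategy discussed here is a small sketch in which protected attributes are withheld from a model and numeric features that strongly track them are flagged as possible proxies. The column names, toy data, and correlation cutoff below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch (not the authors' method): "blinding" a decision model by
# withholding protected attributes and flagging likely proxies before training.
import pandas as pd
from sklearn.linear_model import LogisticRegression

PROTECTED = ["gender", "ethnicity"]  # attributes the model is blinded to
PROXY_CORR_THRESHOLD = 0.6           # arbitrary cutoff for flagging proxies

def blind_features(df: pd.DataFrame) -> pd.DataFrame:
    """Drop protected columns and warn about features that strongly track them."""
    candidates = df.drop(columns=PROTECTED)
    for col in candidates.select_dtypes("number"):
        for prot in PROTECTED:
            # Encode the protected attribute as category codes to check association.
            corr = candidates[col].corr(df[prot].astype("category").cat.codes)
            if abs(corr) > PROXY_CORR_THRESHOLD:
                print(f"warning: '{col}' may proxy for '{prot}' (r={corr:.2f})")
    return candidates

# Hypothetical applicant data: the model only ever sees the blinded feature set.
df = pd.DataFrame({
    "gender": ["f", "m", "f", "m"],
    "ethnicity": ["a", "b", "a", "b"],
    "experience_years": [4, 5, 7, 1],
    "test_score": [82, 75, 90, 60],
    "hired": [1, 0, 1, 0],
})
X = blind_features(df.drop(columns=["hired"]))
model = LogisticRegression().fit(X, df["hired"])
```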


[Figures: Building blocks of science-based policy for the case of misinformation · Cognitive foundations and signatory commitments under the strengthened Code of Practice on Disinformation 2022]
Incorporating Psychological Science Into Policy Making

July 2023 · 103 Reads · 13 Citations

The spread of false and misleading information in online social networks is a global problem in need of urgent solutions. It is also a policy problem because misinformation can harm both the public and democracies. To address the spread of misinformation, policymakers require a successful interface between science and policy, as well as a range of evidence-based solutions that respect fundamental rights while efficiently mitigating the harms of misinformation online. In this article, we discuss how regulatory and nonregulatory instruments can be informed by scientific research and used to reach EU policy objectives. First, we consider what it means to approach misinformation as a policy problem. We then outline four building blocks for cooperation between scientists and policymakers who wish to address the problem of misinformation: understanding the misinformation problem, understanding the psychological drivers and public perceptions of misinformation, finding evidence-based solutions, and co-developing appropriate policy measures. Finally, through the lens of psychological science, we examine policy instruments that have been proposed in the EU, focusing on the strengthened Code of Practice on Disinformation 2022.


Resolving content moderation dilemmas between free speech and harmful misinformation

February 2023 · 228 Reads · 74 Citations

Proceedings of the National Academy of Sciences

In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people's judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents' decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.
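
Conjoint data of this kind are typically analyzed by regressing the removal decision on the randomized scenario attributes. The sketch below shows that general idea on synthetic data; the variable names and simulated effects are assumptions, not the study's materials or code.

```python
# Minimal sketch (assumed variable names, not the study's analysis code):
# logistic regression of removal decisions on randomized conjoint attributes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # hypothetical number of respondent-scenario observations

data = pd.DataFrame({
    "topic": rng.choice(["election", "vaccine", "holocaust", "climate"], n),
    "consequences_severe": rng.integers(0, 2, n),
    "repeated_offense": rng.integers(0, 2, n),
})
# Simulated removal decisions: severity and repetition raise removal probability.
logit_p = -0.5 + 1.2 * data["consequences_severe"] + 0.8 * data["repeated_offense"]
data["remove_post"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "remove_post ~ C(topic) + consequences_severe + repeated_offense", data=data
).fit(disp=0)
print(model.params)  # log-odds effects of each attribute on the removal decision
```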


[Fig. 1 | Structure of the toolbox and map of evidence. The upper part summarizes the structure and composition of the toolbox, available at https://interventionstoolbox.mpib-berlin.mpg.de. The world map shows the studies from the evidence toolbox by country and intervention type; circle size denotes the number of studies. Interactive map: https://interventionstoolbox.mpib-berlin.mpg.de/toolbox_map.html. Map made with Natural Earth.]
Toolbox of Interventions Against Online Misinformation and Manipulation

December 2022 · 611 Reads · 42 Citations

The spread of misinformation through media and social networks threatens many aspects of society, including public health and the state of democracies. A wide range of individual-focused interventions aimed at reducing harm from online misinformation have been developed in the behavioral and cognitive sciences. We, an international group of 26 experts, introduce and analyze our toolbox of interventions against misinformation, which includes an up-to-date account of the interventions featured in 42 scientific papers. A resource for scientists, policy makers, and the public, the toolbox delivers both a conceptual overview of the breadth of interventions, including their target and scope, and a summary of the empirical evidence supporting the interventions, including the methods and experimental paradigms used to test them. The toolbox covers 10 types of interventions: accuracy prompts, debunking, friction, inoculation, lateral reading, media-literacy tips, rebuttals of science denialism, self-reflection tools, social norms, and warning and fact-checking labels.


Blinding to circumvent human biases: Deliberate ignorance in humans, institutions, and machines

November 2022 · 31 Reads · 1 Citation

Persistent inequalities and injustices are a blight on modern liberal societies. Examples abound, from the gender gap in pay to sentencing disparities between Black, Hispanic, and White defendants to allocation disparities in medical resources between Black and White patients. One cause of these and other inequalities is implicit social biases. In a process thought to be outside conscious control, human cognition is assumed to make associations between social groups and attributions such as "nurturing," "lazy," or "uneducated." Such associations can result in explicit unequal treatment. In theory, one way to circumvent implicit but, of course, also explicit biases is to delegate important decisions—for instance, on allocating benefits, resources, or opportunities—to algorithms, which are assumed to be free of human biases. However, evidence shows that algorithms can perpetuate and even amplify existing inequalities and injustices. We discuss how a philosophical thought experiment, Rawls's "veil of ignorance," and a psychological phenomenon, "deliberate ignorance" (the choice to not know), can shield people, institutions, and algorithms from biases. We explore the advantages and ways of blinding human and artificial decision makers to information that could result in biases and consider the practical problems of successfully blinding algorithms.


Critical Ignoring as a Core Competence for Digital Citizens

November 2022 · 148 Reads · 80 Citations

Current Directions in Psychological Science

Low-quality and misleading information online can hijack people’s attention, often by evoking curiosity, outrage, or anger. Resisting certain types of information and actors online requires people to adopt new mental habits that help them avoid being tempted by attention-grabbing and potentially harmful content. We argue that digital information literacy must include the competence of critical ignoring—choosing what to ignore and where to invest one’s limited attentional capacities. We review three types of cognitive strategies for implementing critical ignoring: self-nudging, in which one ignores temptations by removing them from one’s digital environments; lateral reading, in which one vets information by leaving the source and verifying its credibility elsewhere online; and the do-not-feed-the-trolls heuristic, which advises one not to reward malicious actors with attention. We argue that these strategies for implementing critical ignoring should be part of school curricula on digital information literacy. Teaching the competence of critical ignoring requires a paradigm shift in educators’ thinking, from a sole focus on the power and promise of paying close attention to an additional emphasis on the power of ignoring. Encouraging students and other online users to embrace critical ignoring can empower them to shield themselves from the excesses, traps, and information disorders of today’s attention economy.


[Figure 4: Coefficients for individual-level privacy calculus by scenario (Radar COVID = Spain; NHS COVID-19 = UK; Corona-Warn = Germany; COVIDSafe = Australia); dependent variable: CTT acceptance. Table: Generalized linear model of CTT acceptance.]
COVID-19, national culture, and privacy calculus: factors predicting the cross-cultural acceptance and uptake of contact-tracing technologies

October 2022 · 184 Reads

The use of information technologies for the public interest, such as COVID-19 tracking apps that aim to reduce the spread of the virus during the pandemic, involves a dilemma between public interest benefits and privacy concerns. Critical in resolving this conflict of interest are citizens’ trust in the government and the risks posed by COVID-19. How much can the government be trusted to access private information? Furthermore, to what extent do the health benefits offered by the technology outweigh the personal risks to one's privacy? We hypothesize that citizens’ acceptance of the technology can be conceptualized as a calculus of privacy concerns, government trust, and the public benefit of adopting a potentially privacy-encroaching technology. The importance that citizens place on their privacy and the extent to which they trust their governments vary throughout the world. The present study examined the public’s privacy calculus across nine countries (Australia, Germany, Italy, Japan, Spain, Switzerland, Taiwan, the United Kingdom, and the United States), focusing on social acceptance of contact-tracing technologies during the COVID-19 pandemic. We found that across countries, privacy concerns were negatively associated with citizens’ acceptance of the technology, while government trust, perceived effectiveness of the technology, and the health threats of COVID-19 were positively associated. National cultural orientations moderated the effects of the basic factors of privacy calculus. In particular, individualism (value of the individual) amplified the effect of privacy concerns, whereas general trust (trust in the wider public) amplified the effect of government trust. National culture therefore requires careful attention in resolving public policy dilemmas of privacy, trust, and public interest.
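
A minimal sketch of the kind of moderated model this abstract describes, assuming hypothetical variable names and synthetic data rather than the study's materials: a logistic GLM of CTT acceptance in which a country-level individualism score interacts with individual privacy concerns.

```python
# Minimal sketch (assumed variable names, not the study's model): a logistic GLM
# of contact-tracing-technology (CTT) acceptance where a country-level
# individualism score moderates the effect of individual privacy concerns.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800  # hypothetical respondents pooled across countries

data = pd.DataFrame({
    "privacy_concern": rng.normal(0, 1, n),
    "gov_trust": rng.normal(0, 1, n),
    "perceived_effectiveness": rng.normal(0, 1, n),
    "individualism": rng.normal(0, 1, n),  # country-level score, standardized
})
# Simulated acceptance: trust and effectiveness help, privacy concern hurts,
# and individualism amplifies the negative effect of privacy concern.
lin = (0.3 - 0.6 * data["privacy_concern"] + 0.5 * data["gov_trust"]
       + 0.4 * data["perceived_effectiveness"]
       - 0.3 * data["privacy_concern"] * data["individualism"])
data["accept_ctt"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

model = smf.logit(
    "accept_ctt ~ privacy_concern * individualism + gov_trust + perceived_effectiveness",
    data=data,
).fit(disp=0)
print(model.params)  # the interaction term captures the cultural moderation
```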


Citations (23)


... Kozyreva et al. [7,8] discuss interventions aimed at combating misinformation and emphasize the importance of timely cognitive support. Pennycook et al. [9] examine the role of cognitive reflection in discerning truth and demonstrate that nudges to encourage reflective thinking can mitigate the spread of misinformation. ...

Reference:

Factually: Exploring Wearable Fact-Checking for Augmented Truth Discernment
Toolbox of individual-level interventions against online misinformation
  • Citing Article
  • May 2024

Nature Human Behaviour

... This and other work highlight the role of community dynamics of social media in information exchange; health communication might leverage these relationships to disseminate accurate health information. Structural- and individual-level misinformation interventions across stages of engagement (eg, content labels, introducing friction that encourages the viewer to pause before sharing information) will also likely improve the quality of decision-making [69] about vaping. ...

The Online Misinformation Engagement Framework

Current Opinion in Psychology

... A key critical AI competence is therefore to assess whether a model is biased, for example, against marginalized social groups (e.g., racialized groups; Obermeyer et al., 2019). In the simplest case, this means being able to ascertain whether a model uses objectionable features (e.g., protected attributes including ethnicity or gender) or proxies thereof (Hertwig et al., 2023; Yeom et al., 2018). ...

Blinding to Circumvent Human Biases: Deliberate Ignorance in Humans, Institutions, and Machines
  • Citing Article
  • September 2023

Perspectives on Psychological Science

... For these reasons, it is crucial to address these challenges before they have a significant impact on users and potentially spill over into the real world. The importance of an interdisciplinary approach should also be emphasized (Kozyreva et al., 2023). ...

Incorporating Psychological Science Into Policy Making

... Public support for such moderation is well-documented in debates surrounding digital speech. While there is variation between countries in what is perceived as severely harmful content (Jiang et al., 2021), there is general support for moderating harmful content on online platforms (Kozyreva et al., 2023; Pradel et al., 2024). Thus, the safety-focused moderation of AI systems might appear to be a comparable case. ...

Resolving content moderation dilemmas between free speech and harmful misinformation
  • Citing Article
  • February 2023

Proceedings of the National Academy of Sciences

... Understanding and knowledge in the real world are limited both by our own finite cognitive capabilities and by the complexity of the environment, a principle that Herbert Simon called "bounded rationality" [20,21]. The consequence for experts is that they are masters of a narrower and narrower furrow of understanding [22]. No longer does the undergraduate study a degree in biology but rather a degree in immunology or molecular genetics. ...

Bounded Rationality: A Vision of Rational Choice in the Real World
  • Citing Chapter
  • December 2021

... Learning to scale back the use of unwanted apps, or going further to abandon the use of a smartphone or mobile digital device altogether in favor of analog, print-based, or other "low-tech" alternatives, can help students practice the skills of "critical ignoring" that may be necessary for maintaining focus and wellbeing in a sea of mis/information. Kozyreva et al. (2022) define "critical ignoring" as the disposition of "choosing what to ignore and where to invest one's limited attentional capacities" in an information-saturated attention economy. ...

Critical Ignoring as a Core Competence for Digital Citizens
  • Citing Article
  • November 2022

Current Directions in Psychological Science

... Research shows the positive contributions of social media (Adebisi et al., 2021; Gregory et al., 2021; Limaye et al., 2020; Lovari, 2020; Pang et al., 2021; Wang et al., 2021), but also highlights the double-edged nature of these platforms and how they can hinder effective responses to Covid-19 (Hyland-Wood et al., 2021; Jimenez-Sotomayor et al., 2020; Karakoç et al., 2020; Liao et al., 2020). In this regard, a wealth of studies examines how social media have contributed to the diffusion of fake news and disinformation (Adebisi et al., 2021; Ahmed et al., 2020; Chou et al., 2021; Gerosa et al., 2021; Jimenez-Sotomayor et al., 2020; Leuker et al., 2022; Wonodi et al., 2022). Chou et al. (2021) note that characteristics unique to the social media environment, namely the absence of information gatekeepers and the prominence of echo chambers reinforced by users' self-curated feeds and platform algorithms, may contribute to the spread of Covid misinformation. ...

Misinformation in Germany During the Covid-19 Pandemic: A Cross-Sectional Survey on Citizens’ Perceptions and Individual Differences in the Belief in False Information

European Journal of Health Communication