Jason Reifler’s research while affiliated with University of Exeter and other places


Publications (2)


Resolving content moderation dilemmas between free speech and harmful misinformation
  • Article

February 2023 · 230 Reads · 80 Citations

Proceedings of the National Academy of Sciences

Anastasia Kozyreva · [...] · Jason Reifler

In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people's judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents' decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.
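The design described here is a conjoint experiment: each respondent evaluates multiple scenarios whose attributes (topic, severity of consequences, repeat offense, account features) are randomized independently. As a rough illustration only, the sketch below generates such randomized scenarios; the attribute names and levels are simplified guesses, not the authors' actual survey instrument.

```python
import random

# Hypothetical attribute levels for a conjoint scenario generator.
# The names and levels below are illustrative, not the study's instrument.
ATTRIBUTES = {
    "topic": ["election denial", "antivaccination",
              "Holocaust denial", "climate change denial"],
    "consequences": ["minor harm", "severe harm"],
    "offense": ["first offense", "repeated offense"],
    "followers": ["few followers", "many followers"],
    "account_partisanship": ["Democrat", "Republican", "apolitical"],
}

def draw_scenario(rng: random.Random) -> dict:
    """Sample one scenario by drawing each attribute level independently."""
    return {name: rng.choice(levels) for name, levels in ATTRIBUTES.items()}

if __name__ == "__main__":
    rng = random.Random(42)
    # In a conjoint design, each respondent evaluates several independently
    # randomized scenarios; here we simply print three draws.
    for _ in range(3):
        print(draw_scenario(rng))
```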


Figure 1: Conjoint Scenario Design
Figure 2: Proportion of choices to remove posts and to suspend accounts. All numeric values represent percentages. Panel A: Choices to remove posts or do nothing by misinformation topic (all cases). Panel B: Choices to remove posts or do nothing, by topic and respondents' party affiliation. Panel C: Choices to penalize account by misinformation topic (all cases). Panel D: Choices to penalize account by topic and respondents' party affiliation. N = 40,845 cases evaluated in total. (Cases evaluated by Democrats n = 19,338; by Independents n = 8,229; by Republicans n = 13,278.)
Figure 3: Preferences for content moderation. The figure reports average marginal component effects (AMCEs) plotted with 95% confidence intervals. In each row, effect sizes show the impact of each attribute level (on the right) relative to the reference attribute level (on the left), aggregated over all other attributes. Panel A: AMCEs are converted to percentage points and represent effects on the probability of removing the posts. Panel B: AMCEs represent effects on the rating to penalize the account. For marginal means, see Appendix Figure A1. For all AMCE and marginal means estimates, see Appendix Tables A3, A4, A5, and A6.
Figure 4: Respondent subgroup analyses: Differences by political affiliation. Marginal means point estimates and average marginal component effects (AMCEs) plotted with 95% confidence intervals. Panel A: Marginal means represent the average likelihood of deciding to remove the posts at each attribute level for three respondent subgroups: Republicans, Independents, and Democrats. The dashed line marks the mean value for a binary decision (0.5). Panel B: AMCEs represent effects on the probability of removing the posts for each attribute level, faceted by the three subgroups. Dashed lines mark the null effect. See Appendix Figure A4 for the subgroup analysis of the rating to penalize accounts.
Figure 5: Respondent subgroup analyses: Differences by attitude toward free speech. Marginal means point estimates and average marginal component effects (AMCEs) plotted with 95% confidence intervals. Panel A: Marginal means represent the average likelihood of deciding to remove the posts at each attribute level for two respondent subgroups: pro-freedom of expression and pro-mitigating misinformation. The dashed line marks the mean value for a binary decision (0.5). Panel B: AMCEs represent effects on the probability of removing the posts for each attribute level, faceted by the two subgroups. Dashed lines mark the null effect. See Appendix Figure A5 for the subgroup analysis of the rating to penalize accounts.
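The AMCEs and marginal means shown in Figures 3-5 come from a standard conjoint analysis. As a hedged sketch of how such AMCEs are typically estimated (a linear probability model on dummy-coded attribute levels with respondent-clustered standard errors, using simulated data and the statsmodels formula API; this is not the authors' actual analysis code or data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: each row is one evaluated scenario (made-up values,
# not the study's data). 'remove' is the binary decision to remove the post.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "respondent": rng.integers(0, 500, n),                # cluster id
    "consequences": rng.choice(["minor", "severe"], n),
    "repeated": rng.choice(["first offense", "repeated"], n),
})
# Fake outcome: higher removal probability for severe / repeated scenarios.
p = 0.4 + 0.2 * (df["consequences"] == "severe") + 0.1 * (df["repeated"] == "repeated")
df["remove"] = rng.binomial(1, p)

# Linear probability model with dummy-coded attributes; coefficients are AMCEs
# relative to the reference level, with respondent-clustered standard errors.
model = smf.ols("remove ~ C(consequences) + C(repeated)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["respondent"]}
)
print(model.summary())
```

Each coefficient in this kind of model is read as the average change in the probability of choosing to remove the post when moving from the reference attribute level to the listed level, averaged over the other attributes, which is how the AMCE panels in Figures 3-5 are interpreted.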


Free speech vs. harmful misinformation: Moral dilemmas in online content moderation
  • Preprint
  • File available

June 2022 · 484 Reads · 7 Citations

When moderating content online, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with the unprecedented scale and urgency of this conflict in a principled way. Yet little is known about people's judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, anti-vaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post, as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more likely to remove posts and suspend accounts if the consequences were severe and if it was a repeated offence. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents' decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or Independents to delete posts or penalize the accounts that posted them. Our results can inform the design of transparent rules of content moderation for human and algorithmic moderators.


Citations (2)


... (Cf. [84-89, 192-197, 203-209], and e.g. "shouting 'fire' in a crowded theater".) ...

Reference:

Cooperative Evolutionary Pressure and Diminishing Returns Might Explain the Fermi Paradox: On What Super-AIs Are Like
Resolving content moderation dilemmas between free speech and harmful misinformation
  • Citing Article
  • February 2023

Proceedings of the National Academy of Sciences