
Tweetment Effects on the Tweeted: Experimentally Reducing Racist Harassment


Abstract

I conduct an experiment which examines the impact of group norm promotion and social sanctioning on racist online harassment. Racist online harassment de-mobilizes the minorities it targets, and the open, unopposed expression of racism in a public forum can legitimize racist viewpoints and prime ethnocentrism. I employ an intervention designed to reduce the use of anti-black racist slurs by white men on Twitter. I collect a sample of Twitter users who have harassed other users and use accounts I control (“bots”) to sanction the harassers. By varying the identity of the bots between in-group (white man) and out-group (black man) and by varying the number of Twitter followers each bot has, I find that subjects who were sanctioned by a high-follower white male significantly reduced their use of a racist slur. This paper extends findings from lab experiments to a naturalistic setting using an objective, behavioral outcome measure and a continuous 2-month data collection period. This represents an advance in the study of prejudiced behavior.
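The 2 × 2 design described above (bot identity crossed with follower count) can be made concrete with a simple randomization sketch. This is an illustration only, not the author's replication code; the subject IDs and condition labels are hypothetical.

```python
import random

# Hypothetical labels for the 2 x 2 design: bot identity (in-group white man
# vs. out-group black man) crossed with bot follower count (high vs. low).
CONDITIONS = [
    ("white", "high_followers"),
    ("white", "low_followers"),
    ("black", "high_followers"),
    ("black", "low_followers"),
]

def assign_treatments(subject_ids, seed=42):
    """Randomly assign each subject to one of the four treatment cells.

    A real field experiment would typically block on pre-treatment
    covariates (e.g., prior slur use); simple randomization is shown
    here only to make the factorial design concrete.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    return {s: rng.choice(CONDITIONS) for s in subject_ids}

assignments = assign_treatments(["user_a", "user_b", "user_c"])
```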
Kevin Munger
Published online: 11 November 2016
© Springer Science+Business Media New York 2016
Keywords: Online harassment · Social media · Randomized field experiment · Social identity
Electronic supplementary material: The online version of this article (doi:10.1007/s11109-016-9373-5) contains supplementary material, which is available to authorized users.
Replication materials are available on the author’s website,
✉ Kevin Munger
Department of Politics, New York University, 19 West 4th Street, 2nd floor, New York, NY,
Polit Behav (2017) 39:629–649
DOI 10.1007/s11109-016-9373-5
... This is plausible because perceiving more similarities with the messenger is likely to reduce reactance and increase the chances that a person will comply with the message [140]. For example, Munger [108] finds that white male Twitter users targeting black users with racist harassment reduce their use of hateful language when they are confronted by a bot (purporting to be a real user), but only if the bot assumes the identity of a white man and also has high authority (a high number of followers). By contrast, a subset of harassers who do not attempt to conceal their identity increase their use of hateful language if they are confronted by a low-authority bot which assumes the identity of a black man. ...
Previous work suggests that people's preference for different kinds of information depends on more than just accuracy. This could happen because the messages contained within different pieces of information may either be well-liked or repulsive. Whereas factual information must often convey uncomfortable truths, misinformation can have little regard for veracity and leverage psychological processes which increase its attractiveness and proliferation on social media. In this review, we argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation by reducing, rather than increasing, the psychological cost of doing so. We cover how attention may often be shifted away from accuracy and towards other goals, how social and individual cognition is affected by misinformation and the cases under which debunking it is most effective, and how the formation of online groups affects information consumption patterns, often leading to more polarization and radicalization. Throughout, we make the case that polarization and misinformation adherence are closely tied. We identify ways in which the psychological cost of adhering to misinformation can be increased when designing anti-misinformation interventions or resilient affordances, and we outline open research questions that the CSCW community can take up in further understanding this cost.
... Rather than being made of glass, this window is manufactured and shaped by the collective choices and language of billions of people. Online behavior is shaped by a community's language [1], norms [2], moderation policies [3], initial posts [4], and the perceived demographic and social status of the participants [5]. ...
The language used in online discussions affects who participates in them and how they respond, which can influence perceptions of public opinion. This study examines how the term white privilege affects these dimensions of online communication. In two lab experiments, US residents were given a chance to respond to a post asking their opinions about renaming college buildings. Using the term white privilege in the question decreased the percentage of whites who supported renaming. In addition, those whites who remained supportive when white privilege was mentioned were less likely to create an online post, while opposing whites and non-whites showed no significant difference. The term also led to more low-quality posts among both whites and non-whites. The relationship between question language and the way participants framed their responses was mediated by their support or opposition for renaming buildings. This suggests that the effects of the term white privilege on the content of people’s responses is primarily affective. Overall, mention of white privilege seems to create internet discussions that are less constructive, more polarized, and less supportive of racially progressive policies. The findings have the potential to support meaningful online conversation and reduce online polarization.
... Research on discussions in cross-cutting social networks indicates that counter-attitudinal contributions (similar to disapproving evaluations) can undercut the participants' willingness to express themselves in the conversation (Lu & Gall Myrick, 2016; McDevitt et al., 2003; Mutz, 2002; Wojcieszak & Price, 2012). This is also supported by Munger (2017), who, drawing on social norm theory, shows that disapproving responses can reduce Twitter authors' future racist tweets. ...
In online discussions, users often evaluate comments from other users. On the basis of face theory, the present study analyzed the effects of evaluative replies on the evaluated comment authors. The investigation complements existing research, which has mainly focused on effects of comments on uninvolved readers. In the experimental study presented here, disapproving evaluations provoked negative and less positive emotions, and the evaluated authors were less willing to participate in the online discussion further. The authors’ perception of face threat mediated these effects. The results contribute to face theory in computer-mediated interactions and to our understanding of online discussions with dissonant standpoints.
The unique feature of the Internet is that individual negative attitudes toward minoritized and racialized groups and more extreme, hateful ideologies can find their way onto specific platforms and instantly connect people sharing similar prejudices. The enormous frequency of hate speech/cyberhate within online environments creates a sense of normalcy about hatred and the potential for acts of intergroup violence or political radicalization. While there is some evidence of effective interventions to counter hate speech through television, radio, youth conferences, and text messaging campaigns, interventions for online hate speech have only recently emerged. This review aimed to assess the effects of online interventions to reduce online hate speech/cyberhate. We systematically searched 2 database aggregators, 36 individual databases, 6 individual journals, and 34 websites, as well as bibliographies of published reviews of related literature, and scrutiny of annotated bibliographies of related literature. We included randomized and rigorous quasi‐experimental studies of online hate speech/cyberhate interventions that measured the creation and/or consumption of hateful content online and included a control group. Eligible populations included youth (10–17 years) and adult (18+ years) participants of any racial/ethnic background, religious affiliation, gender identity, sexual orientation, nationality, or citizenship status. The systematic search covered January 1, 1990 to December 31, 2020, with searches conducted between August 19, 2020 and December 31, 2020, and supplementary searches undertaken between March 17 and 24, 2022. We coded characteristics of the intervention, sample, outcomes, and research methods. We extracted quantitative findings in the form of a standardized mean difference effect size. We computed a meta‐analysis on two independent effect sizes. Two studies were included in the meta‐analysis, one of which had three treatment arms. 
For the purposes of the meta‐analysis we chose the treatment arm from the Álvarez‐Benjumea and Winter (2018) study that most closely aligned with the treatment condition in the Bodine‐Baron et al. (2020) study. However, we also present additional single effect sizes for the other treatment arms from the Álvarez‐Benjumea and Winter (2018) study. Both studies evaluated the effectiveness of an online intervention for reducing online hate speech/cyberhate. The Bodine‐Baron et al. (2020) study had a sample size of 1570 subjects, while the Álvarez‐Benjumea and Winter (2018) study had a sample size of 1469 tweets (nested in 180 subjects). The mean effect was small (g = −0.134, 95% confidence interval [−0.321, −0.054]). Each study was assessed for risk of bias on the following domains: randomization process, deviations from intended interventions, missing outcome data, measurement of the outcome, and selection of the reported results. Both studies were rated as “low risk” on the randomization process, deviations from intended interventions, and measurement of the outcome domains. We assessed the Bodine‐Baron et al. (2020) study as “some” risk of bias regarding missing outcome data and “high risk” for selective outcome reporting bias. The Álvarez‐Benjumea and Winter (2018) study was rated as “some concern” for the selective outcome reporting bias domain. The evidence is insufficient to determine the effectiveness of online hate speech/cyberhate interventions for reducing the creation and/or consumption of hateful content online. Gaps in the evaluation literature include the lack of experimental (random assignment) and quasi‐experimental evaluations of online hate speech/cyberhate interventions, addressing the creation and/or consumption of hate speech as opposed to the accuracy of detection/classification software, and assessing heterogeneity among subjects by including both extremist and non‐extremist individuals in future intervention studies. 
We provide suggestions for how future research on online hate speech/cyberhate interventions can fill these gaps moving forward.
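The pooled estimate reported above is an inverse-variance weighted combination of the two studies' standardized mean differences. As a minimal sketch of how such fixed-effect pooling works (the inputs below are made-up illustrative values, not the review's actual data):

```python
import math

def pool_effects(effects, variances):
    """Fixed-effect (inverse-variance) meta-analysis of independent
    standardized mean differences (e.g., Hedges' g).

    Returns the pooled estimate and its 95% confidence interval.
    """
    weights = [1.0 / v for v in variances]            # precision weights
    pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))                # pooled standard error
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Illustrative inputs for two studies (hypothetical g values and variances):
g, lo, hi = pool_effects([-0.10, -0.20], [0.004, 0.010])
```

A negative pooled g would indicate that, on average, the interventions reduced hateful content relative to control; with only two studies, as in the review above, any pooled estimate rests on very little evidence.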
How do local citizens publicly converse online about the protests that follow when police kill Black residents? And do participants reflect local publics? Here we examine racial justice protests in Baton Rouge after police killed Alton Sterling in 2016. Local news streamed the protests on Facebook Live. In comments appearing below the video, locals supported and attacked each other in real-time while watching protests unfold. We assess a representative sample of these comments. First, we find surprising demographic and political representativeness in comments compared to census data and a local survey. We also document extensive hostile rhetoric corresponding with commenter traits and expressed views. Finally, we find more “likes” for comments by women, college-educated people, and locals. Violent and racially derogatory comments by Blacks received fewer likes, but similar comments by whites went unpenalized. The results illuminate social media functions in local politics, racial disparities in contentious digital dialogues, and political communication’s dual roles in strengthening and undermining multiracial democracy.
The lack of consent or debriefing in online research has attracted widespread public distrust. How can designers create systems to earn and maintain public trust in large-scale online research? Procedural theories inform processes that enable individuals to make decisions about their participation. Substantive theories focus on the normative judgments that researchers and participants make about specific studies in context. Informed by these theories, we designed Bartleby, a system for debriefing participants and eliciting their views about studies that involved them. We evaluated this system by using it to debrief thousands of participants in a series of observational and experimental studies on Twitter and Reddit. We find that Bartleby addresses procedural concerns by creating new opportunities for study participants to exercise autonomy. We also find that participants use Bartleby to contribute to substantive, value-driven conversations about participant voice and power. We conclude with a critical reflection on the strengths and limitations of reusable software to satisfy values from both procedural and substantive ethical theories.
Despite the substantial amount of literature concerning adolescent bystanders of online hate and cyberbullying, relatively little attention has been devoted to studying the same issue in adults. Similarly, the determinants of the effectiveness of different messages to support the victims or counter hate have also been understudied. The existing pieces of empirical research on these topics remained scattered and no systematic review was performed to check if there are any patterns with regard to determinants and consequences of adult bystanders intervening against hate online. To fill these gaps, we performed a literature review in accordance with the guidelines of the Cochrane Collaboration Handbook for Systematic Reviews. The results of the literature search and analysis yielded three important findings. First, personal and contextual factors determining bystander action in adults largely overlap with the factors identified in adolescent populations: empathy, prior victimisation, feelings of responsibility, severity, social norms, relationship with the victim and number of bystanders. Second, personal factors promoting bystander action seem to be interconnected via empathy and social norms, both of which can be facilitated through psycho-education. Third, there is a critical lack of studies on the effectiveness of different bystander interventions.
News commenting is a prevalent form of online interaction, but it is fraught with issues, such as a low quality of discussion that often takes place. While various moderation methods can be used to maintain online discussion quality, one moderation strategy that is underexplored is for professional moderators to mark high-quality posts that are further highlighted in the interface. In this work, we look at the impact of New York Times (NYT) Picks. We present an analysis of more than 13 million NYT comments, examining the quality and frequency of commenting on the site in response to NYT Picks. The findings offer evidence that NYT Picks are associated with an increase in the quality of first-time receivers’ next approved comment, as well as the commenting frequency during commenters’ early tenure on the site. The quality boost associated with receiving a Pick attenuates after subsequent picks and diminishes over time as the user continues commenting but is still higher than commenters who do not receive Picks. Visible comment quality has a relatively small but significant positive correlation with the quality of the next comment, and exposure to Pick badges is also positively correlated with subsequent higher-quality approved comments, albeit to a lesser extent. Our results underscore the potential for news organizations to adopt the moderation strategy of highlighting professionally selected high-quality comments to improve overall community quality. We discuss the implications of our findings and offer design opportunities for comment sections that could further enhance quality in online discourse.
Research shows that group conflict sets ethnocentric thinking into motion. However, when group threat is not salient, can ethnocentrism still influence people’s political decision-making? In this paper, I argue that anger, unrelated to racial and ethnic groups, can activate the attitudes of ethnocentric whites and of those who score low in ethnocentrism, thereby causing these attitudes to be a stronger predictor of racial and immigration policy opinions. Using an adult national experiment over two waves, I induced several emotions to elicit anger, fear, or relaxation (unrelated to racial or ethnic groups). The experimental findings show that anger increases opposition to racial and immigration policies among whites who score high in ethnocentrism and enhances support for these policies among those who score low in ethnocentrism. Using data from the American National Election Study cumulative file, I find a similar non-racial/ethnic anger effect. The survey findings also demonstrate that non-racial/ethnic fear increases opposition to immigration among whites who do not have strong out-group attitudes.
This study rigorously compares the effectiveness of online mobilization appeals via two randomized field experiments conducted over the social microblogging service Twitter. In the process, we demonstrate a methodological innovation designed to capture social effects by exogenously inducing network behavior. In both experiments, we find that direct, private messages to followers of a nonprofit advocacy organization’s Twitter account are highly effective at increasing support for an online petition. Surprisingly, public tweets have no effect at all. We additionally randomize the private messages to prime subjects with either a “follower” or an “organizer” identity but find no evidence that this affects the likelihood of signing the petition. Finally, in the second experiment, followers of subjects induced to tweet a link to the petition are more likely to sign it—evidence of a campaign gone “viral.” In presenting these results, we contribute to a nascent body of experimental literature exploring political behavior in online social media.
Some commentators claim that white Americans put prejudice behind them when evaluating presidential candidates in 2008. Previous research on the question of white discrimination against black candidates has yielded mixed results, and suffers from such methodological limitations as hypothetical candidates, local samples of respondents, and racial attitude measures that fail to account for social desirability bias. Fortunately, the presidential candidacy of Barack Obama, combined with a methodological innovation in the measurement of racial stereotypes in the 2008 American National Election Studies, provides an unprecedented opportunity to examine more rigorously whether prejudice disadvantages black candidates. I find that negative stereotypes about blacks significantly eroded white support for Barack Obama; indeed, the effect of stereotypes may have been sufficient to cost Obama the popular vote among whites. Further, racial stereotypes do not predict support for previous presidential candidates or current prominent white Democrats, indicating that white voters punished Obama for his race rather than his party affiliation or policy platform. This finding indicates that white Americans have not put prejudice behind them after all.
Theories of human behavior suggest that individuals attend to the behavior of certain people in their community to understand what is socially normative and adjust their own behavior in response. An experiment tested these theories by randomizing an anticonflict intervention across 56 schools with 24,191 students. After comprehensively measuring every school's social network, randomly selected seed groups of 20-32 students from randomly selected schools were assigned to an intervention that encouraged their public stance against conflict at school. Compared with control schools, disciplinary reports of student conflict at treatment schools were reduced by 30% over 1 year. The effect was stronger when the seed group contained more "social referent" students who, as network measures reveal, attract more student attention. Network analyses of peer-to-peer influence show that social referents spread perceptions of conflict as less socially normative.
Campus racial harassment provided the context for an experiment, replicated over 3 different campus samples, regarding the effects of social influence on Whites' reactions to racism. Hearing someone condemn racism led Ss to express significantly stronger antiracist opinions than occurred following exposure to a no-influence control condition. Furthermore, hearing someone condone racism led Ss to adopt significantly less strong antiracist positions than when no other opinions were introduced. The robust social influence effects were obtained regardless of whether the source was White or Black or whether Ss responded publicly or privately. A social context approach to interracial settings is discussed.
Drawing on theories of social norms, we study the relative influence of female and male students using a year-long, network-based field experiment of an anti-harassment intervention program in a high school. A randomly selected subset of highly connected students participated in the intervention. We test whether these highly connected females and males influenced other students equally when students and teachers considered the problem of “drama”—peer conflict and harassment—to be associated with girls more than with boys. Exposure to male, but not female, intervention students caused decreased perceptions of the acceptability of harassment and decreased participation in negative behavior. Status beliefs became activated through the intervention program: gender differences in influence stem from higher levels of respect afforded to highly connected males in the program. The results support an account of social influence as it occurs across time in conjunction with other group processes.