Conference Paper

Slash(dot) and burn: Distributed moderation in a large online conversation space

Authors: Cliff Lampe, Paul Resnick

Abstract

Can a system of distributed moderation quickly and consistently separate high and low quality comments in an online conversation? Analysis of the site Slashdot.org suggests that the answer is a qualified yes, but that important challenges remain for designers of such systems. Thousands of users act as moderators. Final scores for comments are reasonably dispersed and the community generally agrees that moderations are fair. On the other hand, much of a conversation can pass before the best and worst comments are identified. Of those moderations that were judged unfair, only about half were subsequently counterbalanced by a moderation in the other direction. And comments with low scores, not at top-level, or posted late in a conversation were more likely to be overlooked by moderators.
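For context on the mechanism the paper analyzes: Slashdot comments start at a default score, randomly chosen users spend moderation points to apply labeled +1/-1 moderations, scores are clamped to a fixed range, and readers filter the conversation by a score threshold. The Python sketch below only illustrates that flow; the starting score, bounds, and labels are approximations and are not taken from the paper.

```python
# Illustrative sketch of Slashdot-style distributed moderation
# (not from the paper; constants and labels are approximate).
from dataclasses import dataclass, field

MIN_SCORE, MAX_SCORE = -1, 5          # comment scores are clamped to this range

@dataclass
class Comment:
    text: str
    score: int = 1                    # typical starting score for a logged-in user
    moderations: list = field(default_factory=list)

    def moderate(self, delta: int, label: str) -> None:
        """Apply one labeled +1/-1 moderation (e.g. 'Insightful', 'Troll')."""
        assert delta in (-1, +1)
        self.score = max(MIN_SCORE, min(MAX_SCORE, self.score + delta))
        self.moderations.append((delta, label))

def visible(comments, threshold=1):
    """Readers browse at a threshold; low-scored comments drop out of view."""
    return [c for c in comments if c.score >= threshold]

c = Comment("First post!")
c.moderate(-1, "Troll")               # a possibly unfair moderation...
c.moderate(+1, "Insightful")          # ...counterbalanced by a later one
print(c.score)                        # back to the starting score of 1
```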


... A growing body of research in CSCW examines how users moderate their own content in online communities and increasingly leverage automation to more efficiently control bad behavior [66,93,106]. These studies describe the challenges of moderation in different platforms (e.g. ...
... For example, Jhaver et al. find that moderation transparency matters: offering removal explanations on Reddit reduces the likelihood of future post removals [55]. In [66], Lampe and Resnick observe that timeliness trades off with accuracy in distributed moderation systems like Slashdot. And while automated moderation systems scale well in removing obviously undesirable content (e.g. ...
... In open source, maintainers' and moderators' stances toward automation are likely to differ as open source contributors are more habituated to using tooling for increasing productivity and efficiency, whereas the efficiency of moderation has been found to trade off with quality [59,66]. The second part of RQ2 aims to provide insights on how well current moderation bots support human maintainers in open source contexts and what improvements are needed to reduce friction and concerns in adoption. ...
Article
Full-text available
Much of our modern digital infrastructure relies critically upon open sourced software. The communities responsible for building this cyberinfrastructure require maintenance and moderation, which is often supported by volunteer efforts. Moderation, as a non-technical form of labor, is a necessary but often overlooked task that maintainers undertake to sustain the community around an OSS project. This study examines the various structures and norms that support community moderation, describes the strategies moderators use to mitigate conflicts, and assesses how bots can play a role in assisting these processes. We interviewed 14 practitioners to uncover existing moderation practices and ways that automation can provide assistance. Our main contributions include a characterization of moderated content in OSS projects, moderation techniques, as well as perceptions of and recommendations for improving the automation of moderation tasks. We hope that these findings will inform the implementation of more effective moderation practices in open source communities.
... For example, although top-down, centralized approaches to moderation, such as banning users, can reduce hate speech [44], bans may also disproportionately impact people who have been historically marginalized [36,90], and civility moderation algorithms have been found to perpetuate racism and misogynoir [14,40,58,76]. While bottom-up, decentralized approaches, such as voting, can involve more people in moderation decisions [52,53], they can also silence people who are marginalized and promote dominant viewpoints [60]. In response, scholars have called for alternative content moderation models that center harm reduction [37,68,77,78]. ...
... Distributed moderation systems are typically easy to use, and therefore widely adopted by users. They also provide information about what content is accepted by and interesting to a community [52,64]. Distributed moderation may be thought of as more "democratic," since a greater number of users participate in moderation. ...
... However, while these systems have advantages, they can also lead to biased content management [60]. For example, in distributed moderation, comments with lower scores may receive slower moderation, and incorrect moderation may not be reversed [52]. Distributed moderation can also propagate misinformation when voters are not subject matter experts [33], reinforce echo chambers [64], and push marginalized users further to the margins [25]. ...
Preprint
Full-text available
Shortcomings of current models of moderation have driven policy makers, scholars, and technologists to speculate about alternative models of content moderation. While alternative models provide hope for the future of online spaces, they can fail without proper scaffolding. Community moderators are routinely confronted with similar issues and have therefore found creative ways to navigate these challenges. Learning more about the decisions these moderators make, the challenges they face, and where they are successful can provide valuable insight into how to ensure alternative moderation models are successful. In this study, I perform a collaborative ethnography with moderators of r/AskHistorians, a community that uses an alternative moderation model, highlighting the importance of accounting for power in moderation. Drawing from Black feminist theory, I call this "intersectional moderation." I focus on three controversies emblematic of r/AskHistorians' alternative model of moderation: a disagreement over a moderation decision; a collaboration to fight racism on Reddit; and a period of intense turmoil and its impact on policy. Through this evidence I show how volunteer moderators navigated multiple layers of power through care work. To ensure the successful implementation of intersectional moderation, I argue that designers should support decision-making processes and policy makers should account for the impact of the sociotechnical systems in which moderators work.
... Chief among these is the problem of scale: the large amount of content being generated on major online platforms makes it infeasible for moderators to handle all content needing review in a timely manner [29], and results in a high workload and stress for the moderators [66]. Though the platform-driven approach may be dominant in today's Web, its ascendancy was by no means a foregone conclusion: early online communities, with their decentralized ethos, tended to instead prefer a bottom-up, community-driven model [21,56]. As of late, community-driven moderation has seen a renewed surge in interest in light of the shortcomings of platform-driven moderation [6,67], and it remains the method of choice in smaller, interest-specific communities, for example, Twitch livestream communities [9,58] and the topical groups on Reddit known as "subreddits" [12,22,27]. ...
... While the risk of misuse necessarily implies that end-user tools cannot be as authoritative as moderators' tools (e.g., end users should not have the ability to remove someone else's content), platforms have managed to innovate various softer approaches that have met with some success. A particularly common end-user tool is the ability to vote on whether a piece of content constitutes a valuable contribution to the community; content that receives too many negative votes can then be automatically de-prioritized or hidden [12,56,59]. An even softer end-user tool is the personalized blocklist [24,44], which shifts the goal from removing objectionable content from the platform to simply removing it from an individual user's feed. ...
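A rough sketch of the two "soft" end-user tools mentioned in the excerpt above: community voting that automatically de-prioritizes heavily downvoted content, and a personalized blocklist that affects only one reader's feed. The threshold and field names below are hypothetical and not drawn from any particular platform.

```python
# Hypothetical illustration of vote-based hiding vs. a personal blocklist.
HIDE_THRESHOLD = -5                    # net score below which content is hidden for everyone

def net_score(post: dict) -> int:
    return post["ups"] - post["downs"]

def community_view(posts: list[dict]) -> list[dict]:
    """Community-wide effect: drop content voted down past the threshold."""
    return [p for p in posts if net_score(p) >= HIDE_THRESHOLD]

def personal_feed(posts: list[dict], blocklist: set[str]) -> list[dict]:
    """Per-user effect: blocked authors disappear only from this reader's feed."""
    return [p for p in posts if p["author"] not in blocklist]

posts = [
    {"author": "alice", "ups": 12, "downs": 2, "text": "useful tip"},
    {"author": "troll42", "ups": 1, "downs": 9, "text": "flamebait"},
]
print(community_view(posts))              # the net -8 post is hidden for everyone
print(personal_feed(posts, {"troll42"}))  # hidden only for this reader
```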
... One initial response to this concern is to point out that a similar premise of good faith underlies a number of user-facing moderation tools that already see widespread, large scale use-for example, both community voting [56,59] and flagging/reporting systems [18] only work to counteract incivility if they are used by users who actually desire civility, and are theoretically vulnerable to abuse by bad-faith users [65]. This has not stopped such systems from becoming a common part of platforms' moderation toolboxes-they are simply not the only tools in those toolboxes [67]. ...
Preprint
Incivility remains a major challenge for online discussion platforms, to such an extent that even conversations between well-intentioned users can often derail into uncivil behavior. Traditionally, platforms have relied on moderators to -- with or without algorithmic assistance -- take corrective actions such as removing comments or banning users. In this work we propose a complementary paradigm that directly empowers users by proactively enhancing their awareness about existing tension in the conversation they are engaging in and actively guides them as they are drafting their replies to avoid further escalation. As a proof of concept for this paradigm, we design an algorithmic tool that provides such proactive information directly to users, and conduct a user study in a popular discussion platform. Through a mixed methods approach combining surveys with a randomized controlled experiment, we uncover qualitative and quantitative insights regarding how the participants utilize and react to this information. Most participants report finding this proactive paradigm valuable, noting that it helps them to identify tension that they may have otherwise missed and prompts them to further reflect on their own replies and to revise them. These effects are corroborated by a comparison of how the participants draft their reply when our tool warns them that their conversation is at risk of derailing into uncivil behavior versus in a control condition where the tool is disabled. These preliminary findings highlight the potential of this user-centered paradigm and point to concrete directions for future implementations.
... User-generated content moderation has been a focus of social computing research for over 25 years [12], and many researchers have studied the general effects that content moderation has on social media communities [16,41,47]. ...
... There are a variety of reasons organizations moderate content, including setting norms [26,41], mitigating legal risks [10], and protecting users from harmful content [15]. However, the scale and breadth of social media have created contexts that push the limits of how effective automated content moderation approaches can be. ...
Preprint
Full-text available
In recent years, social media companies have grappled with defining and enforcing content moderation policies surrounding political content on their platforms, due in part to concerns about political bias, disinformation, and polarization. These policies have taken many forms, including disallowing political advertising, limiting the reach of political topics, fact-checking political claims, and enabling users to hide political content altogether. However, implementing these policies requires human judgement to label political content, and it is unclear how well human labelers perform at this task, or whether biases affect this process. Therefore, in this study we experimentally evaluate the feasibility and practicality of using crowd workers to identify political content, and we uncover biases that make it difficult to identify this content. Our results problematize crowds composed of seemingly interchangeable workers, and provide preliminary evidence that aggregating judgements from heterogeneous workers may help mitigate political biases. In light of these findings, we identify strategies to achieving fairer labeling outcomes, while also better supporting crowd workers at this task and potentially mitigating biases.
... The centralized approach uses paid or unpaid moderators, or companies contracted by the platform, to moderate according to the platform's policies [6]. In the distributed approach, users down-vote and report the undesirable content [23]. Reddit, Stack Overflow and Yik Yak use distributed moderation [23]. ...
... In the distributed approach, users down-vote and report the undesirable content [23]. Reddit, Stack Overflow and Yik Yak use distributed moderation [23]. Automated approaches use machine learning-based models to detect abusive content [6]. ...
Conference Paper
Full-text available
Even though emails are identified as a prominent source of exchanging abusive behaviors, very little work has explored abuse over emails. In our accepted paper in NSysS 2021, we explore perceptions of users on types of abuse detection systems for emails, revealing privacy concerns and lack of control in human-moderator-based systems and a noteworthy demand for an automated system. Motivated by the findings, we iteratively develop an automated abuse detection system "Citadel" for emails in two sequential phases and evaluate in both phases - first over 39 participants through in-person demonstrations, and second over 21 participants through a 3-day field study and over 63 participants through a video demonstration. Evaluation results portray efficacy, efficiency, and user acceptance of "Citadel" in detecting and preventing abusive emails.
... This process is instrumental in structuring conversations that are not only informative but also conducive to a productive learning environment. Cliff Lampe and Paul Resnick explore distributed moderation in large online conversation spaces like Slashdot, revealing how community-based curation can influence the dynamics of discussion [16]. The study offers a perspective on how distributed curation can be employed in educational settings, potentially enhancing student engagement and the depth of discourse. ...
Preprint
Social annotation platforms enable student engagement by integrating discussions directly into course materials. However, in large online courses, the sheer volume of comments can overwhelm students and impede learning. This paper investigates community-based design interventions on a social annotation platform (NB) to address this challenge and foster more meaningful online educational discussions. By examining student preferences and reactions to different curation strategies, this research aims to optimize the utility of social annotations in educational contexts. A key emphasis is placed on how the visibility of comments shapes group interactions, guides conversational flows, and enriches learning experiences. The study combined iterative design and development with two large-scale experiments to create and refine comment curation strategies, involving thousands of students. The study introduced specific features of the platform, such as targeted comment visibility controls, which demonstrably improved peer interactions and reduced discussion overload. These findings inform the design of next-generation social annotation systems and highlight opportunities to integrate Large Language Models (LLMs) for key activities like summarizing annotations, improving clarity in student writing, and assisting instructors with efficient comment curation.
... A key approach to managing the problem of online harassment is to develop moderation and blocking mechanisms (Crawford and Gillespie, 2016; Lampe and Resnick, 2004; Geiger, 2016). Our findings add nuance to our understanding of the challenges of this undertaking. ...
Preprint
In this paper, we use mixed methods to study a controversial Internet site: The Kotaku in Action (KiA) subreddit. Members of KiA are part of GamerGate, a distributed social movement. We present an emic account of what takes place on KiA who are they, what are their goals and beliefs, and what rules do they follow. Members of GamerGate in general and KiA in particular have often been accused of harassment. However, KiA site policies explicitly prohibit such behavior, and members insist that they have been falsely accused. Underlying the controversy over whether KiA supports harassment is a complex disagreement about what "harassment" is, and where to draw the line between freedom of expression and censorship. We propose a model that characterizes perceptions of controversial speech, dividing it into four categories: criticism, insult, public shaming, and harassment. We also discuss design solutions that address the challenges of moderating harassment without impinging on free speech, and communicating across different ideologies.
... Antisocial behavior can be commonly observed in online public discussions, whether on news websites or on social media. Methods of combating such behavior include comment ranking [39], moderation [53,67], early troll identification [14,18], and interface redesigns that encourage civility [51,52]. Several sites have even resorted to completely disabling comments [28]. ...
Preprint
In online communities, antisocial behavior such as trolling disrupts constructive discussion. While prior work suggests that trolling behavior is confined to a vocal and antisocial minority, we demonstrate that ordinary people can engage in such behavior as well. We propose two primary trigger mechanisms: the individual's mood, and the surrounding context of a discussion (e.g., exposure to prior trolling behavior). Through an experiment simulating an online discussion, we find that both negative mood and seeing troll posts by others significantly increases the probability of a user trolling, and together double this probability. To support and extend these results, we study how these same mechanisms play out in the wild via a data-driven, longitudinal analysis of a large online news discussion community. This analysis reveals temporal mood effects, and explores long range patterns of repeated exposure to trolling. A predictive model of trolling behavior shows that mood and discussion context together can explain trolling behavior better than an individual's history of trolling. These results combine to suggest that ordinary people can, under the right circumstances, behave like trolls.
... To address this issue, current online communities use methods such as user flagging and human moderation to filter comments. Further research in comment moderation suggests filtering for usefulness and sentiment to enhance user engagement [41,42]. For that reason, we ranked peer comments in order of usefulness and placed the most relevant comments at the top ( Figure 3E). ...
Preprint
This paper presents a description and evaluation of the ROC Speak system, a platform that allows ubiquitous access to communication skills training. ROC Speak (available at rocspeak.com) enables anyone to go to a website, record a video, and receive feedback on smile intensity, body movement, volume modulation, filler word usage, unique word usage, word cloud of the spoken words, in addition to overall assessment and subjective comments by peers. Peer comments are automatically ranked and sorted for usefulness and sentiment (i.e., positive vs. negative). We evaluated the system with a diverse group of 56 online participants for a 10-day period. Participants submitted responses to career oriented prompts every other day. The participants were randomly split into two groups: 1) treatment - full feedback from the ROC Speak system; 2) control - written feedback from online peers. When judged by peers (p<.001) and independent raters (p<.05), participants from the treatment group demonstrated statistically significant improvement in overall speaking skills rating while the control group did not. Furthermore, in terms of speaking attributes, treatment group showed an improvement in friendliness (p<.001), vocal variety (p<.05) and articulation (p<.01).
... Even though we cannot disentangle the effects of status and experience, we can still define features that capture aspects of a submitter's previous behavior within a community. Such features have previously been used in studies on Reddit [2,34] and Slashdot [40], among others. ...
Preprint
The content of today's social media is becoming more and more rich, increasingly mixing text, images, videos, and audio. It is an intriguing research question to model the interplay between these different modes in attracting user attention and engagement. But in order to pursue this study of multimodal content, we must also account for context: timing effects, community preferences, and social factors (e.g., which authors are already popular) also affect the amount of feedback and reaction that social-media posts receive. In this work, we separate out the influence of these non-content factors in several ways. First, we focus on ranking pairs of submissions posted to the same community in quick succession, e.g., within 30 seconds; this framing encourages models to focus on time-agnostic and community-specific content features. Within that setting, we determine the relative performance of author vs. content features. We find that victory usually belongs to "cats and captions," as visual and textual features together tend to outperform identity-based features. Moreover, our experiments show that when considered in isolation, simple unigram text features and deep neural network visual features yield the highest accuracy individually, and that the combination of the two modalities generally leads to the best accuracies overall.
... In contrast, community-driven platforms such as Reddit, Stack Overflow, and Slashdot often rely on explicit user feedback (e.g., upvotes) and the recency of posts [43] to determine what content is shown most saliently to users (this is often done with simple, deterministic algorithms, like Reddit's "Hot" algorithm). Distributed content moderation is effective in filtering content that is accepted and appreciated by a community [38]; but previous research has found that it may propagate misinformation [19] and (further) marginalize minorities [14,43]. Automated content moderation. ...
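Since the excerpt above cites Reddit's "Hot" ranking as an example of a simple deterministic algorithm, here is the commonly cited formulation from Reddit's formerly open-source codebase (the production algorithm may have since changed): the net vote score contributes logarithmically, and submission time adds a steadily growing bonus, so newer posts outrank older posts with similar scores.

```python
from datetime import datetime, timezone
from math import log10

def hot(ups: int, downs: int, submitted: datetime) -> float:
    """Reddit-style 'Hot' rank: log-scaled net votes plus a time bonus."""
    s = ups - downs
    order = log10(max(abs(s), 1))                  # the first 10 votes count as much as the next 100
    sign = 1 if s > 0 else -1 if s < 0 else 0
    seconds = submitted.timestamp() - 1134028003   # offset from a fixed epoch (Dec 2005)
    return round(sign * order + seconds / 45000, 7)

# A newer post with fewer votes can outrank an older, higher-voted one.
old = hot(500, 50, datetime(2024, 1, 1, tzinfo=timezone.utc))
new = hot(50, 5, datetime(2024, 1, 2, tzinfo=timezone.utc))
print(old, new)
```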
Preprint
Full-text available
Effective content moderation in online communities is often a delicate balance between maintaining content quality and fostering user participation. In this paper, we introduce post guidance, a novel approach to community moderation that proactively guides users' contributions using rules that trigger interventions as users draft a post to be submitted. For instance, rules can surface messages to users, prevent post submissions, or flag posted content for review. This uniquely community-specific, proactive, and user-centric approach can increase adherence to rules without imposing additional burdens on moderators. We evaluate a version of Post Guidance implemented on Reddit, which enables the creation of rules based on both post content and account characteristics, via a large randomized experiment, capturing activity from 97,616 posters in 33 subreddits over 63 days. We find that Post Guidance (1) increased the number of "successful posts" (posts not removed after 72 hours), (2) decreased moderators' workload in terms of manually-reviewed reports, (3) increased contribution quality, as measured by community engagement, and (4) had no impact on posters' own subsequent activity, within communities adopting the feature. Post Guidance on Reddit was similarly effective for community veterans and newcomers, with greater benefits in communities that used the feature more extensively. Our findings indicate that post guidance represents a transformative approach to content moderation, embodying a paradigm that can be easily adapted to other platforms to improve online communities across the Web.
... To better support the identification of tone in non-verbal content, such as comments, we suggest providing pre-designed comment templates that instruct the audience to craft straightforward and friendly questions for BlindTokers. We also recommend that TikTok implement crowd-sourcing mechanisms like "down-votes" [42] to allow other TikTokers to help identify unfriendly tones in comments collectively. Comments with significant down-votes should be hidden or displayed with a mark that reminds BlindTokers that these comments might involve harassment. ...
Article
Full-text available
Identity work in Human-Computer Interaction (HCI) has examined the asset-based design of marginalized groups who use technology to improve their quality of life. Our study illuminates the identity work of people with disabilities, specifically, visual impairments. We interviewed 45 BlindTokers (blind users on TikTok) from various backgrounds to understand their identity work from a positive design perspective. We found that BlindTokers leverage the affordances of the platform to create positive content, express their identities, and build communities with the desire to flourish. We proposed the notion of flourishing labor to describe the work conducted by BlindTokers for their community's flourishing, with implications for supporting this labor. This work contributes to understanding blind users' experience on short video platforms and highlights that flourishing is not just an activity for any single blind user but also a collective effort that necessitates serious and committed contributions from platforms and the communities they serve.
... Prior work into distributed moderation often focuses on Slashdot. Lampe and Resnick [53] and Lampe et al. [54] show the power of distributed moderation on Slashdot to identify high- and low-quality contributions, while also reporting some of the challenges associated with the approach. Jiang et al. [39] and Grimmelmann [31] similarly identify trade-offs between various moderation practices, including comparisons of centralized and distributed moderation. ...
Preprint
Full-text available
Social media platform design often incorporates explicit signals of positive feedback. Some moderators provide positive feedback with the goal of positive reinforcement, but are often unsure of their ability to actually influence user behavior. Despite its widespread use and theory touting positive feedback as crucial for user motivation, its effect on recipients is relatively unknown. This paper examines how positive feedback impacts Reddit users and evaluates its differential effects to understand who benefits most from receiving positive feedback. Through a causal inference study of 11M posts across 4 months, we find that users who received positive feedback made more frequent (2% per day) and higher quality (57% higher score; 2% fewer removals per day) posts compared to a set of matched control users. Our findings highlight the need for platforms and communities to expand their perspective on moderation and complement punitive approaches with positive reinforcement strategies.
... Pan et al. [35] found that decisions made by expert panels had greater perceived legitimacy than algorithms or general juries, implying that decisions made by a group of mods may be perceived more legitimately than by automated tools or by juries of community members. Agreement also plays a role in ensuring that moderation decisions are "fair," both to community members and mods [31]. ...
Preprint
There are three common stages in the moderation process employed by platforms like Reddit: rule creation, reporting/triaging, and report resolution. While the first two stages are well-studied in HCI, the third stage remains under-explored. Directly observing report resolution is challenging, since it requires using invasive tracking tools that moderators may feel uncomfortable with. However, evaluating the current state of this stage is crucial to improve moderation outcomes, especially as online communities continue to grow. In this paper, we present a non-invasive methodology to study report resolution via modeling and simulations. Using agent-based modeling, we analyze the performance of report resolution on Reddit using theory-driven measures and use our results to motivate interventions. We then highlight potential improvements that can be gained by adopting these interventions. We conclude by discussing how modeling and simulations can be used to navigate processes like report resolution and inform the design of new moderation interventions.
... At the heart of content moderation is a classification task, where platforms organize a vast array of user-generated content (UGC), from hate speech to misinformation, into categories defined in platforms' moderation policies (e.g., YouTube [96], Facebook [33]) to determine its appropriateness for the given platforms (Roberts, 2019). This classification is underscored by the moderation processes discussed in prior literature (e.g., [13,36,42]), where moderation policies establish the criteria for content classification to identify and flag policy violations [57]. Platforms typically employ human moderators for this classification task (Roberts, 2019) or develop complex algorithms to detect and categorize policy violations [45]. ...
Conference Paper
Full-text available
Protecting children’s online privacy is paramount. Online platforms seek to enhance child privacy protection by implementing new classification systems into their content moderation practices. One prominent example is YouTube’s “made for kids” (MFK) classification. However, traditional content moderation focuses on managing content rather than users’ privacy; little is known about how users experience these classification systems. Thematically analyzing online discussions about YouTube’s MFK classification system, we present a case study on content creators’ and consumers’ experiences. We found that creators and consumers perceived MFK classification as misaligned with their actual practices, creators encountered unexpected consequences of practicing labeling, and creators and consumers identified MFK classification’s intersections with other platform designs. Our findings shed light on an interwoven network of multiple classification systems that extends the original focus on child privacy to encompass broader child safety issues; these insights contribute to the design principles of child-centered safety within this intricate network.
... This can be done through voting mechanics, like upvotes or account score (sometimes called karma). Early examples of such systems include Slashdot, a large online forum with distributed moderation, which has a voting mechanism and a karma system (Lampe & Resnick, 2004). Slashdot describes its approach to moderation as 'like jury duty' (Slashdot, 2024), in that moderators are selected randomly and serve for a limited time. ...
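The jury-duty style selection described in this excerpt can be sketched roughly as follows; the eligibility criteria, thresholds, and point counts are illustrative assumptions, not Slashdot's actual parameters.

```python
# Hypothetical sketch of "jury duty" moderator selection: active users with
# enough karma are drawn at random and granted a few short-lived mod points.
import random
from dataclasses import dataclass

@dataclass
class User:
    name: str
    karma: int
    active: bool
    mod_points: int = 0

def select_moderators(users, n=5, karma_threshold=10, points=5):
    eligible = [u for u in users if u.active and u.karma >= karma_threshold]
    jury = random.sample(eligible, k=min(n, len(eligible)))
    for juror in jury:
        juror.mod_points = points       # would expire after a fixed window in a real system
    return jury

pool = [User(f"user{i}", karma=random.randint(0, 50), active=(i % 3 != 0)) for i in range(30)]
print([u.name for u in select_moderators(pool)])
```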
Thesis
Stack Exchange is a global knowledge sharing platform centred around programming, computer science, and a variety of other topics. It is a ubiquitous resource for coders and programmers. Knowledge sharing platforms, like Stack Exchange, are increasingly part of informal professional learning, and make professional knowledge accessible to people across the world. However, the platform has several persistent issues, like the under-participation of women and gender minorities. Given the ubiquity of the platform, and its positioning in recognising the expertise of programmers, there is an urgent need to understand how and why gendered participation patterns are reproduced in this environment. Female participation in computer science and engineering has long been a subject of academic research. This thesis extends this line of research to cover female, non-binary, and trans experiences of participating in the online production of programming and coding knowledge. The title of the thesis, Unicorns in Moderation, has multiple meanings: it refers to the 'unicorn' success of a technology platform; the unique way in which Stack Exchange's approach to moderation combines platform affordances, volunteer moderation, elected moderation, and automation; and the relatively low participation of female and non-binary members. Using a hybrid approach to digital ethnography, drawing on a mixture of interview, observation, document analysis and data analysis, I explore the gendered issues that are produced and reproduced on Stack Exchange. I find that the language policies on Stack Exchange are central to the reproduction of gendered discrimination, and that this is exacerbated by the gamified approach to content moderation. I also find that it is difficult for users to have measured discussions about gender-based discrimination on the platform due to the lack of recognition for embodied knowledge. From this, there is great potential to understand how online professional learning and knowledge sharing environments might avoid reproducing gender-based discrimination. Future research could extend this by observing how communities on emerging user-coordinated platforms, such as Slack and Discord, manage professional knowledge creation and documentation practices and how these practices are institutionally coordinated. The thesis has three main contributions. The first is a theoretical contribution, applying contemporary social epistemologies, such as epistemic ignorance, to digital contexts. The second is in the methodological design, which brings together a mixture of digital and conventional methods under the banner of institutional ethnography. The third is an empirical contribution, shedding new light on the discourses of gender on platforms. This compilation thesis comprises an extended history of Stack Overflow, three empirical papers, and one methodological paper. Paper 1, Writing the Social Web, argues for how digital platforms can be understood as institutional settings. Paper 2, Gaming Expertise Metrics, explores how the platform mechanics on Stack Overflow reinforce existing masculine hierarchies in programming. Paper 3, No Room for Kindness, examines the codification of communication on Stack Overflow, using interviews, policy texts, and social media data to explore the relations that prevent politeness on the platform.
Paper 4, Silencing Tactics, discusses how queer issues are discussed in the Stack Exchange community, and how these issues are minimised through the mechanisms of epistemic ignorance. https://hdl.handle.net/2077/79513
Parts of work:
Osborne, T. (2023). Writing the Social Web: Toward an Institutional Ethnography for the Internet. In P. C. Luken & S. Vaughan (Eds.), Critical Commentary on Institutional Ethnography: IE Scholars Speak to Its Promise (pp. 231–246). Springer International Publishing. https://doi.org/10.1007/978-3-031-33402-3_12
Osborne, T., Nivala, M., Seredko, A., & Hillman, T. (2023). Gaming Expertise Metrics: A Sociological Examination of Online Knowledge Creation Platforms. The American Sociologist. https://doi.org/10.1007/s12108-023-09607-x
Osborne, T. (2024). No Room for Kindness: Gender and Communication Conventions on Stack Overflow. [Unpublished manuscript]
Osborne, T. (2023). Silencing Tactics: Pronoun Controversies in a Community Questions and Answers Site. Journal of Digital Social Research, 5(1). https://doi.org/10.33621/jdsr.v5i1.122
... To find relevant results, I employed three distinct questions: (a) what are the data telling me, (b) what is it I want to know, and (c) what is the dialectical relationship between what the data are telling me and what I want to know, following the iterative analysis of Srivastava and Hopwood (2009), from which meaningful and recurring themes emerged (Morgan and Nica, 2020). The investigation required close observation of weekly posts ("jodels") that appeared in the most commented and loudest section on Jodel, representing the most engaged posts, since evaluations by others frequently serve as a reliable gauge for identifying which messages deserve attention (Lampe and Resnick, 2004). Posts with the highest interaction were therefore treated as widely accepted by the community, since Jodel relies on a community-guided moderation and filtering system to safeguard against negative content (Reelfs et al., 2022b), empowering the community to define acceptability via various affordances. ...
Article
Several outcomes have been documented regarding the effect of anonymity on behaviors and norms. However, unlike similar Complete Anonymous Applications (CAPs) that have been discontinued due to their anonymity affordance and associated drawbacks, Jodel has withstood the test of time since its inception. This study therefore examines, using the social construction framework, how the Jodel community in Ghana, specifically Accra, is jointly constructed to ensure positivity. Three themes emerged from the data through iterative analysis. The community's communal construction was found to be primarily based on user needs, later categorized as social solidarity, which influenced the other constructions: legalism and culture. Using these as a foundation, the community upheld social order through its constructions and Jodel's predefined affordances, emphasizing the possibility of social order even on CAPs.
... editorial boards for The Atlantic and the New York Times curate which submitted op-eds to publish, administrators at popular social news sites such as Slashdot manually select a small number of submitted tech news stories per day to publish [45], museum curators decide on which pieces of art to showcase and arrange into shows, teachers pick examples of student work to share with the class, and online publishers decide which news or comments to highlight [21,78]. As seen in these examples and others, a curation metaphor empowers community leaders to select which content is shared with the community. ...
Preprint
How can online communities execute a focused vision for their space? Curation offers one approach, where community leaders manually select content to share with the community. Curation enables leaders to shape a space that matches their taste, norms, and values, but the practice is often intractable at social media scale: curators cannot realistically sift through hundreds or thousands of submissions daily. In this paper, we contribute algorithmic and interface foundations enabling curation at scale, and manifest these foundations in a system called Cura. Our approach draws on the observation that, while curators' attention is limited, other community members' upvotes are plentiful and informative of curators' likely opinions. We thus contribute a transformer-based curation model that predicts whether each curator will upvote a post based on previous community upvotes. Cura applies this curation model to create a feed of content that it predicts the curator would want in the community. Evaluations demonstrate that the curation model accurately estimates opinions of diverse curators, that changing curators for a community results in clearly recognizable shifts in the community's content, and that, consequently, curation can reduce anti-social behavior by half without extra moderation effort. By sampling different types of curators, Cura lowers the threshold to genres of curated social media ranging from editorial groups to stakeholder roundtables to democracies.
... This method can be less time-intensive than pre-moderation, yet can still help prevent harmful content from being visible on a platform; 3) reactive moderation (Llansó 2020): it involves reviewing content only after it has been flagged or reported by other users. The platform relies on user reports to identify potentially harmful content first and then takes action if the content violates community standards or terms of service; and 4) distributed moderation (Lampe and Resnick 2004): it relies on the community of power users (i.e., those users who have high reputation within an online community) to moderate the content. This can involve giving power users tools to flag or report harmful content, as well as moderating comments and other user-generated content. ...
Conference Paper
Full-text available
Content moderation is a common intervention strategy for reviewing user-generated content on social media platforms. Engaging users in content moderation is promising for making ethical and fair moderation decisions. A few studies that have considered user engagement in content moderation have primarily focused on classifying user-generated comments, rather than leveraging the information of user engagement to make a moderation decision on user-generated posts. Moreover, how to extract information from user engagement to enhance content moderation remains unclear. To address the above-mentioned limitations, this study proposes a framework for user engagement-enhanced moderation of user-generated posts. Specifically, it incorporates the credibility and stance of user-generated content into graph learning. Our empirical evaluation shows that the models based on our proposed framework outperform the state-of-the-art deep learning models in making moderation decisions for user-generated posts. The findings of this study have implications for augmenting the moderation of social media content and for improving the safety and success of online communities.
... For example, on Discord, users may report to community moderators via direct messages, dedicated channels, or emails [25]. Community moderators are often community members elected or appointed to make community-specific moderation actions in accordance with community guidelines about (un)favorable behaviors [12,40]. Crawford and Gillespie argue that user reporting represents interactions between users, platforms, algorithms, and broader political forces [19]. ...
Preprint
User reporting is an essential component of content moderation on many online platforms -- in particular, on end-to-end encrypted (E2EE) messaging platforms where platform operators cannot proactively inspect message contents. However, users' privacy concerns when considering reporting may impede the effectiveness of this strategy in regulating online harassment. In this paper, we conduct interviews with 16 users of E2EE platforms to understand users' mental models of how reporting works and their resultant privacy concerns and considerations surrounding reporting. We find that users expect platforms to store rich longitudinal reporting datasets, recognizing both their promise for better abuse mitigation and the privacy risk that platforms may exploit or fail to protect them. We also find that users have preconceptions about the respective capabilities and risks of moderators at the platform versus community level -- for instance, users trust platform moderators more to not abuse their power but think community moderators have more time to attend to reports. These considerations, along with perceived effectiveness of reporting and how to provide sufficient evidence while maintaining privacy, shape how users decide whether, to whom, and how much to report. We conclude with design implications for a more privacy-preserving reporting system on E2EE messaging platforms.
... For bot-engaged or human-bot coordinated hate raids, we recommend a mechanism to facilitate and encourage passive users to use non-text-based communication [108] to impact the atmosphere in the chatroom. Therefore, a potential tool should be considered to support crowdsourcing practices of viewers in general, such as crowdsourcing moderation with up and down votes [59]. Similarly, designers can develop a feature to ensure encouraging messages on the top of the chatroom when the chatroom is full of bots with messages and notifications. ...
Preprint
Full-text available
Online harassment and content moderation have been well-documented in online communities. However, new contexts and systems always bring new forms of harassment and require new moderation mechanisms. This study focuses on hate raids, a form of real-time group attack in live streaming communities. Through a qualitative analysis of hate raid discussions in the Twitch subreddit (r/Twitch), we found that (1) hate raids, as human-bot coordinated group attacks, leverage the live stream system to attack marginalized streamers and other potential groups with(out) breaking the rules, (2) marginalized streamers suffer compound harms with insufficient support from the platform, and (3) moderation strategies are overwhelmingly technical, but streamers still struggle to balance moderation and participation considering their marginalization status and needs. We use affordances as a lens to explain how hate raids happen in live streaming systems and propose moderation-by-design as a lens when developing new features or systems to mitigate the potential abuse of such designs.
... Participatory governance is a pertinent perspective here. Prior moderation and governance scholarship has explored various ways, such as allowing users to develop policies and institutional procedures in Wikipedia [34] and giving users technical means to upvote or downvote as distributed moderation on Slashdot [61]. While Roblox currently relies on AI-enforced moderation, UGVW platforms in general could leverage participatory governance and empower end users in more important moderation roles. ...
Conference Paper
Full-text available
Metaverse platforms such as Roblox have become increasingly popular and profitable through a business model that relies on their end users to create and interact with user-generated virtual worlds (UGVWs). However, UGVWs are difficult to moderate, because game design is inherently more complex than static content such as text and images; and Roblox, a game platform targeted primarily at child players, is notorious for harmful user-generated games such as Nazi roleplay games and gambling-like mechanisms. To develop a better understanding of how harmful design is embedded in UGVWs, we conducted an empirical study to understand Roblox users' experiences with harmful design. We identified several primary ways in which user-generated game designs can be harmful, ranging from directly injecting inappropriate content into the virtual environment of UGVWs to embedding problematic incentive mechanisms into the UGVWs. We further discuss opportunities and challenges for mitigating harmful designs.
... As a result, many online platforms leverage at least some type of community-driven moderation features in hopes of mitigating these issues (e.g., Reddit users' community efforts to "flag" offensive or harassing content [18,43,49]). This community-driven moderation approach has often been shown to be effective in promoting more civil political discourse [27] and to weed out toxicity in online communities by pushing toxic members out [32], which allows communities to shape their guiding principles and experiences [67]. ...
... Examples of centralized moderation are enforcement of system-wide rules by moderators and commercial algorithmic moderation instituted by platforms [24]. Distributed moderation can take place in the form of user flagging, up-voting or similar recommendation systems, and reporting [42,44,45,62]. Proactive moderation curtails problematic posts before they are released, by changing user behavior or moderating preemptively, whereas reactive moderation responds to published content, often in a retributive manner such as banning or deletion [50,69,71]. ...
Article
Full-text available
Volunteer moderators have the power to shape society through their influence on online discourse. However, the growing scale of online interactions increasingly presents significant hurdles for meaningful moderation. Furthermore, there are only limited tools available to assist volunteers with their work. Our work aims to meaningfully explore the potential of AI-driven, automated moderation tools for social media to assist volunteer moderators. One key aspect is to investigate the degree to which tools must become personalizable and context-sensitive in order to not just delete unsavory content and ban trolls, but to adapt to the millions of online communities on social media mega-platforms that rely on volunteer moderation. In this study, we conduct semi-structured interviews with 26 Facebook Group moderators in order to better understand moderation tasks and their associated challenges. Through qualitative analysis of the interview data, we identify and address the most pressing themes in the challenges they face daily. Using interview insights, we conceptualize three tools with automated features that assist them in their most challenging tasks and problems. We then evaluate the tools for usability and acceptance using a survey drawing on the technology acceptance literature with 22 of the same moderators. Qualitative and descriptive analyses of the survey data show that context-sensitive, agency-maintaining tools in addition to trial experience are key to mass adoption by volunteer moderators in order to build trust in the validity of the moderation technology.
... As online communication grows by the day, today's super-platforms bear less and less resemblance to their early community prototypes. Where discussion boards and media were once methodically managed by dedicated administrators who built part of the platforms themselves, commercial platforms now operate at a scale that has moved away from older practices of community moderation (Lampe and Resnick, 2004) toward what has been labeled 'corporate machine moderation' or 'platform moderation' (Vadlamudi, 2015). ...
Article
In light of growing government pressure on platform operators to manage content in order to eliminate misinformation and 'hate speech', this study examines the introduction of machine moderation mechanisms; surveys the prevailing automated tools employed by key players in social media to manage rights violations, sabotage, and hate speech; and identifies the major structures required for their implementation. We address the purpose of this paper by reviewing selected literature. The article covers automated 'hash matching' and predictive artificial intelligence or machine learning tools. We define machine moderation as a technological approach set up to conduct content moderation at scale by major platforms for user-created content such as YouTube, Facebook, and Twitter.
... At the organizational level, to manage harmful content and improve participants' civility, communities use norm-setting to enforce standards of appropriate behaviors such as explicit guidelines, community norms, and reputation systems [15,56,65]. They also apply and combine moderation techniques [70], such as pre-moderation (checking content before publishing, such as moderation on Wikipedia [24]), post-moderation (publishing immediately and moderating within the next 24 hours, such as posts on Facebook and Reddit are removed by moderators [32]), automated moderation (technical tools applying pre-defined rules to reject or approve without human intervention, such as Twitter's Blocklist [30,43] and News bots [53]), and distributed moderation (relying on users' participation, such as rating scores on Slashdot [49] and flagging on Facebook [22]). Though the combination of moderation strategies can remove harmful content at scale, to some extent, harassers always circumvent the algorithm and abusive language is still pervasive [31]. ...
... Traditional practices to limit the disruption that can be caused by antisocial behavior consist of blocking messages based on basic text properties (e.g., length), interaction parameters (e.g., posting frequency, reply frequency), or according to the standards of designated moderators [19]. These practices have drawbacks, such as being applicable only to small and medium-sized conversations. ...
Preprint
Full-text available
Content moderation is the process of screening and monitoring user-generated content online. It plays a crucial role in stopping content resulting from unacceptable behaviors such as hate speech, harassment, violence against specific groups, terrorism, racism, xenophobia, homophobia, or misogyny, to name just a few, on Online Social Platforms. These platforms make use of a plethora of tools to detect and manage malicious information; however, malicious actors also improve their skills, developing strategies to surpass these barriers and continuing to spread misleading information. Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems. In response to this recent ongoing issue, this paper presents an innovative approach to address this linguistic trend in social networks through the simulation of different content evasion techniques and a multilingual Transformer model for content evasion detection. In this way, we share with the rest of the scientific community a multilingual public tool, named "pyleetspeak", to generate/simulate in a customizable way the phenomenon of content evasion through automatic word camouflage, and a multilingual Named-Entity Recognition (NER) Transformer-based model tuned for its recognition and detection. The multilingual NER model is evaluated in different textual scenarios, detecting different types and mixtures of camouflage techniques, achieving an overall weighted F1 score of 0.8795. This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of evasion of content on social networks, making the fight against information disorders more effective.
Article
Content moderation practices and technologies need to change over time as requirements and community expectations shift. However, attempts to restructure existing moderation practices can be difficult, especially for platforms that rely on their communities to moderate, because changes can transform the workflow, workload, and reward systems of participants. By examining the extensive archival discussions around a prepublication moderation technology on Wikipedia named Flagged Revisions, complemented by seven semi-structured interviews, we identify various challenges in restructuring community-based moderation practices. We find that while a new system might sound good in theory and perform well in terms of quantitative metrics, it may conflict with existing social norms. Furthermore, our findings underscore how the relationship between platforms and self-governed communities can hinder the ability to assess the performance of any new system and introduce considerable costs related to maintaining, overhauling, or scrapping any piece of infrastructure.
Article
The role of a moderator is often characterized as solely punitive; however, moderators have the power not only to execute reactive and punitive actions but also to create norms and support the values they want to see within their communities. One way moderators can proactively foster healthy communities is through positive reinforcement, but we do not currently know whether moderators on Reddit enforce their norms by providing positive feedback to desired contributions. To fill this gap in our knowledge, we surveyed 115 Reddit moderators to build two taxonomies: one of the content and behavior that moderators want to encourage, and another of the actions moderators take to encourage desirable contributions. We found that prosocial behavior, engaging with other users, and staying within the topic and norms of the subreddit are the most frequent behaviors that moderators want to encourage. We also found that moderators are taking actions to encourage desirable contributions, specifically through built-in Reddit mechanisms (e.g., upvoting), replying to the contribution, and explicitly approving the contribution in the moderation queue. Furthermore, moderators reported taking these actions specifically to reinforce desirable behavior to the original poster and other community members, even though many of the actions are anonymous, so the recipients are unaware that they are receiving feedback from moderators. Importantly, some moderators who do not currently provide feedback do not object to the practice. Instead, they are discouraged by the lack of explicit tools for positive reinforcement and the fact that their fellow moderators are not currently engaging in methods of encouragement. We draw on the taxonomy of actions moderators take, the reasons moderators are deterred from providing encouragement, and suggestions from the moderators themselves to discuss implications for designing tools that support positive feedback.
Article
Large-scale online platforms powered by user-generated content are extensively researched as venues of learning and knowledge production. In this ethnographically oriented study, we examine knowledge practices on a community question answering platform for computer programmers in relation to the platform mechanics of voting. Grounded in the practice-theoretical perspective and drawing on the analysis of online discussion threads and platform-related online materials, our study unpacks the dominant practice of crowd-based curation, the complementing practice of distributed moderation, and the more marginal practice of providing feedback to content producers. The practices co-exist in tension and consonance, which are embedded in the materiality of the platform and are continuously enacted through users' discursive boundary work, sustaining these practices as intelligible to other users and outlining what counts as legitimate participation on the platform. The study contributes to existing research on the roles voting plays on online platforms and offers implications for research on the social and material organization of users' online practices. The study also argues that it is the ambiguity around the mechanics of voting that allows these practices to co-exist. While this ambiguity is often discussed by users as problematic, we suggest, as a potential implication of our study, that it may be productive to design platforms for workable forms of ambiguity, allowing knowledge practices to co-exist in tension and providing space for user negotiations of these practices.
Article
Social media systems are as varied as they are pervasive. They have been almost universally adopted for a broad range of purposes including work, entertainment, activism, and decision making. As a result, they have also diversified, with many distinct designs differing in content type, organization, delivery mechanism, access control, and many other dimensions. In this work, we aim to characterize and then distill a concise design space of social media systems that can help us understand similarities and differences, recognize potential consequences of design choice, and identify spaces for innovation. Our model, which we call Form-From, characterizes social media based on (1) the form of the content, either threaded or flat, and (2) from where or from whom one might receive content, ranging from spaces to networks to the commons. We derive Form-From inductively from a larger set of 62 dimensions organized into 10 categories. To demonstrate the utility of our model, we trace the history of social media systems as they traverse the Form-From space over time, and we identify common design patterns within cells of the model.
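One way to read the Form-From model is as a two-dimensional classification into which any given system can be placed. The sketch below encodes the two dimensions as enumerations; the example placements are hypothetical illustrations, not taken from the paper.

```python
# Sketch of the Form-From design space as data: (1) the form of content
# (threaded or flat) and (2) where content comes from (spaces, networks, or
# the commons). Example placements are illustrative only.
from enum import Enum

class Form(Enum):
    THREADED = "threaded"
    FLAT = "flat"

class From_(Enum):
    SPACES = "spaces"
    NETWORKS = "networks"
    COMMONS = "commons"

# Hypothetical placements for intuition; the paper derives its own mapping.
examples = {
    "topic forum":        (Form.THREADED, From_.SPACES),
    "feed of friends":    (Form.FLAT, From_.NETWORKS),
    "sitewide trending":  (Form.FLAT, From_.COMMONS),
}
for system, (form, source) in examples.items():
    print(f"{system}: form={form.value}, from={source.value}")
```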
Article
Volunteer moderators serve as gatekeepers for problematic content, such as racism and other forms of hate speech, on digital platforms. Prior studies have reported volunteer moderators' diverse roles in different governance models, highlighting the tensions between moderators and other stakeholders (e.g., administrative teams and users). Building upon prior research, this paper focuses on how volunteer moderators moderate racist content and how a platform's governance influences these practices. To understand how moderators deal with racist content, we conducted in-depth interviews with 13 moderators from city subreddits on Reddit. We found that moderators heavily relied on AutoMod to regulate racist content and racist user accounts. However, content crafted through covert racism and "color-blind" racial frames was not addressed well. We attributed these challenges in moderating racist content to (1) moderators' concerns about power corruption, (2) arbitrary moderator team structures, and (3) evolving forms of covert racism. Our results demonstrate that decentralized governance on Reddit could not support local efforts to regulate color-blind racism. Finally, we discuss conceptual and practical ways to disrupt color-blind moderation.
Article
Social media sites like Reddit, Discord, and Clubhouse utilize a community-reliant approach to content moderation. Under this model, volunteer moderators are tasked with setting and enforcing content rules within the platforms' sub-communities. However, few mechanisms exist to ensure that the rules set by moderators reflect the values of their community. Misalignments between users and moderators can be detrimental to community health. Yet little quantitative work has been done to evaluate the prevalence or nature of user-moderator misalignment. Through a survey of 798 users on r/ChangeMyView, we evaluate user-moderator alignment at the level of policy-awareness (do users know what the rules are?), practice-awareness (do users know how the rules are applied?), and policy-/practice-support (do users agree with the rules and how they are applied?). We find that policy-support is high, while practice-support is low; using a hierarchical Bayesian model, we estimate the correlation between community opinion and moderator decisions to range from .14 to .45 across subreddit rules. Surprisingly, these correlations were only slightly higher when users were asked to predict moderator actions, demonstrating low awareness of moderation practices. Our findings demonstrate the need for careful analysis of user-moderator alignment at multiple levels. We argue that future work should focus on building tools to empower communities to conduct these analyses themselves.
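As a much-simplified, non-Bayesian illustration of measuring user-moderator alignment, the sketch below correlates the share of surveyed users favoring removal with the moderator's actual decision across a handful of hypothetical items. The paper's hierarchical Bayesian model pools information across rules and is not reproduced here.

```python
# Toy measure of user-moderator alignment for a single rule: Pearson
# correlation between community support for removal and moderator decisions.
from statistics import correlation   # Python 3.10+

# Hypothetical data: per item, fraction of users favoring removal and the
# moderator decision (1 = removed, 0 = kept).
user_support_for_removal = [0.9, 0.2, 0.6, 0.1, 0.8]
moderator_removed        = [1,   0,   0,   0,   1  ]

print(round(correlation(user_support_for_removal, moderator_removed), 2))
```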
Article
Moderators are at the core of maintaining healthy online communities. For these moderators, who are often volunteers from the community, filtering through content and responding to misbehavior in a timely manner has become increasingly challenging as online communities continue to grow. To address such challenges of scale, recent research has looked into designing better tools for moderators of various platforms (e.g., Reddit, Twitch, Facebook, and Twitter). In this paper, we focus on Discord, a platform where communities are typically involved in large, synchronous group chats, creating an environment with a faster pace and less structure than previously studied platforms. To tackle the unique challenges presented by Discord, we developed a new human-AI system called ConvEx for exploring online conversations. ConvEx is an AI-augmented version of the standard Discord interface designed to help moderators be proactive in identifying and preventing potential problems. It provides visual embeddings of conversational metrics, such as activity and toxicity levels, and can be extended to visualize other metrics. Through a user study with eight active moderators of Discord servers, we found that ConvEx supported several high-level strategies for monitoring a server and analyzing conversations. ConvEx allowed moderators to obtain a holistic view of activity across multiple channels on the server while guiding their attention toward problematic conversations and messages in a channel, helping them identify the contextual information needed to judge the reliability of the AI analysis while also picking up on contextual nuances that the AI missed. We conclude with design considerations for integrating AI into future interfaces for moderating synchronous, unstructured online conversations.
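The sketch below computes the kind of per-channel activity and toxicity metrics that a ConvEx-style view might visualize. The lexicon-based toxicity scorer and the data format are stand-ins for illustration; ConvEx's actual models and interface are not shown.

```python
# Per-channel conversational metrics (activity and average toxicity) of the
# kind a moderation dashboard might visualize. Toxicity scoring here is a
# toy lexicon lookup, not a real classifier.
from collections import defaultdict

TOXIC_TERMS = {"idiot", "stupid"}   # toy lexicon, purely illustrative

def toxicity(message: str) -> float:
    words = message.lower().split()
    return sum(w.strip(".,!?") in TOXIC_TERMS for w in words) / max(len(words), 1)

def channel_metrics(messages):            # iterable of (channel, message)
    stats = defaultdict(lambda: {"count": 0, "toxicity_sum": 0.0})
    for channel, text in messages:
        s = stats[channel]
        s["count"] += 1
        s["toxicity_sum"] += toxicity(text)
    return {ch: {"activity": s["count"],
                 "avg_toxicity": s["toxicity_sum"] / s["count"]}
            for ch, s in stats.items()}

print(channel_metrics([("general", "hello all"), ("general", "you idiot")]))
```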
Article
Shortcomings of current models of moderation have driven policy makers, scholars, and technologists to speculate about alternative models of content moderation. While alternative models provide hope for the future of online spaces, they can fail without proper scaffolding. Community moderators are routinely confronted with similar issues and have therefore found creative ways to navigate these challenges. Learning more about the decisions these moderators make, the challenges they face, and where they are successful can provide valuable insight into how to ensure alternative moderation models are successful. In this study, I perform a collaborative ethnography with moderators of r/AskHistorians, a community that uses an alternative moderation model, highlighting the importance of accounting for power in moderation. Drawing from Black feminist theory, I call this "intersectional moderation." I focus on three controversies emblematic of r/AskHistorians' alternative model of moderation: a disagreement over a moderation decision; a collaboration to fight racism on Reddit; and a period of intense turmoil and its impact on policy. Through this evidence I show how volunteer moderators navigated multiple layers of power through care work. To ensure the successful implementation of intersectional moderation, I argue that designers should support decision-making processes and policy makers should account for the impact of the sociotechnical systems in which moderators work.
Article
Can crowd workers be trusted to judge whether news-like articles circulating on the Internet are misleading, or do partisanship and inexperience get in the way? And can the task be structured in a way that reduces partisanship? We assembled pools of both liberal and conservative crowd raters and tested three ways of asking them to make judgments about 374 articles. In a no-research condition, they were simply asked to view the article and then render a judgment. In an individual research condition, they were also asked to search for corroborating evidence and provide a link to the best evidence they found. In a collective research condition, they were not asked to search, but instead to review links collected from workers in the individual research condition. Both research conditions reduced partisan disagreement in judgments. The individual research condition was most effective at producing alignment with journalists' assessments. In this condition, the judgments of a panel of sixteen or more crowd workers were better than those of a panel of three expert journalists, as measured by alignment with a held-out journalist's ratings.
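To illustrate the panel-aggregation logic behind the headline result, the sketch below averages simulated crowd ratings over panels of increasing size and correlates each panel's aggregate with a held-out reference rating. The data generation is entirely synthetic; only the aggregation-and-comparison step mirrors the study's setup.

```python
# Synthetic illustration: larger panels of noisy raters align better with a
# held-out reference rater. All ratings here are simulated.
from statistics import mean, correlation   # correlation: Python 3.10+
import random

random.seed(0)
n_articles = 20
journalist = [random.uniform(1, 7) for _ in range(n_articles)]   # held-out ratings

def crowd_rater():
    # A crowd rater is modeled as the reference judgment plus Gaussian noise.
    return [j + random.gauss(0, 2) for j in journalist]

def panel_alignment(k: int) -> float:
    panel = [crowd_rater() for _ in range(k)]
    aggregate = [mean(r[i] for r in panel) for i in range(n_articles)]
    return correlation(aggregate, journalist)

for k in (1, 3, 16):
    print(k, round(panel_alignment(k), 2))   # alignment rises with panel size
```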
Conference Paper
Full-text available
As each micro community centered around the streamer attempts to set its own guidelines in live streaming communities, it is common for volunteer moderators (mods) and the streamer to disagree on how to handle various situations. In this study, we conducted an online survey (N=240) with live streaming mods to explore their commitment to the streamer to grow the micro community and the different styles in which they handle conflicts with the streamer. We found that 1) mods apply more active and cooperative styles than passive and assertive styles to manage conflicts, but they might be forced to do so, and 2) mods with strong commitments to the streamer tend to apply styles that show either high concern for the streamer or low concern for themselves. We reflect on how these results can affect micro community development and recommend designs to mitigate conflict and strengthen commitment.
Article
Full-text available
Very large-scale conversation (VLSC) involves the exchange of thousands of electronic mail (e-mail) messages among hundreds or thousands of people. Usenet newsgroups are good examples (but not the only examples) of online sites where VLSCs take place. To facilitate understanding of the social and semantic structure of VLSCs, two tools from the social sciences—social networks and semantic networks—have been extended for the purposes of interface design. As interface devices, social and semantic networks need to be flexible, layered representations that are useful as a means for summarizing, exploring, and cross-indexing the large volumes of messages that constitute the archives of VLSCs. This paper discusses the design criteria necessary for transforming these social scientific representations into interface devices. The discussion is illustrated with the description of the Conversation Map system, an implemented system for browsing and navigating VLSCs.
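As a small illustration of the social-network side of such an interface, the sketch below builds a directed who-replied-to-whom graph from message threads. The message tuples are hypothetical; the Conversation Map's semantic-network machinery is not shown.

```python
# Sketch: aggregate a who-replied-to-whom graph from threaded messages.
from collections import Counter

# Hypothetical messages: (message_id, author, parent_message_id or None)
messages = [
    (1, "alice", None),
    (2, "bob", 1),
    (3, "carol", 1),
    (4, "alice", 2),
]

author_of = {mid: author for mid, author, _ in messages}
reply_edges = Counter(
    (author, author_of[parent])
    for _, author, parent in messages
    if parent is not None
)
print(reply_edges)   # e.g. ('bob', 'alice'): 1, ('carol', 'alice'): 1, ('alice', 'bob'): 1
```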
Conference Paper
Full-text available
An appropriately designed interface to persistent, threaded conversations could reinforce socially beneficial behavior by prominently featuring how frequently and to what degree each user exhibits such behaviors. Based on the data generated by the Netscan data-mining project [9], we have developed a set of tools for illustrating the structure of discussion threads like those found in Usenet newsgroups and the patterns of participation within the discussions. We describe the benefits and challenges of integrating these tools into a multi-faceted dashboard for navigating and reading discussions in social cyberspaces like Usenet and related interaction media. Visualizations of the structure of online discussions have applications for research into the sociology of online groups as well as possible interface designs for their members.
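The sketch below computes two simple statistics of the kind such a dashboard could surface: the maximum reply depth of a thread and the number of posts per author. The thread data and the choice of metrics are illustrative, not Netscan's actual measures.

```python
# Sketch of thread-structure and participation statistics for a dashboard view.
from collections import Counter

# Hypothetical thread: (message_id, author, parent_id or None)
thread = [(1, "ann", None), (2, "bob", 1), (3, "ann", 2), (4, "cai", 1)]

parent = {mid: p for mid, _, p in thread}

def depth(mid: int) -> int:
    """Distance from a message to the thread root."""
    return 0 if parent[mid] is None else 1 + depth(parent[mid])

posts_per_author = Counter(author for _, author, _ in thread)
print("max depth:", max(depth(m) for m, _, _ in thread))   # 2
print("posts per author:", posts_per_author)               # ann: 2, bob: 1, cai: 1
```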
Article
Full-text available
An informational cascade occurs when it is optimal for an individual, having observed the actions of those ahead of him, to follow the behavior of the preceding individual without regard to his own information. The authors argue that localized conformity of behavior and the fragility of mass behaviors can be explained by informational cascades. Copyright 1992 by University of Chicago Press.
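A toy simulation can make the cascade dynamic concrete: agents receive noisy private signals about a binary state, act in sequence, and observe all earlier actions. The counting heuristic below stands in for the model's Bayesian updating, and all parameter values are illustrative.

```python
# Toy informational-cascade simulation: once earlier adoptions outweigh
# rejections by 2 or more, the private signal no longer changes the decision.
import random

def run_cascade(n_agents: int = 20, signal_accuracy: float = 0.7, seed: int = 1):
    rng = random.Random(seed)
    true_state = 1
    actions = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < signal_accuracy else 1 - true_state
        adopt_lead = actions.count(1) - actions.count(0)
        if adopt_lead >= 2:          # cascade toward adoption
            actions.append(1)
        elif adopt_lead <= -2:       # cascade toward rejection
            actions.append(0)
        else:                        # otherwise follow the private signal
            actions.append(signal)
    return actions

print(run_cascade())   # after a few agents, everyone imitates regardless of signal
```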
Article
Full-text available
Usenet may be regarded as the world's largest conversational application, with over 17,000 newsgroups and 3 million users. Despite its ubiquity and popularity, however, we know little about the nature of the interactions it supports. This empirical paper investigates mass interaction in Usenet. We analyse over 2.15 million messages from 659,450 posters, collected from 500 newsgroups over 6 months. We first characterise mass interaction, presenting basic data about demographics, conversational strategies and interactivity. Using predictions from the common ground [3] model of interaction, we next conduct causal modelling to determine relations between demographics, conversational strategies and interactivity. We find evidence for moderate conversational threading, but large participation inequalities in Usenet, with a small minority of participants posting a large proportion of messages. Contrary to the common ground model and "Netiquette" guidelines [8,10] we also find that "cross-post...
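The participation inequality reported here can be summarized with a simple statistic such as the share of messages contributed by the most active fraction of posters. The sketch below computes that statistic on made-up counts; it is not the paper's measure or its data.

```python
# Sketch: share of messages contributed by the top fraction of posters.
from collections import Counter

posts_by_author = Counter({"a": 120, "b": 40, "c": 5, "d": 3, "e": 2})  # made-up counts

def top_share(counts: Counter, fraction: float = 0.2) -> float:
    totals = sorted(counts.values(), reverse=True)
    k = max(1, int(len(totals) * fraction))
    return sum(totals[:k]) / sum(totals)

print(round(top_share(posts_by_author), 2))   # top 20% of posters -> 0.71 of messages
```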
Chapter
Of all Internet services, Usenet (Salzberg, 1998; Spencer, 1998) is probably the least understood and — arguably — the most intriguing. Often confused or conflated with the Internet and predating the Web by more than a decade, Usenet is a logical network that employs the Internet and other physical networks as transport mechanisms. The network is internally structured into tens of thousands of topically organized forums, called newsgroups. Like electronic mail, Usenet is asynchronous; messages and replies to messages do not appear immediately. Usenet is designed so that a message, once posted to a local service, propagates through the network so that all or most servers soon contain a copy, but this process may take several hours to complete. The result is an active deliberative system of global reach and undeniable importance. Usenet cannot be ignored by anyone attempting to understand the nature of social information spaces.
Article
The large-scale adoption of computer-mediated communication technologies has resulted in what has been described as "mass interaction": shared discourse between hundreds, thousands, or more individuals. A number of theoretical papers have argued that, because of various technological and psychological constraints, the forms that mass interaction takes can partly be understood in terms of system dynamics. In particular, it has been suggested that user information overload results in non-linear feedback loops that impact discourse structure. This paper describes an empirical examination of three hypothesized effects of such loops through the analysis of 2.65 million USENET messages posted to 600 newsgroups over a 6-month period. Statistical analysis of the data demonstrated the existence of the hypothesized effects and supports the assertion that individual 'information overload' coping strategies have an observable impact on mass interaction discourse dynamics. This in turn suggests that the usability of computer-mediated communication technologies can be examined in terms of group-level usability.
Article
Tapestry is predicated on the belief that information filtering can be more effective when humans are involved in the filtering process. It was designed to support both content-based filtering and collaborative filtering, which entails people collaborating to help each other perform filtering by recording their reactions to documents they read. The reactions are called annotations; they can be accessed by other people's filters. Tapestry is intended to handle any incoming stream of electronic documents and serves both as a mail filter and repository; its components are the indexer, document store, annotation store, filterer, little box, remailer, appraiser, and reader/browser. Tapestry's client/server architecture, its various components, and the Tapestry query language are described.
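A minimal sketch of the annotation-based filtering idea described above: people record reactions to documents, and other people's filters select documents by those annotations. The data structures and reaction labels are assumptions for illustration; Tapestry's actual query language (TQL) and architecture are not reproduced.

```python
# Sketch of annotation-based (collaborative) filtering: select documents by
# how a particular person reacted to them.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: int
    text: str
    annotations: dict = field(default_factory=dict)   # annotator -> reaction

docs = [Document(1, "budget memo"), Document(2, "weekly digest")]
docs[0].annotations["terry"] = "useful"
docs[1].annotations["terry"] = "skip"

def filter_docs(documents, annotator: str, reaction: str):
    """Return documents a given person reacted to in a given way."""
    return [d for d in documents if d.annotations.get(annotator) == reaction]

print([d.doc_id for d in filter_docs(docs, "terry", "useful")])   # [1]
```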
Article
The author analyzes a sequential decision model in which each decisionmaker looks at the decisions made by previous decisionmakers in taking her own decision. This is rational for her because these other decisionmakers may have some information that is important for her. The author then shows that the decision rules that are chosen by optimizing individuals will be characterized by herd behavior; i.e., people will be doing what others are doing rather than using their information. The author then shows that the resulting equilibrium is inefficient. Copyright 1992, the President and Fellows of Harvard College and the Massachusetts Institute of Technology.
Article
Collaborative filters help people make choices based on the opinions of other people. GroupLens is a system for collaborative filtering of netnews, to help people find articles they will like in the huge stream of available articles. News reader clients display predicted scores and make it easy for users to rate articles after they read them. Rating servers, called Better Bit Bureaus, gather and disseminate the ratings. The rating servers predict scores based on the heuristic that people who agreed in the past will probably agree again. Users can protect their privacy by entering ratings under a pseudonym, without reducing the effectiveness of the score prediction. The entire architecture is open: alternative software for news clients and Better Bit Bureaus can be developed independently and can interoperate with the components we have developed.
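The prediction heuristic described above ("people who agreed in the past will probably agree again") can be sketched as a user-user collaborative filter that weights neighbors by Pearson correlation over co-rated articles. This is an illustration of the idea, not GroupLens's actual implementation; the rating data is made up.

```python
# User-user collaborative filtering sketch: predict a rating as the target's
# mean plus a correlation-weighted average of neighbors' deviations.
from statistics import mean, correlation   # correlation: Python 3.10+

ratings = {   # user -> {article: score on a 1..5 scale}; hypothetical data
    "u1": {"a": 5, "b": 1, "c": 4},
    "u2": {"a": 4, "b": 2, "c": 5, "d": 5},
    "u3": {"a": 1, "b": 5, "c": 2, "d": 1},
}

def predict(target: str, article: str) -> float:
    num, den = 0.0, 0.0
    t_mean = mean(ratings[target].values())
    for other, other_ratings in ratings.items():
        if other == target or article not in other_ratings:
            continue
        common = [a for a in ratings[target] if a in other_ratings]
        if len(common) < 2:
            continue
        w = correlation([ratings[target][a] for a in common],
                        [other_ratings[a] for a in common])
        num += w * (other_ratings[article] - mean(other_ratings.values()))
        den += abs(w)
    return t_mean if den == 0 else t_mean + num / den

print(round(predict("u1", "d"), 2))   # high, since u1 agrees with u2 and disagrees with u3
```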
Article
Recent developments in computer networks have driven the cost of distributing information virtually to zero, creating extraordinary opportunities for sharing product evaluations. We present pricing and subsidy mechanisms that operate through a computerized market and induce the efficient provision of evaluations. The mechanisms overcome three major challenges: first, evaluations, which are public goods, are likely to be underprovided; second, an inefficient ordering of evaluators may arise; third, the optimal quantity of evaluations depends on what is learned from the initial evaluations. Keywords: evaluations, information sharing, product quality, computer network, market (JEL D70, D83, H41, L15) 2 Subjective evaluations by others are a valuable tool for consumers who are choosing which products to buy or how to spend their time. For example, we read magazines devoted to product evaluation before purchasing cars and appliances. We ask our friends and read reviews by professional cr...
Article
We consider the problems of societal norms for cooperation and reputation when it is possible to obtain "cheap pseudonyms", something which is becoming quite common in a wide variety of interactions on the Internet. This introduces opportunities to misbehave without paying reputational consequences. A large degree of cooperation can still emerge, through a convention in which newcomers "pay their dues" by accepting poor treatment from players who have established positive reputations. One might hope for an open society where newcomers are treated well, but there is an inherent social cost in making the spread of reputations optional. We prove that no equilibrium can sustain significantly more cooperation than the dues-paying equilibrium in a repeated random matching game with a large number of players in which players have finite lives and the ability to change their identities, and there is a small but nonvanishing probability of mistakes. Although one could remove the ine...
Article
This paper describes a technique for making personalized recommendations from any type of database to a user based on similarities between the interest profile of that user and those of other users. In particular, we discuss the implementation of a networked system called Ringo, which makes personalized recommendations for music albums and artists. Ringo's database of users and artists grows dynamically as more people use the system and enter more information. Four different algorithms for making recommendations by using social information filtering were tested and compared. We present quantitative and qualitative results obtained from the use of Ringo by more than 2000 people. KEYWORDS: social information filtering, personalized recommendation systems, user modeling, information retrieval, intelligent systems, CSCW. INTRODUCTION Recent years have seen the explosive growth of the sheer volume of information. The number of books, movies, news, advertisements, and in particular on-lin...
Article
The Internet and World Wide Web have brought us into a world of endless possibilities: interactive Web sites to experience, music to listen to, conversations to participate in, and every conceivable consumer item to order. But this world also is one of endless choice: how can we select from a huge universe of items of widely varying quality? Computational recommender systems have emerged to address this issue. They enable people to share their opinions and benefit from each other's experience. We present a framework for understanding recommender systems and survey a number of distinct approaches in terms of this framework. We also suggest two main research challenges: (1) helping people form communities of interest while respecting personal privacy, and (2) developing algorithms that combine multiple types of information to compute recommendations. In HCI In The New Millennium, Jack Carroll, ed., Addison-Wesley, 2001.
GroupLens: an open architecture for collaborative filtering of netnews
  • P Resnick
Resnick, P., et al. GroupLens: an open architecture for collaborative filtering of netnews. In ACM conference on Computer Supported Cooperative Work. 1994. Chapel Hill, NC.