Fig 5 - uploaded by Renkai Ma
Multi-dimensional notion of fairness in YouTube moderation.


Source publication
Conference Paper
How social media platforms can fairly conduct content moderation is gaining attention from society at large. Researchers in HCI and CSCW have investigated whether certain factors affect how users perceive moderation decisions as fair or unfair. However, little attention has been paid to unpacking or elaborating on the formation processes...

Contexts in source publication

Context 1
... fairness on YouTube thus presents itself as a multi-dimensional notion, as summarized in Figure 5: • First, it has a temporal dimension in terms of the stage at which YouTubers invoked the fairness of moderation. Participants could invoke the concept of fairness when they had just received moderation decisions or penalties, when they sought resources to repair them, when they tracked their channels' later performance, or when they observed that later videos with entirely normal statuses underperformed in metrics. ...
Context 2
... inconsistent moderation decisions, we further highlighted how moderation fairness on YouTube demonstrated temporal dimensions of moderation, provoking our participants' perceptions of unfairness. Such perceptions arose when participants had moderation experiences of varying quality across these processes (e.g., observing ripple impacts, handling moderation), as shown in Figure 5. ...

Citations

... Beyond the purpose of content moderation, which regulates the appropriateness of creative content, creator moderation consists of multiple governance mechanisms managing content creators' visibility [3,4,8], identity [4], revenue [5,7], labor [7], and more. Given the platformization and monetization of creative labor [10,19,27], video-sharing platforms like YouTube and TikTok tend to practice creator moderation through an assemblage of various algorithms (e.g., monetization, content moderation, and recommendation algorithms) [23]. Creators may correspondingly experience moderation such as demonetization [7,22] or shadowbans [3,29]. ...
... So, creators may correspondingly experience moderation such as demonetization [7,22] or shadowbans [3,29]. However, as interest in the CSCW community grows in understanding creators' moderation experiences (e.g., [22,23]), relatively little attention has been paid to unpacking or elaborating on the bureaucratic traits of creator moderation, so we ask: How do content creators navigate bureaucracy in creator moderation? In this poster, when not specifying particular algorithms such as recommendation algorithms, we holistically refer to the algorithms conducting creator moderation as moderation algorithms. ...
Poster
Recent HCI studies have recognized an analogy between bureaucracy and algorithmic systems. Given the platformization of content creators, video-sharing platforms like YouTube and TikTok practice creator moderation, i.e., an assemblage of algorithms that manage not only creators’ content but also their income, visibility, identities, and more. However, it is not yet fully understood how bureaucracy manifests in creator moderation. In this poster, we present an interview study with 28 YouTubers (i.e., video content creators) analyzing the bureaucracy of creator moderation from their moderation experiences. We found that participants wrestled with bureaucracy in the form of multiple obstructions to re-examining moderation decisions, coercion to appease different algorithms in creator moderation, and the platform’s indifference to participants’ labor. We discuss and contribute a conceptual understanding of how algorithmic and organizational bureaucracy intertwine in creator moderation, laying solid ground for our future study.