“I am not a YouTuber who can make whatever video I want. I have to keep appeasing algorithms”: Bureaucracy of Creator Moderation on YouTube

Renkai Ma, The Pennsylvania State University
Yubo Kou, The Pennsylvania State University
ABSTRACT
Recent HCI studies have recognized an analogy between bureaucracy and algorithmic systems; given the platformization of content creators, video sharing platforms like YouTube and TikTok practice creator moderation, i.e., an assemblage of algorithms that manage not only creators’ content but also their income, visibility, identities, and more. However, it is not yet fully understood how bureaucracy manifests in creator moderation. In this poster, we present an interview study with 28 YouTubers (i.e., video content creators) to analyze the bureaucracy of creator moderation from their moderation experiences. We found participants wrestled with bureaucracy as multiple obstructions in re-examining moderation decisions, coercion to appease different algorithms in creator moderation, and the platform’s indifference to participants’ labor. We discuss and contribute a conceptual understanding of how algorithmic and organizational bureaucracy intertwine in creator moderation, laying a solid ground for our future study.
CCS CONCEPTS
• Human-centered computing → Collaborative and social computing; Empirical studies in collaborative and social computing.

KEYWORDS
creator moderation, content moderation, algorithmic moderation
ACM Reference Format:
Renkai Ma and Yubo Kou. 2022. “I am not a YouTuber who can make whatever video I want. I have to keep appeasing algorithms”: Bureaucracy of Creator Moderation on YouTube. In Companion Computer Supported Cooperative Work and Social Computing (CSCW’22 Companion), November 08–22, 2022, Virtual Event, Taiwan. ACM, New York, NY, USA, 6 pages.
1 INTRODUCTION
Recent HCI studies have shown bureaucratic traits of algorithmic systems. Rooted in organization science, the notion of bureaucracy refers to rule-governed procedures that are inflexible in adapting decision-making processes to novel cases [ ]. HCI researchers have demonstrated how bureaucracy is embedded in algorithms that support or automate decision-making. For example, on Amazon Mechanical Turk, quality assessment algorithms would not assign new tasks to workers if their past work did not match the gold-standard solutions in the algorithms’ training dataset [2, 17].

This work is partially supported by NSF grant No. 2006854.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
CSCW’22 Companion, November 08–22, 2022, Virtual Event, Taiwan
© 2022 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-9190-0/22/11.
This analogy between algorithmic systems and bureaucracy also exists in creator moderation. Beyond the purpose of content moderation, which regulates the appropriateness of creative content, creator moderation consists of multiple governance mechanisms managing content creators’ visibility [ ], identity [ ], revenue [ ], labor [ ], and more. Given the platformization and monetization of creative labor [ ], video sharing platforms like YouTube and TikTok tend to practice creator moderation through an assemblage of various algorithms (e.g., monetization, content moderation, and recommendation algorithms) [ ]. Creators may correspondingly experience moderation such as demonetization [ ] or shadowban [ ]. However, as interest in the CSCW community grows in understanding creators’ moderation experiences (e.g., [ ]), relatively little attention has been paid to unpacking or elaborating on the bureaucratic traits of creator moderation, so we ask: How do content creators navigate bureaucracy in creator moderation? In this poster, we holistically refer to the algorithms conducting creator moderation as moderation algorithms when not specifying certain algorithms such as recommendation algorithms.
To answer the research question, we interviewed 28 YouTubers who had experienced creator moderation. Through an inductive qualitative analysis [ ], we identified three primary ways that YouTubers wrestled with bureaucracy: (1) multiple obstructions prevented our participants from appealing moderation decisions; (2) participants felt they were coerced to appease different types of algorithms; and (3) participants thought their labor was undervalued by the platform.
We discuss how bureaucracy is embedded in creator moderation procedures on YouTube, which share similar traits with not only algorithmic bureaucracy but also the organizational bureaucracy of online community governance. Organizational bureaucracy refers to moderation decision-making that privileges some users over others. One such example appears on Wikipedia regarding content exclusion for editors’ contributions. It entails role-based goals of maintaining an encyclopedia by assigning hierarchical roles to users such as site owners, Sichter (i.e., senior users), voluntary users, or unregistered ones [ ], where senior members are exempt from review when contributing content [ ]. We show that our participants first experienced algorithmic bureaucracy from moderation decisions and then organizational bureaucracy when appealing those decisions.
This exploratory study aims to contribute to the HCI and CSCW
Table 1: Participant profiles. Subscription # (fanbase) was collected on the date of the interviews. Status was identified by YouTubers themselves by the time spent on creating videos. Career refers to how long YouTubers have consistently made videos for their primary channel. Category refers to content category, which is defined by YouTube. “N/A” means information that our participants chose not to disclose.
# Sub # Age Status Nationality Race Gender Career Category
P1 25.8k 18 part-time US White Male 5 months Games
P2 21.3k 23 full-time US White Male 5 years Games
P3 6.6k 40 part-time England White Male 3 years Travel
P4 52k 28 part-time US Black Female 6 years People
P5 4.33k 19 part-time England White Male 5 years Technology
P6 268k 29 full-time US White Male 9 years Animation
P7 84.7k 29 full-time US White Male 3 years Games
P8 177k 32 part-time US White Male 3.5 years History
P9 365k 28 full-time Germany White Male 2 years Entertainment
P10 23.1k 38 part-time Mexico Hispanic Female 2.5 years Education
P11 292k 29 part-time Brazil White Female 12 years Entertainment
P12 2.02k 21 part-time England White Male 2.5 years Education
P13 124k 19 full-time US Hispanic Male 4 years Entertainment
P14 88.6k 28 part-time Colombia Hispanic Male 2 years Education
P15 12.6k 29 part-time Mexico Hispanic Male 6 years Education
P16 35.5k 29 part-time Mexico Hispanic Female 4 years Technology
P17 5.7k 21 part-time US N/A Male 8 years Entertainment
P18 26.8k 29 part-time Mexico Hispanic Female 3 years Education
P19 53.9k 32 part-time Mexico Hispanic Female 3 years Technology
P20 8.8k 18 part-time US White Male 2 years Games
P21 497k 25 full-time US N/A Male 2 years Entertainment
P22 230k 22 part-time US White Male 7 years Animation
P23 31.3k 48 part-time Colombia Latino Male 5 years Education
P24 63.2k 31 part-time US White Male 5 years History
P25 52k 48 part-time US White Male 3 years Film
P26 5.51k 27 part-time US Asian Female 1 year Entertainment
P27 60.6k 55 full-time US White Male 2 years Technology
P28 21.4k 23 full-time Denmark Mixed Male 6 months Entertainment
research with a conceptual understanding of how algorithmic and
organizational bureaucracy intertwine in creator moderation.
2 METHODS
After approval from our institution’s Institutional Review Board (IRB), we interviewed 28 YouTubers (see Table 1) to answer our research question. We used purposeful sampling [ ] with the recruitment criteria that participants must be over 18 years old and have experienced creator moderation (e.g., “limited ads” [ ], copyright claims [ ], or other types). We shared the recruitment information on Twitter, Facebook, and Reddit. From January to October 2021, we interviewed 28 YouTubers. Every participant was compensated with a $20 gift card, though three proactively refused compensation.
All interviews were conducted through Zoom using a semi-structured interview protocol involving two sections. After we received verbal consent from each participant, we asked (1) warm-up questions, such as demographics and YouTube usage questions (e.g., frequency of publishing videos), and then (2) moderation experience questions, such as what moderation and explanations they received and how they were impacted, along with probes (i.e., follow-up questions).
We used an inductive thematic analysis on our interview dataset [ ]. This procedure unfolded in three steps. First, two coders separately ran ‘open coding,’ screening the data and assigning codes to excerpts that could answer our research question. Then, the coders conducted ‘axial coding’ by combining open codes into themes. Finally, the data analysis ended with ‘selective coding,’ where relevant themes were grouped into overarching categories.
3 FINDINGS
We found three aspects of bureaucracy in creator moderation: (1) multiple obstacles to appealing moderation decisions, (2) coercion to appease different algorithms, and (3) the platform’s indifference to participants’ individual labor.
3.1 Obstruction in Layers
Obstruction in layers refers to the multiple obstacles that our participants encountered when appealing moderation decisions. After the issuance of such decisions, YouTubers could request human reviewers’ re-examination of them (e.g., contacting creator support [ ], requesting a review [ ]). This appeared to be a straightforward procedure, but our participants experienced it as indirect and layered. They observed a hierarchy in terms of who could actually reach human reviewers. They were then instructed to take lengthy and confusing steps to achieve their goals.
3.1.1 Hierarchical Communication. Hierarchical communication means that the communication channels between YouTubers and the creator moderation system have varying qualities and efficiencies. For instance, P24 said:

In terms of tangible support for it (video deletion), there wasn’t really much. It was more about the leadership of the Slack community (WeCreateEdu) [who] basically reached out to their contacts on YouTube, which [was] fairly high up in the content moderation area, and then it was just a matter of a waiting game.
WeCreateEdu is a group of YouTubers who create educational content on YouTube; they have built an online community on Slack, a business communication platform. In the above case, P24 acknowledged a hierarchical feature of moderation where YouTube maintained close contacts with organizers of a specific YouTuber community. Because existing appeal procedures did not help re-examine P24’s video deletion punishment, he turned to the community’s contacts for better solutions. However, P24 himself did not have such access.
Some participants had successfully contacted human reviewers through third-party platforms (e.g., Twitter), but they witnessed disproportionate conversations compared with other YouTubers. For instance, P2 said:

(. . .) or a big YouTuber like [YouTuber A;] he’s having a bunch of issues on his channel, and it has over 10 million subscribers. So, he went on back and forth, and you can probably look that (chatting history) up with the team YouTube. But if you follow team YouTube on Twitter, you can see all kinds of conversations that people [are] yelling at them because they don’t do anything. They’re really there for the elite people.
@TeamYouTube is YouTube’s official Twitter account offering help to YouTubers. P2 witnessed that it offered conversations of hierarchical length between a YouTuber with a larger fanbase (i.e., YouTuber A) and small YouTubers. He thus assumed fanbase size was critical for YouTube to offer better responses to requests for solving moderation issues.
3.1.2 Ineicient Process. Inecient process describes that partic-
ipants experienced appeal as complex and lengthy, resulting in
undue delays. After moderation algorithms issued decisions, some
participants chose to appeal through YouTube’s platform support
(e.g., creator support [
]). However, this did not mean participants
could eectively solve moderation issues as they wished. For ex-
ample, P7, a YouTuber who experienced channel demonetization,
They (YouTube) want you to go through the forms
and the formulas. They had sent me one link, and it
said if you believe it was an error, you can reapply to
the YouTube partnership program in 30 days as long
as you’re not breaking any rules. But what rules am I
breaking in videos? There’s no information; it’s like a
wall. [P7]
Channel demonetization means a YouTuber’s whole channel becomes ineligible to earn ad income from videos. In the above case, P7 needed to wait a certain period of time to appeal such moderation. However, without knowing what content rules he had violated, his appeal might fail once he was deemed to violate content rules again. Thus, extending prior work’s findings that users encountered the opacity of appeal on Reddit [ ] or Instagram [ ], we found that on YouTube, the opacity of moderation decision-making further became an obstacle to effectively initiating appeals.
We also found that time was an important factor in doing so. P3 shared his experiences of appealing ‘limited ads’: “You can appeal demonetization, and then it’s amazing [they] will take 27 or 28 days, and then they reject you. Or you’re about to get to the 28th day, and you win because they haven’t found [any]thing [problematic].” [P3]

“Limited ads” means a video is not suitable for most advertisers, so its ad income will be decreased or removed. YouTube’s content rules state that human review of ‘limited ads’ takes up to 7 days [ ]. However, P3 waited longer than that for moderation systems to process his appeals. His negative tone further expressed a desire for better efficiency in the moderation system, since time matters for monetization. As P19, who experienced “limited ads,” elaborated, “We lost money because I need to wait for a response back from. . .”
3.2 Appeasing Algorithms under Coercion
Appeasing algorithms under coercion refers to situations where participants felt coerced into appeasing different types of algorithms. Not only did participants experience the content moderation that prior work has primarily investigated (e.g., [ ]), but they also felt forced to negotiate for their content’s existence, income, visibility, and more. Some participants unconditionally followed whatever decisions algorithms issued, such as what P7 said: “It (self-certification) will say congratulations, you’re rating your videos accurately, and in future uploads, we will automatically monetize you depending on your own answers” [P7]. Self-certification is an algorithmic function for a YouTuber to self-report whether a new video complies with different sections of the advertiser-friendly content guidelines before publishing it. P7’s case showed he actively contributed his uncompensated labor for the platform to better train self-certification algorithms/functions. As another example, after a community guideline strike (i.e., a content warning), P13 described how he appeased monetization and recommendation algorithms:
I kind of do feel like I was targeted, especially after the false strike. (. . .) It hurt my motivation very much because it takes a while for my videos to get pushed (by recommendation algorithms), and I have to make longer videos that are more watch time intensive because that’s what the algorithms like. [P13]
After the strike, P13 observed YouTube’s recommendation algorithms did not actively promote his videos, so his ad income decreased sharply. He assumed that longer videos could be more recommendable to the algorithms because they are more profitable: more mid-roll ads can be placed on a video longer than eight minutes [ ]. Thus, P13 created longer videos to appease recommendation and monetization algorithms for more ad income.
However, some participants chose not to appease YouTube, and they experienced negative consequences for their channel performance. For instance, P8, a YouTuber who created history content and experienced ‘limited ads,’ said:

They have rendered certain topics off limits. (. . .) I tend to be a little brazen, but that hurts my channel to grow and to be seen. (. . .) YouTube is fostering a climate in which talking about the Holocaust is impossible. That’s a problem. (. . .) These things (videos) were mostly derived from a lecture that I actually gave to students, and yet I am heavily discouraged from doing it on YouTube. [P8]
Advertiser-friendly content guidelines allow violent content “in a news, educational, artistic, or documentary context” [ ]. P8 felt the moderation he received was unreasonable and believed that there were latent boundaries for discussing specific topics beyond existing content rules. This case showed how the algorithmic systems of creator moderation on YouTube failed to adapt decision-making processes to novel cases. P8’s insistence on creating Holocaust history content that he considered completely acceptable on YouTube showed that he refused to appease YouTube. However, this decision negatively impacted his channel’s monetization, visibility, and growth.
3.3 Indierence to Individual Labor
Indierence to individual labor refers to our participants’ com-
plaints that YouTube showed little care to the value of their indi-
vidual labor. Extending prior work that discusses labor structures
on YouTube [
], we further showed how YouTubers subjec-
tively sense YouTube’s indierent attitudes toward their labor. For
example, P2 said:
It (suspension) actually deleted all my money or mon-
etization for April, which was over 700,000 views for
my other videos; it was like a couple of hundred dol-
lars. (
. . .
) Because my channel got deleted on May 1,
and I hadn’t processed it by the 12th, I never saw that
money. And they never told me what happened to
that money. [P2]
YouTube processes monetization (e.g., ad income) from the 7th to the 12th of each month for profits generated in the previous month. P2 did not receive the ad income generated from videos with normal monetization statuses due to his account suspension. This inequality in profit distribution indicated the platform’s indifference to P2’s labor.
Furthermore, participants complained that demonetization revealed YouTube’s lack of intent to help them monetize videos in common ways. For example, P25, a YouTuber who creates horror movie compilation videos, said:

It says limited ads, but a video like mine is tamer than an episode of The Walking Dead. The Walking Dead is on AMC, that’s a basic cable network; they find many people to advertise on that show. So, I don’t believe that YouTube can’t find advertisers on horror film content or whatever [for me]. I think they just don’t try. [P25]
P25’s ad income was decreased because the moderation system deemed that his videos depicted violence and were not suitable for advertisers. P25 believed that his content could be friendly to certain categories of ads, so he complained about YouTube’s limited endeavors to help find matched advertisers for him. Although YouTube and AMC might have different standards in content rules, P25’s case showed his impression of YouTube’s limited care in helping him re-monetize.
4 DISCUSSION
Extending prior work on experiences of content moderation (e.g., [ ]), we showed how YouTubers experienced the bureaucracy of creator moderation. That is, bureaucracy is embedded in moderation procedures, sharing similar traits with both algorithmic bureaucracy and the organizational bureaucracy of online community governance.
Similar to how prior work discusses algorithmic bureaucracy [ ], this study found that moderation algorithms failed to contextualize our participants’ video content for moderation decision-making. YouTube largely automates moderation decision-making through algorithms (e.g., machine learning) [ ]. It remains questionable, however, how moderation algorithms can perfectly translate content rules, which are qualitative in nature (e.g., involving judgment and discourse), into identifying user content as unacceptable, especially given our participants’ complaints about latent content rules. Algorithmic bureaucracy on YouTube keeps expanding its scope and complexity, sometimes involving more uncompensated labor from YouTubers. An example is YouTube’s “self-certification” function to improve algorithmic accuracy. Consequently, YouTubers like P7 willingly provided much uncompensated, voluntary work so that moderation algorithms, including monetization, recommendation, and more, could be better trained. This means that, instead of making algorithms more flexible for YouTubers, the platform makes YouTubers adapt to the inflexible algorithms.
Organizational bureaucracy [ ] in creator moderation on YouTube lies partly in its invisible hierarchy. Prior work has discussed the hierarchical ordering of users in moderation decision-making for articles on Wikipedia [ ]. Similarly, our findings showed the hierarchy of appeal processes. Some YouTubers can access certain human reviewers for privileged interpretational interactions and solutions to moderation issues. While hierarchical moderation decision-making on Wikipedia is somewhat visible to all community members, the hierarchy in creator moderation is largely invisible. Thus, YouTubers have a longer learning curve to figure out these complexities and intricacies.
Content policies serve to ratify such organizational bureaucracy. Researchers have criticized that existing content rules rarely consider the context of user content [ ], easily leading to algorithmic bureaucracy. We further showed that content rules acted as an obstacle: they provided excuses, such as the 30-day rule, to justify the perceived low efficiency of processing appeals, and prevented YouTubers from returning to earn ad income from time-sensitive content for a prolonged period.
Thus, the bureaucracy of creator moderation is both algorithmic and organizational: YouTubers first experience algorithmic bureaucracy upon receiving moderation decisions and then organizational bureaucracy when initiating appeals. Organizational bureaucracy works to exacerbate the repercussions of algorithmic bureaucracy. Facing lengthy, undue procedures, our participants could only wait for algorithmic decisions to be re-examined by human reviewers. With limited power in such unbalanced labor relations, participants like P13 acquired a mindset of “appeasing algorithms” or sacrificed personal preferences and interests to meet demands that they believed to be unreasonable.
This poster’s late-breaking findings, i.e., how algorithmic bureaucracy and organizational bureaucracy intersect in creator moderation, will inform our future studies. Building on these findings, we plan to study the complex relationships between bureaucratic moderation systems and user communities, how user agency plays a role in such relationships, and how platform policies could be designed to prevent bureaucracy and support users in constructive ways.
REFERENCES
[1] Julia Alexander. 2019. YouTube moderation bots punish videos tagged as ‘gay’ or ‘lesbian,’ study finds. The Verge.
[2] Ali Alkhatib and Michael Bernstein. 2019. Street-level algorithms: A theory at the gaps between policy and decisions. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019). ACM, New York, NY, USA, 1–13.
[3] Carolina Are. 2021. The Shadowban Cycle: an autoethnography of pole dancing, nudity and censorship on Instagram. Feminist Media Studies (2021).
[4] Sophie Bishop. 2018. Anxiety, panic and self-optimization: Inequalities and the YouTube algorithm. Convergence: The International Journal of Research into New Media Technologies 24, 1 (2018), 69–84.
[5] Ragnhild Brøvig-Hanssen and Ellis Jones. 2021. Remix’s retreat? Content moderation, copyright law and mashup music. New Media & Society (June 2021).
[6] Brian Butler, Elisabeth Joyce, and Jacqueline Pike. 2008. Don’t look now, but we’ve created a bureaucracy: The nature and roles of policies and rules in Wikipedia. In CHI Conference on Human Factors in Computing Systems Proceedings (2008), 1101–1110.
[7] Robyn Caplan and Tarleton Gillespie. 2020. Tiered Governance and Demonetization: The Shifting Terms of Labor and Compensation in the Platform Economy. Social Media + Society 6, 2 (2020).
[8] Kelley Cotter. 2021. “Shadowbanning is not a thing”: black box gaslighting and the power to independently know and credibly critique algorithms. Information, Communication & Society (2021).
[9] Michel Crozier and Erhard Friedberg. 1964. The Bureaucratic Phenomenon. Routledge.
[10] Brooke Erin Duffy. 2020. Algorithmic precarity in cultural work. Communication and the Public 5, 3–4 (September 2020), 103–107.
[11] Jessica L. Feuston, Alex S. Taylor, and Anne Marie Piper. 2020. Conformity of Eating Disorders through Content Moderation. Proceedings of the ACM on Human-Computer Interaction 4, CSCW1 (May 2020).
[12] Robert Gorwa, Reuben Binns, and Christian Katzenbach. 2020. Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society 7, 1 (January 2020).
[13] Oliver L. Haimson, Daniel Delmonaco, Peipei Nie, and Andrea Wegner. 2021. Disproportionate Removals and Differing Content Moderation Experiences for Conservative, Transgender, and Black Social Media Users: Marginalization and Moderation Gray Areas. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 2021).
[14] Shagun Jhaver, Darren Scott Appling, Eric Gilbert, and Amy Bruckman. 2019. “Did you suspect the post would be removed?”: Understanding user reactions to content removals on Reddit. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (November 2019), 1–33.
[15] Shagun Jhaver, Amy Bruckman, and Eric Gilbert. 2019. Does transparency in moderation really matter?: User behavior after content removal explanations on Reddit. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019).
[16] Prerna Juneja, Deepika Rama Subramanian, and Tanushree Mitra. 2020. Through the looking glass: Study of transparency in Reddit’s moderation practices. Proceedings of the ACM on Human-Computer Interaction 4, GROUP (January 2020), 1–35.
[17] Sanjay Kairam and Jeffrey Heer. 2016. Parting Crowds: Characterizing divergent interpretations in crowdsourced annotation tasks. In Proceedings of the ACM Conference on Computer-Supported Cooperative Work (CSCW 2016), 1637–1648.
[18] D. Bondy Valdovinos Kaye and Joanne E. Gray. 2021. Copyright Gossip: Exploring Copyright Opinions, Theories, and Strategies on YouTube. Social Media + Society 7, 3 (August 2021).
[19] Susanne Kopf. 2020. “Rewarding Good Creators”: Corporate Social Media Discourse on Monetization Schemes for Content Creators. Social Media + Society 6, 4 (October 2020).
[20] Paul B. de Laat. 2012. Coercion or empowerment? Moderation of content in Wikipedia as ‘essentially contested’ bureaucratic rules. Ethics and Information Technology 14, 2 (February 2012), 123–135.
[21] Ralph LaRossa. 2005. Grounded Theory Methods and Qualitative Family Research. Journal of Marriage and Family 67, 4 (November 2005), 837–857.
[22] Renkai Ma and Yubo Kou. 2021. “How advertiser-friendly is my video?”: YouTuber’s Socioeconomic Interactions with Algorithmic Content Moderation. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (2021), 1–26.
[23] Renkai Ma and Yubo Kou. 2022. “I’m not sure what difference is between their content and mine, other than the person itself”: A Study of Fairness Perception of Content Moderation on YouTube. Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (2022), 28.
[24] Sarah Myers West. 2018. Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society 20, 11 (2018), 4366–4383.
[25] Sabine Niederer and José van Dijck. 2010. Wisdom of the crowd or technicity of content? Wikipedia as a sociotechnical system. New Media & Society 12, 8 (July 2010), 1368–1387.
[26] Juho Pääkkönen, Matti Nelimarkka, Jesse Haapoja, and Airi Lampinen. 2020. Bureaucracy as a Lens for Analyzing and Designing Algorithmic Systems. In CHI Conference on Human Factors in Computing Systems Proceedings (April 2020).
[27] Hector Postigo. 2016. The socio-technical architecture of digital labor: Converting play into YouTube money. New Media & Society 18, 2 (2016), 332–349.
[28] Aja Romano. 2019. YouTubers claim the site systematically demonetizes LGBTQ content. Vox.
[29] Laura Savolainen. 2022. The shadow banning controversy: perceived governance and algorithmic folklore. Media, Culture & Society (March 2022).
[30] Nicolas P. Suzor, Sarah Myers West, Andrew Quodling, and Jillian York. 2019. What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation. International Journal of Communication 13 (2019).
[31] Sarah J. Tracy. 2013. Qualitative Research Methods: Collecting Evidence, Crafting Analysis, Communicating Impact.
[32] Rebecca Tushnet. 2019. Content Moderation in an Age of Extremes. Case Western Reserve Journal of Law, Technology & the Internet 10 (2019).
[33] Kristen Vaccaro, Christian Sandvig, and Karrie Karahalios. 2020. “At the End of the Day Facebook Does What It Wants”: How Users Experience Contesting Algorithmic Content Moderation. Proceedings of the ACM on Human-Computer Interaction (2020), 1–22.
[34] Richard Ashby Wilson and Molly K. Land. 2021. Hate Speech on Social Media: Content Moderation in Context. Connecticut Law Review (2021).
[35] “Limited or no ads” explained. YouTube Help.
[36] Get in touch with the YouTube Creator Support team. YouTube Help.
[37] Request human review of videos marked “Not suitable for most advertisers.” YouTube Help.
[38] Manage mid-roll ad breaks in long videos. YouTube Help.
[39] Advertiser-friendly content guidelines. YouTube Help.
... Legal scholars have tended to measure the unfairness of content policies on social media platforms because the platforms scarcely consider the context of user content (e.g., online community culture and norms) [110]. In more recent HCI research, given the sheer volume of user content and platforms' increasing reliance on algorithms (e.g., machine learning) in moderation [38,45], CSCW researchers have found that various groups of end-users, such as gender and sexual minority people [41,109], content creators [66], or players in competitive games [60], experience opaque algorithmic moderation decisions, and many users perceive such decisions as hard to re-examine through appeal procedures [67,108,109]. To gain a comprehensive understanding of how researchers across various disciplines conceptualize moderation experiences and to facilitate the cross-pollination of ideas, we proposed a third research question: ...
... Notably, deviant behaviors are diverse both at a conceptual level, with the four categories we discussed in Section 3.1, and at an operational level, with novel types of problematic content such as adult materials and terrorist content [72,100,107]. Such a plural taxonomy of deviant user behaviors further implies the extensiveness of platform policies, which take into account not only users who generate content but also other stakeholders, such as advertisers [12,58,67], who might be impacted by problematic user content. ...
... Such deficiency in moderation design echoes the design claims stressed by prior researchers such as Kraut & Resnick and colleagues [63], including consistent moderation criteria/standards, more chances to appeal moderation decisions, and moderation decision-making conducted by online communities with rotating power. If online platforms took these design claims into account when designing moderation algorithms, moderated users would not encounter algorithmic moderation decisions that conflict with content rules [55,68] or lengthy procedures for appealing those decisions [67,73,109], as recent researchers have uncovered. ...
Conference Paper
Full-text available
Researchers across various fields have investigated how users experience moderation through different perspectives and methodologies. At present, there is a pressing need to synthesize and extract key insights from prior literature to formulate a systematic understanding of what constitutes a moderation experience and to explore how such an understanding could further inform moderation-related research and practices. To this end, we conducted a systematic literature review (SLR) of 42 empirical studies related to moderation experiences and published between January 2016 and March 2022. We describe these studies’ characteristics and how they characterize users’ moderation experiences. We further identify five primary perspectives that prior researchers use to conceptualize moderation experiences. These findings not only suggest an expansive scope of research interests in understanding moderation experiences and in considering moderated users as an important stakeholder group for reflecting on current moderation design, but also point to the dominance of a punitive, solutionist logic in moderation and to ample implications for future moderation research, design, and practice.
... Importantly, these non-game contexts are a unique type of online community in nature, and thus fundamentally different from game contexts in several ways. These users, like many social media users, might complain that moderation decision-making is opaque or unfair (e.g., [26,73,91]), but their behaviors are mostly presented as user-generated content (e.g., videos on YouTube [69], textual content on Reddit [44,47], or audio on Discord [50]) in a relatively static way. That means either human moderators or moderation algorithms could find ways (e.g., hash/keyword matching or classification [38]) to identify such content and legitimately adjudicate whether it violates content policies [12,45,79], and moderation decision-makers could further notify users of what they have been accused of [47,98]. ...
... Punished players' diverse needs for justice, when considered together with prior findings on punished users' needs for justice in other game-related and non-gaming contexts (e.g., [44,69,99]), raise a critical question for general moderation research and practice: how moderation design could conceive of punished users as an important stakeholder group. From the platform's perspective, punished players are deemed offenders who violate platform policies (e.g., the code of conduct). ...
... Notably, as our findings showed that games did not explain well what players were accused of and why, these punished players would be socially stigmatized with the label or stereotype of toxic players or offenders [56]. However, like many other social media users, players might encounter hardships in contesting punishment decisions [69,98,99] and in establishing on their own whether the punishments are legitimate [79]. Thus, users are less motivated to put effort into clearing their name if they perceive the punishment decision-making as lacking in justice. ...
Conference Paper
Full-text available
Multiplayer online games seek to address toxic behaviors such as trolling and griefing through behavior moderation, where penalties such as chat restriction or account suspension are issued against toxic players in the hope that punishments create a teachable moment for punished players to reflect and improve future behavior. While punishments impact player experience (PX) in profound ways, little is known regarding how players experience behavior moderation. In this study, we conducted a survey of 291 players to understand their experiences with punishments in online multiplayer games. Through several statistical analyses, we found that moderation explanation plays a critical role in improving players’ perceived transparency and fairness of moderation; and these perceptions significantly affect what players do after punishments. We discuss moderation experience as an important facet of PX, bridge the game and moderation literature, and provide design implications for behavior moderation in multiplayer online games.
... Arriagada and Ibáñez [5] found that Chilean fashion and lifestyle Instagram creators adapt their creative practices to platform changes. Ma and Kou [64] reported how YouTube creators negotiated with the platform to appeal moderation decisions such as video removal. ...
Conference Paper
Full-text available
Metaverse platforms such as Roblox have become increasingly popular and profitable through a business model that relies on their end users to create and interact with user-generated virtual worlds (UGVWs). However, UGVWs are difficult to moderate, because game design is inherently more complex than static content such as text and images; and Roblox, a game platform targeted primarily at child players, is notorious for harmful user-generated games such as Nazi roleplay games and gambling-like mechanisms. To develop a better understanding of how harmful design is embedded in UGVWs, we conducted an empirical study of Roblox users' experiences with harmful design. We identified several primary ways in which user-generated game designs can be harmful, ranging from directly injecting inappropriate content into the virtual environment of UGVWs to embedding problematic incentive mechanisms into the UGVWs. We further discuss opportunities and challenges for mitigating harmful designs.
... Content creators' stake in moderation is categorically different from that of Reddit or Twitter users, as the former derive a livelihood from the platformization and monetization of their creative labor [39,42,52]. Thus, content creators like YouTubers in our study are economically incentivized to appease the creator moderation system [54], regardless of its complexity and opacity. This, in turn, renders the transparency issue of creator moderation even more pressing. ...
Full-text available
Transparency matters a lot to people who experience moderation on online platforms; much CSCW research has viewed offering explanations as one of the primary solutions to enhance moderation transparency. However, relatively little attention has been paid to unpacking what transparency entails in moderation design, especially for content creators. We interviewed 28 YouTubers to understand their moderation experiences and analyze the dimensions of moderation transparency. We identified four primary dimensions: participants desired the moderation system to present moderation decisions saliently, explain the decisions profoundly, afford communication with users effectively, and offer repair and learning opportunities. We discuss how these four dimensions are mutually constitutive and conditioned in the context of creator moderation, where the target of governance mechanisms extends beyond the content to creator careers. We then elaborate on how a dynamic transparency perspective could value content creators’ digital labor and how transparency design could support creators’ learning, as well as implications for the transparency design of other creator platforms.
Conference Paper
Full-text available
How social media platforms could fairly conduct content moderation is gaining attention from society at large. Researchers from HCI and CSCW have investigated whether certain factors could affect how users perceive moderation decisions as fair or unfair. However, little attention has been paid to unpacking or elaborating on the formation processes of users’ perceived (un)fairness from their moderation experiences, especially users who monetize their content. By interviewing 21 for-profit YouTubers (i.e., video content creators), we found three primary ways through which participants assess moderation fairness, including equality across their peers, consistency across moderation decisions and policies, and their voice in algorithmic visibility decision-making processes. Building upon the findings, we discuss how our participants’ fairness perceptions demonstrate a multi-dimensional notion of moderation fairness and how YouTube implements an algorithmic assemblage to moderate YouTubers. We derive translatable design considerations for a fairer moderation system on platforms affording creator monetization.
Full-text available
For all practical purposes, the policy of social media companies to suppress hate speech on their platforms means that the longstanding debate in the United States about whether to limit hate speech in the public square has been resolved in favor of vigorous regulation. Nonetheless, revisiting these debates provides insights essential for developing more empirically-based and narrowly tailored policies regarding online hate. First, a central issue in the hate speech debate is the extent to which hate speech contributes to violence. Those in favor of more robust regulation claim a connection to violence, while others dismiss these arguments as tenuous. The data generated by social media, however, now allow researchers to empirically test whether there are measurable harms resulting from hate speech. These data can assist in formulating evidence-based policies to address the most significant harms of hate speech, while avoiding overbroad regulation. Second, reexamining the U.S. debate about hate speech also reveals the serious missteps of social media policies that prohibit hate speech without regard to context. The policies that social media companies have developed define hate speech solely with respect to the content of the message. As the early advocates of limits on hate speech made clear, the meaning, force, and consequences of speech acts are deeply contextual, and it is impossible to understand the harms of hate speech without reference to political realities and power asymmetries. Regulation that is abstracted from context will inevitably be overbroad. This Article revisits these debates and considers how they map onto the platform law of content moderation, where emerging evidence indicates a correlation between hate speech online, virulent nationalism, and violence against minorities and activists. 
It concludes by advocating specific recommendations to bring greater consideration of context into the speech-regulation policies and procedures of social media companies.
Full-text available
Efforts to govern algorithms have centered the ‘black box problem,’ or the opacity of algorithms resulting from corporate secrecy and technical complexity. In this article, I conceptualize a related and equally fundamental challenge for governance efforts: black box gaslighting. Black box gaslighting captures how platforms may leverage perceptions of their epistemic authority on their algorithms to undermine users’ confidence in what they know about algorithms and destabilize credible criticism. I explicate the concept of black box gaslighting through a case study of the ‘shadowbanning’ dispute within the Instagram influencer community, drawing on interviews with influencers (n = 17) and online discourse materials (e.g., social media posts, blog posts, videos, etc.). I argue that black box gaslighting presents a formidable deterrent for those seeking accountability: an epistemic contest over the legitimacy of critiques in which platforms hold the upper hand. At the same time, I suggest we must be mindful of the partial nature of platforms’ claim to ‘the truth,’ as well as the value of user understandings of algorithms.
Full-text available
Social media sites use content moderation to attempt to cultivate safe spaces with accurate information for their users. However, content moderation decisions may not be applied equally for all types of users, and may lead to disproportionate censorship related to people's genders, races, or political orientations. We conducted a mixed methods study involving qualitative and quantitative analysis of survey data to understand which types of social media users have content and accounts removed more frequently than others, what types of content and accounts are removed, and how content removed may differ between groups. We found that three groups of social media users in our dataset experienced content and account removals more often than others: political conservatives, transgender people, and Black people. However, the types of content removed from each group varied substantially. Conservative participants' removed content included content that was offensive or allegedly so, misinformation, Covid-related, adult, or hate speech. Transgender participants' content was often removed as adult despite following site guidelines, critical of a dominant group (e.g., men, white people), or specifically related to transgender or queer issues. Black participants' removed content was frequently related to racial justice or racism. More broadly, conservative participants' removals often involved harmful content removed according to site guidelines to create safe spaces with accurate information, while transgender and Black participants' removals often involved content related to expressing their marginalized identities that was removed despite following site policies or fell into content moderation gray areas. We discuss potential ways forward to make content moderation more equitable for marginalized social media users, such as embracing and designing specifically for content moderation gray areas.
Conference Paper
Full-text available
To manage user-generated harmful video content, YouTube relies on AI algorithms (e.g., machine learning) in content moderation and follows a retributive justice logic to punish convicted YouTubers through demonetization, a penalty that limits or deprives them of advertisements (ads), reducing their future ad income. Moderation research is burgeoning in CSCW, but relatively little attention has been paid to the socioeconomic implications of YouTube's algorithmic moderation. Drawing from the lens of algorithmic labor, we describe how algorithmic moderation shapes YouTubers' labor conditions through algorithmic opacity and precarity. YouTubers coped with such challenges from algorithmic moderation by sharing and applying practical knowledge they learned about moderation algorithms. By analyzing video content creation as algorithmic labor, we unpack the socioeconomic implications of algorithmic moderation and point to necessary post-punishment support as a form of restorative justice. Lastly, we put forward design considerations for algorithmic moderation systems.
Full-text available
This study investigates copyright discourses on YouTube. Through a qualitative content analysis of 144 YouTube videos, we explore how YouTube creators understand copyright law, how they minimize risks posed by copyright infringement, and how they navigate a highly technical and dynamic copyright enforcement ecosystem. Our findings offer insights into how digitally situated cultural producers are impacted by and respond to automated content moderation. This is important because increasingly lawmakers around the world are asking digital platforms to implement efficient systems for content moderation, and yet there is a lack of good information about the stakeholders most directly impacted by these practices. In this study, we present a systematic analysis of copyright gossip, building on the concept of algorithmic gossip, which comprises the opinions, theories, and strategies of creators who are affected by YouTube’s copyright enforcement systems.
Full-text available
Many online media platforms currently utilise algorithmically driven content moderation to prevent copyright infringement. This article explores content moderation’s effect on mashup music – a form of remix which relies primarily on the unauthorised combining of pre-existing, recognisable recordings. Drawing on interviews (n = 30) and an online survey (n = 92) with mashup producers, we show that content moderation affects producers’ creative decisions and distribution strategies, and has a strong negative effect on their overall motivation to create mashups. The objections that producers hold to this state of affairs often strongly resonate with current copyright exceptions. However, we argue that these exceptions, which form a legal ‘grey zone’, are currently unsatisfactorily accommodated for by platforms. Platforms’ political-economic power allows them, in effect, to ‘occupy’ and control this zone. Consequently, the practical efficacy of copyright law’s exceptions in this setting is significantly reduced.
Full-text available
This paper contributes to the social media moderation research space by examining the still under-researched “shadowban”, a form of light and secret censorship targeting what Instagram defines as borderline content, particularly affecting posts depicting women’s bodies, nudity and sexuality. “Shadowban” is a user-generated term given to the platform’s “vaguely inappropriate content” policy, which hides users’ posts from its Explore page, dramatically reducing their visibility. While research has already focused on algorithmic bias and on social media moderation, there are not, at present, studies on how Instagram’s shadowban works. This autoethnographic exploration of the shadowban provides insights into how it manifests from a user’s perspective, applying a risk society framework to Instagram’s moderation of pole dancing content to show how the platform’s preventive measures are affecting user rights.
In this paper, I approach platform governance through algorithmic folklore, consisting of beliefs and narratives about moderation systems that are passed on informally and can exist in tension with official accounts. More specifically, I analyse user discussions on ‘shadow banning’, a controversial, potentially non-existent form of content moderation on popular social media platforms. I argue that discursive mobilisations of the term can act as a methodological entry point to studying the shifting grounds and emerging logics of algorithmic governance, not necessarily in terms of the actual practices themselves, but in terms of its experiential dimension that, in turn, indicates broader modalities and relationalities of control. Based on my analysis of the user discussions, I argue that the constitutive logics of social media platforms increasingly seem to run counter to the values of good governance, such as clarity and stability of norms, and consistency of enforcement. This is reflected in how users struggle, desperately, to form expectations about system operation and police themselves according to perceived rules, yet are left in a state of dependency and frustration, unable to take hold of their digital futures.
Interest has grown in designing algorithmic decision making systems for contestability. In this work, we study how users experience contesting unfavorable social media content moderation decisions. A large-scale online experiment tests whether different forms of appeals can improve users' experiences of automated decision making. We study the impact on users' perceptions of the Fairness, Accountability, and Trustworthiness of algorithmic decisions, as well as their feelings of Control (FACT). Surprisingly, we find that none of the appeal designs improve FACT perceptions compared to a no-appeal baseline. We qualitatively analyze how users write appeals, and find that they contest not only the decision itself but also more fundamental issues, such as the goal of moderating content, the idea of automation, and the inconsistency of the system as a whole. We conclude with suggestions for designing for contestability, as well as a discussion of its challenges.