“I am not a YouTuber who can make whatever video I want. I have
to keep appeasing algorithms”: Bureaucracy of Creator
Moderation on YouTube
Renkai Ma
The Pennsylvania State University
renkai@psu.edu
Yubo Kou
The Pennsylvania State University
yubokou@psu.edu
ABSTRACT
Recent HCI studies have recognized an analogy between bureaucracy and algorithmic systems; given the platformization of content creators, video sharing platforms like YouTube and TikTok practice creator moderation, i.e., an assemblage of algorithms that manage not only creators' content but also their income, visibility, identities, and more. However, it has not been fully understood how bureaucracy manifests in creator moderation. In this poster, we present an interview study with 28 YouTubers (i.e., video content creators) to analyze the bureaucracy of creator moderation from their moderation experiences. We found participants wrestled with bureaucracy as multiple obstructions in re-examining moderation decisions, coercion to appease different algorithms in creator moderation, and the platform's indifference to participants' labor. We discuss and contribute a conceptual understanding of how algorithmic and organizational bureaucracy intertwine in creator moderation, laying a solid ground for our future study.
CCS CONCEPTS
Human-centered computing; Collaborative and social computing; Empirical studies in collaborative and social computing.
KEYWORDS
creator moderation, content moderation, algorithmic moderation,
YouTuber
ACM Reference Format:
Renkai Ma and Yubo Kou. 2022. “I am not a YouTuber who can make whatever video I want. I have to keep appeasing algorithms”: Bureaucracy of Creator Moderation on YouTube. In Companion Computer Supported Cooperative Work and Social Computing (CSCW'22 Companion), November 08–22, 2022, Virtual Event, Taiwan. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3500868.3559445
1 INTRODUCTION
Recent HCI studies have shown bureaucratic traits of algorithmic systems. Rooted in organization science, the notion of bureaucracy refers to rule-governed procedures that are inflexible in adapting decision-making processes to novel cases [9]. HCI researchers have demonstrated how bureaucracy is embedded in algorithms that support or automate decision-making. For example, on Amazon Mechanical Turk, quality assessment algorithms would not assign new tasks to workers if their past work did not match the gold-standard solutions in the algorithms' training dataset [2, 17].

This work is partially supported by NSF grant No. 2006854.
This analogy between algorithmic systems and bureaucracy also exists in creator moderation. Beyond the purpose of content moderation that regulates the appropriateness of creative content, creator moderation consists of multiple governance mechanisms managing content creators' visibility [3, 4, 8], identity [4], revenue [5, 7], labor [7], and more. Given the platformization and monetization of creative labor [10, 19, 27], video sharing platforms like YouTube and TikTok tend to practice creator moderation through an assemblage of various algorithms (e.g., monetization, content moderation, recommendation algorithms, and more) [23]. Creators may correspondingly experience moderation such as demonetization [7, 22] or shadowbans [3, 29]. However, as interest in the CSCW community grows in understanding creators' moderation experiences (e.g., [22, 23]), relatively little attention has been paid to unpacking or elaborating on the bureaucratic traits of creator moderation, so we ask: How do content creators navigate bureaucracy in creator moderation? In this poster, when not specifying particular algorithms such as recommendation algorithms, we refer to the algorithms conducting creator moderation holistically as moderation algorithms.
To answer the research question, we interviewed 28 YouTubers who had experienced creator moderation. Through an inductive qualitative analysis [21, 31], we identified three primary ways that YouTubers wrestled with bureaucracy: (1) multiple obstructions prevented our participants from appealing moderation decisions; (2) participants felt they were coerced to appease different types of algorithms; (3) participants thought their labor was undervalued by the platform.
We discuss how bureaucracy is embedded in creator moderation procedures on YouTube, which shares similar traits with not only algorithmic bureaucracy but also the organizational bureaucracy of online community governance. Organizational bureaucracy refers to privileged moderation decision-making for some users over others. One such example appears on Wikipedia regarding the exclusion of content contributed by editors. It entails role-based goals of maintaining an encyclopedia by assigning hierarchical roles to users such as site owners, Sichter (i.e., longstanding users), voluntary users, or unregistered ones [6, 25], where longstanding members are exempt from review when contributing content [20]. We show that our participants first experienced algorithmic bureaucracy through moderation decisions and then organizational bureaucracy when appealing those decisions. This exploratory study aims to offer HCI and CSCW research a conceptual understanding of how algorithmic and organizational bureaucracy intertwine in creator moderation.
Table 1: Participant proles. Subscription # (fanbase) was collected on the date of the interviews. Status was identied by
YouTubers themselves by the time spent on creating videos. Career refers to how long YouTubers consistently make videos for
their primary channel. Category refers to content category, which is dened by YouTube. “N/A” means the information that
our participants chose not to disclose.
# Sub # Age Status Nationality Race Gender Career Category
P1 25.8k 18 part-time US White Male 5 months Games
P2 21.3k 23 full-time US White Male 5 years Games
P3 6.6k 40 part-time England White Male 3 years Travel
P4 52k 28 part-time US Black Female 6 years People
P5 4.33k 19 part-time England White Male 5 years Technology
P6 268k 29 full-time US White Male 9 years Animation
P7 84.7k 29 full-time US White Male 3 years Games
P8 177k 32 part-time US White Male 3.5 years History
P9 365k 28 full-time Germany White Male 2 years Entertainment
P10 23.1k 38 part-time Mexico Hispanic Female 2.5 years Education
P11 292k 29 part-time Brazil White Female 12 years Entertainment
P12 2.02k 21 part-time England White Male 2.5 years Education
P13 124k 19 full-time US Hispanic Male 4 years Entertainment
P14 88.6k 28 part-time Colombia Hispanic Male 2 years Education
P15 12.6k 29 part-time Mexico Hispanic Male 6 years Education
P16 35.5k 29 part-time Mexico Hispanic Female 4 years Technology
P17 5.7k 21 part-time US N/A Male 8 years Entertainment
P18 26.8k 29 part-time Mexico Hispanic Female 3 years Education
P19 53.9k 32 part-time Mexico Hispanic Female 3 years Technology
P20 8.8k 18 part-time US White Male 2 years Games
P21 497k 25 full-time US N/A Male 2 years Entertainment
P22 230k 22 part-time US White Male 7 years Animation
P23 31.3k 48 part-time Colombia Latino Male 5 years Education
P24 63.2k 31 part-time US White Male 5 years History
P25 52k 48 part-time US White Male 3 years Film
P26 5.51k 27 part-time US Asian Female 1 year Entertainment
P27 60.6k 55 full-time US White Male 2 years Technology
P28 21.4k 23 full-time Denmark Mixed Male 6 months Entertainment
2 METHODS: DATA COLLECTION AND ANALYSIS
After approval from our institution's Institutional Review Board (IRB), we interviewed 28 YouTubers (see Table 1) to answer our research question. We used purposeful sampling [31] with the recruitment criteria that participants must be over 18 years old and have experienced creator moderation (e.g., "limited ads" [35], copyright claims [5, 18], or other types). We shared the recruitment information on Twitter, Facebook, and Reddit, and interviewed the 28 YouTubers from January to October 2021. Every participant was compensated with a $20 gift card, although three proactively declined the compensation.
All interviews were conducted through Zoom using a semi-structured interview protocol involving two sections. After we received verbal consent from each participant, we asked (1) warm-up questions, such as demographics and YouTube usage questions (e.g., frequency of publishing videos), and then (2) moderation experience questions, such as what moderation decisions and explanations they received and how they were impacted, along with probes, i.e., follow-up questions.
We conducted an inductive thematic analysis of our interview dataset [21]. This procedure unfolded in three steps. First, two coders separately ran 'open coding,' screening the data and assigning codes to excerpts that could answer our research question. Then, the coders conducted 'axial coding' by combining the open codes into themes. Finally, the data analysis ended with 'selective coding,' where relevant themes were grouped into overarching themes.
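To illustrate the structure of this three-step procedure, the sketch below shows how open codes, axial themes, and overarching (selective) themes can be organized and traced back to excerpts. It is a minimal illustration only: the excerpts are paraphrased from quotes reported in this poster, while the code and theme labels are hypothetical stand-ins rather than our actual codebook.

```python
# Minimal sketch of the three-step inductive coding structure.
# Excerpts are paraphrased from the poster; code/theme labels are hypothetical.
from collections import defaultdict

# Step 1: open coding -- coders attach codes to interview excerpts.
open_codes = {
    "P7: told to reapply to the partner program in 30 days": ["waiting period", "unclear rules"],
    "P24: community leaders reached out to their YouTube contacts": ["privileged access"],
    "P13: made longer videos because that's what the algorithms like": ["adapting to algorithms"],
}

# Step 2: axial coding -- related open codes are combined into themes.
axial_themes = {
    "inefficient appeal process": {"waiting period", "unclear rules"},
    "hierarchical communication": {"privileged access"},
    "appeasing algorithms": {"adapting to algorithms"},
}

# Step 3: selective coding -- themes are grouped into overarching themes.
overarching_themes = {
    "obstruction in layers": {"inefficient appeal process", "hierarchical communication"},
    "coercion to appease algorithms": {"appeasing algorithms"},
}

# Trace each overarching theme back to the excerpts that support it.
support = defaultdict(list)
for overarching, themes in overarching_themes.items():
    codes = set().union(*(axial_themes[t] for t in themes))
    for excerpt, excerpt_codes in open_codes.items():
        if codes & set(excerpt_codes):
            support[overarching].append(excerpt)

for overarching, excerpts in support.items():
    print(f"{overarching}: {len(excerpts)} supporting excerpt(s)")
```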
3 FINDINGS
We found three aspects of bureaucracy in creator moderation: (1) multiple obstacles to appealing moderation decisions, (2) coercion to appease different algorithms, and (3) the platform's indifference to participants' individual labor.
3.1 Obstruction in Layers
Obstruction in layers refers to the multiple obstacles that our participants encountered when appealing moderation decisions. After the issuance of such decisions, YouTubers could request human reviewers' re-examination of them (e.g., contacting creator support [36], requesting a review [37]). This appeared to be a straightforward procedure, but our participants experienced it as indirect and layered. They observed a hierarchy in terms of who could actually reach human reviewers, and they were then instructed to take lengthy and confusing steps to achieve their goals.
3.1.1 Hierarchical Communication. Hierarchical communication means that the communication channels between YouTubers and the creator moderation system have varying qualities and efficiencies. For instance, P24 said:

In terms of tangible support for it (video deletion), there wasn't really much. It was more about the leadership of the Slack community (WeCreateEdu) [who] basically reached out to their contacts on YouTube, which [was] fairly high up in the content moderation area, and then it was just a matter of a waiting game. [P24]

WeCreateEdu is a group of YouTubers who create educational content on YouTube, and they had built an online community on Slack, a business communication platform. In the above case, P24 acknowledged a hierarchical feature of moderation, where YouTube maintained close contacts with organizers of a specific YouTuber community. Because existing appeal procedures did not help re-examine P24's video deletion punishment, he turned to the community's contacts for better solutions. However, P24 himself did not have such access.
Some participants had successfully contacted human reviewers through third-party platforms (e.g., Twitter), yet they witnessed disproportionate treatment compared with other YouTubers. For instance, P2 said:

(...) or a big YouTuber like [YouTuber A]; he's having a bunch of issues on his channel, and it has over 10 million subscribers. So, he went on back and forth, and you can probably look that (chatting history) up with the team YouTube. But if you follow team YouTube on Twitter, you can see all kinds of conversations that people [are] yelling at them because they don't do anything. They're really there for the elite people. [P2]

@TeamYouTube is the YouTube team's official Twitter account offering help to YouTubers. P2 witnessed that it offered much lengthier conversations to a YouTuber with a large fanbase (i.e., YouTuber A) than to small YouTubers. He thus assumed that fanbase size was critical for YouTube to offer better responses to requests for solving moderation issues.
3.1.2 Ineicient Process. Inecient process describes that partic-
ipants experienced appeal as complex and lengthy, resulting in
undue delays. After moderation algorithms issued decisions, some
participants chose to appeal through YouTube’s platform support
(e.g., creator support [
36
]). However, this did not mean participants
could eectively solve moderation issues as they wished. For ex-
ample, P7, a YouTuber who experienced channel demonetization,
said:
They (YouTube) want you to go through the forms
and the formulas. They had sent me one link, and it
said if you believe it was an error, you can reapply to
the YouTube partnership program in 30 days as long
as you’re not breaking any rules. But what rules am I
breaking in videos? There’s no information; it’s like a
wall. [P7]
Channel demonetization means a YouTuber’s whole channel be-
comes illegible to earn ad income from videos. In the above case,
P7 needed to wait a certain period of time to appeal such moder-
ation. However, without knowing what content rules he violated,
his appeal might fail to be initiated once he is deemed to violate
content rules again. Thus, extending prior work’s ndings that
users encountered the opacity of appeal on Reddit [
16
] or Insta-
gram [
11
], we found that on YouTube, the opacity of moderation
decision-making further became an obstacle of eectively initiating
appeals.
We also found that time was an important factor in appealing. P3 shared his experiences of appealing 'limited ads': "You can appeal demonetization, and then it's amazing [they] will take 27 or 28 days, and then they reject you. Or you're about to get to the 28th day, and you win because they haven't found [any]thing [problematic]." [P3] "Limited ads" means a video is not suitable for most advertisers, so its ad income will be decreased or removed. YouTube's content rules state that human review of 'limited ads' takes up to 7 days [37]. However, P3 waited longer than that for the moderation systems to process his appeals. His negative tone further expressed a desire for better efficiency from the moderation system, since time matters for monetization. As P19, who experienced "limited ads," elaborated, "We lost money because I need to wait for a response back from YouTube."
3.2 Appeasing Algorithms under Coercion
Appeasing algorithms under coercion refers to situations where participants felt coerced into appeasing different types of algorithms. Not only did participants experience the content moderation that prior work has primarily investigated (e.g., [13–15, 33]), but they also felt forced to negotiate for their content's existence, income, visibility, and more. Some participants unconditionally followed whatever decisions algorithms issued, as P7 described: "It (self-certification) will say congratulations, you're rating your videos accurately, and in future uploads, we will automatically monetize you depending on your own answers" [P7]. Self-certification is an algorithmic function through which a YouTuber self-reports whether a new video complies with different sections of the advertiser-friendly content guidelines before publishing it. P7's case showed that he actively contributed uncompensated labor so that the platform could better train its self-certification algorithms/functions. For another example, after a community guideline strike (i.e., a content warning), P13 described how he appeased monetization and recommendation algorithms:

I kind of do feel like I was targeted, especially after the false strike. (...) It hurt my motivation very much because it takes a while for my videos to get pushed (by recommendation algorithms), and I have to make longer videos that are more watch time intensive because that's what the algorithms like. [P13]

After the strike, P13 observed that YouTube's recommendation algorithms did not actively promote his videos, so his ad income decreased sharply. He assumed that longer videos could be more recommendable to the algorithms because they are more profitable: more mid-roll ads can be placed on a video that is longer than eight minutes [38]. Thus, P13 created longer videos to appease recommendation and monetization algorithms for more ad income.
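To make P13's folk reasoning concrete, the following back-of-the-envelope sketch estimates ad slots as a function of video length. It is illustrative only: the eight-minute mid-roll eligibility threshold comes from YouTube's documentation [38], but the mid-roll spacing and CPM values are hypothetical assumptions, not figures from YouTube or our participants.

```python
# Illustrative estimate of why longer videos can appear more profitable.
# The 8-minute mid-roll threshold is documented by YouTube [38]; the mid-roll
# spacing and CPM below are hypothetical assumptions for illustration only.

def estimated_ad_slots(length_min: float, midroll_spacing_min: float = 4.0) -> int:
    """One pre-roll ad, plus mid-rolls once a video passes eight minutes."""
    slots = 1  # pre-roll
    if length_min >= 8:
        slots += int(length_min // midroll_spacing_min)  # rough mid-roll count
    return slots

def estimated_income(length_min: float, views: int, cpm_usd: float = 2.0) -> float:
    """Very rough income estimate: ad slots * (views / 1000) * CPM."""
    return estimated_ad_slots(length_min) * (views / 1000) * cpm_usd

if __name__ == "__main__":
    for length in (6, 8, 12, 20):
        print(f"{length:>2}-minute video, 10k views -> ~${estimated_income(length, 10_000):.2f}")
```

Under these assumptions, a 12-minute video carries roughly four ad slots where a 6-minute video carries one, which mirrors the incentive P13 described.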
In contrast to participants like P13, however, some chose not to appease YouTube, and they experienced negative consequences to their channel performance. For instance, P8, a YouTuber who created history content and experienced 'limited ads,' said:

They have rendered certain topics off limits. (...) I tend to be a little brazen, but that hurts my channel to grow and to be seen. (...) YouTube is fostering a climate in which talking about the Holocaust is impossible. That's a problem. (...) These things (videos) were mostly derived from a lecture that I actually gave to students, and yet I am heavily discouraged from doing it on YouTube. [P8]

Advertiser-friendly content guidelines allow violent content "in a news, educational, artistic, or documentary context" [39]. P8 felt the moderation he received was unreasonable and believed that there were latent boundaries around discussing specific topics beyond the existing content rules. This case showed how the algorithmic systems of creator moderation on YouTube failed to adapt decision-making processes to novel cases. P8's insistence on creating Holocaust history content that he considered completely acceptable on YouTube showed that he refused to appease the platform. However, this decision caused negative impacts on his channel's monetization, visibility, and growth.
3.3 Indierence to Individual Labor
Indierence to individual labor refers to our participants’ com-
plaints that YouTube showed little care to the value of their indi-
vidual labor. Extending prior work that discusses labor structures
on YouTube [
7
,
27
], we further showed how YouTubers subjec-
tively sense YouTube’s indierent attitudes toward their labor. For
example, P2 said:
It (suspension) actually deleted all my money or mon-
etization for April, which was over 700,000 views for
my other videos; it was like a couple of hundred dol-
lars. (
. . .
) Because my channel got deleted on May 1,
and I hadn’t processed it by the 12th, I never saw that
money. And they never told me what happened to
that money. [P2]
YouTube processes monetization (e.g., ad income) from the 7th to
12th of each month for their prots generated in the last month. P2
did not receive the ad income generated from videos with all normal
monetization statuses due to account suspension. This inequality
in prot distribution indicated platform indierence to P2’s labor.
Furthermore, participants complained that demonetization revealed YouTube's lack of intent to help them monetize videos in common ways. For example, P25, a YouTuber who creates horror movie compilation videos, said:

It says limited ads, but a video like mine is tamer than an episode of The Walking Dead. The Walking Dead is on AMC, that's a basic cable network; they find many people to advertise on that show. So, I don't believe that YouTube can't find advertisers on horror film content or whatever [for me]. I think they just don't try. [P25]

P25's ad income was decreased because the moderation system deemed that his videos depicted violence and were not suitable for advertisers. P25 believed that his content could be friendly to certain categories of ads, so he complained about YouTube's limited endeavors to find matched advertisers for him. Although YouTube and AMC might have different standards in their content rules, P25's case showed his impression of YouTube's limited care for helping him re-monetize.
4 DISCUSSION & FUTURE WORK
Extending the prior work on experiences of content moderation (e.g., [14, 22, 24, 30, 33]), we showed how YouTubers experienced the bureaucracy of creator moderation. That is, bureaucracy is embedded in moderation procedures, sharing similar traits with both algorithmic bureaucracy and the organizational bureaucracy of online community governance.
Similar to how prior work discusses algorithmic bureaucracy [2, 26], this study found that moderation algorithms failed to contextualize our participants' video content for moderation decision-making. YouTube largely automates moderation decision-making through algorithms (e.g., machine learning) [1, 12, 28]. It remains questionable, however, how moderation algorithms can ensure that content rules, which are qualitative in nature (e.g., requiring judgment and discourse), are faithfully translated into identifying user content as unacceptable, especially given our participants' complaints about latent content rules.

Algorithmic bureaucracy on YouTube keeps expanding its scope and complexity, sometimes involving more uncompensated labor from YouTubers. An example is YouTube's "self-certification" function to improve algorithmic accuracy. Consequently, YouTubers like P7 willingly provided much uncompensated, voluntary work so that moderation algorithms, including monetization, recommendation, and more, could be trained better. This means that, instead of making algorithms more flexible for YouTubers, the platform makes YouTubers adapt to the inflexible algorithms.
Organizational bureaucracy [20, 25] in creator moderation on YouTube lies partly in its invisible hierarchy. Prior work has discussed the hierarchical ordering of users in moderation decision-making for articles on Wikipedia [25]. Similarly, our findings showed a hierarchy in appeal processes: some YouTubers can access certain human reviewers for privileged interpretational interactions and solutions to moderation issues. While hierarchical moderation decision-making on Wikipedia is somewhat visible to all community members, the hierarchy in creator moderation is largely invisible. Thus, YouTubers face a longer learning curve to figure out these complexities and intricacies.
Content policies serve to ratify such organizational bureaucracy. Researchers have criticized that existing content rules rarely consider the context of user content [32, 34], easily leading to algorithmic bureaucracy. We further showed that content rules acted as an obstacle: they provided excuses, such as the 30-day rule, to justify the perceived low efficiency of processing appeals, and prevented YouTubers from returning to earn ad income from time-sensitive content for a prolonged period.
Thus, the bureaucracy of creator moderation is both algorithmic and organizational: YouTubers first experience algorithmic bureaucracy upon receiving moderation decisions and then organizational bureaucracy when initiating appeals. Organizational bureaucracy works to exacerbate the repercussions of algorithmic bureaucracy. With lengthy, undue procedures ahead, our participants could only wait for algorithmic decisions to be re-examined by human reviewers. With limited power in such an unbalanced labor relation, participants like P13 acquired a mindset of "appeasing algorithms" or sacrificed personal preferences and interests to meet demands that they believed to be unreasonable.
This poster’s late-breaking ndings, how algorithmic bureau-
cracy and organizational bureaucracy intersect in creator moder-
ation, will inform our future studies. Building on these ndings,
we plan to study the complex relationships between bureaucratic
moderation systems and user communities, how user agency plays
a role in such relationships, and how platform policies could be
designed to prevent bureaucracy and support users in constructive
ways.
REFERENCES
[1] Julia Alexander. 2019. YouTube moderation bots punish videos tagged as 'gay' or 'lesbian,' study finds. The Verge. Retrieved from https://www.theverge.com/2019/9/30/20887614/youtube-moderation-lgbtq-demonetization-terms-words-nerd-city-investigation
[2] Ali Alkhatib and Michael Bernstein. 2019. Street-level algorithms: A theory at the gaps between policy and decisions. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), Association for Computing Machinery, New York, NY, USA, 1–13. DOI: https://doi.org/10.1145/3290605.3300760
[3] Carolina Are. 2021. The Shadowban Cycle: An autoethnography of pole dancing, nudity and censorship on Instagram. Fem. Media Stud. (2021). DOI: https://doi.org/10.1080/14680777.2021.1928259
[4] Sophie Bishop. 2018. Anxiety, panic and self-optimization: Inequalities and the YouTube algorithm. Converg. Int. J. Res. into New Media Technol. 24, 1 (2018), 69–84. DOI: https://doi.org/10.1177/1354856517736978
[5] Ragnhild Brøvig-Hanssen and Ellis Jones. 2021. Remix's retreat? Content moderation, copyright law and mashup music. New Media Soc. (June 2021). DOI: https://doi.org/10.1177/14614448211026059
[6] Brian Butler, Elisabeth Joyce, and Jacqueline Pike. 2008. Don't look now, but we've created a bureaucracy: The nature and roles of policies and rules in Wikipedia. Conf. Hum. Factors Comput. Syst. - Proc. (2008), 1101–1110. DOI: https://doi.org/10.1145/1357054.1357227
[7] Robyn Caplan and Tarleton Gillespie. 2020. Tiered Governance and Demonetization: The Shifting Terms of Labor and Compensation in the Platform Economy. Soc. Media + Soc. 6, 2 (2020). DOI: https://doi.org/10.1177/2056305120936636
[8] Kelley Cotter. 2021. "Shadowbanning is not a thing": Black box gaslighting and the power to independently know and credibly critique algorithms. Information, Commun. Soc. (2021). DOI: https://doi.org/10.1080/1369118X.2021.1994624
[9] Michel Crozier and Erhard Friedberg. 1964. The Bureaucratic Phenomenon. Routledge. DOI: https://doi.org/10.4324/9781315131092
[10] Brooke Erin Duffy. 2020. Algorithmic precarity in cultural work. Commun. Public 5, 3–4 (September 2020), 103–107. DOI: https://doi.org/10.1177/2057047320959855
[11] Jessica L. Feuston, Alex S. Taylor, and Anne Marie Piper. 2020. Conformity of Eating Disorders through Content Moderation. Proc. ACM Human-Computer Interact. 4, CSCW1 (May 2020). DOI: https://doi.org/10.1145/3392845
[12] Robert Gorwa, Reuben Binns, and Christian Katzenbach. 2020. Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data Soc. 7, 1 (January 2020). DOI: https://doi.org/10.1177/2053951719897945
[13] Oliver L. Haimson, Daniel Delmonaco, Peipei Nie, and Andrea Wegner. 2021. Disproportionate Removals and Differing Content Moderation Experiences for Conservative, Transgender, and Black Social Media Users: Marginalization and Moderation Gray Areas. Proc. ACM Human-Computer Interact. 5, CSCW2 (October 2021). DOI: https://doi.org/10.1145/3479610
[14] Shagun Jhaver, Darren Scott Appling, Eric Gilbert, and Amy Bruckman. 2019. "Did you suspect the post would be removed?": Understanding user reactions to content removals on Reddit. Proc. ACM Human-Computer Interact. 3, CSCW (November 2019), 1–33. DOI: https://doi.org/10.1145/3359294
[15] Shagun Jhaver, Amy Bruckman, and Eric Gilbert. 2019. Does transparency in moderation really matter?: User behavior after content removal explanations on Reddit. Proc. ACM Human-Computer Interact. 3, CSCW (2019). DOI: https://doi.org/10.1145/3359252
[16] Prerna Juneja, Deepika Rama Subramanian, and Tanushree Mitra. 2020. Through the looking glass: Study of transparency in Reddit's moderation practices. Proc. ACM Human-Computer Interact. 4, GROUP (January 2020), 1–35. DOI: https://doi.org/10.1145/3375197
[17] Sanjay Kairam and Jeffrey Heer. 2016. Parting Crowds: Characterizing divergent interpretations in crowdsourced annotation tasks. Proc. ACM Conf. Comput. Support. Coop. Work. CSCW (February 2016), 1637–1648. DOI: https://doi.org/10.1145/2818048.2820016
[18] D. Bondy Valdovinos Kaye and Joanne E. Gray. 2021. Copyright Gossip: Exploring Copyright Opinions, Theories, and Strategies on YouTube. Soc. Media + Soc. 7, 3 (August 2021). DOI: https://doi.org/10.1177/20563051211036940
[19] Susanne Kopf. 2020. "Rewarding Good Creators": Corporate Social Media Discourse on Monetization Schemes for Content Creators. Soc. Media + Soc. 6, 4 (October 2020). DOI: https://doi.org/10.1177/2056305120969877
[20] Paul B. de Laat. 2012. Coercion or empowerment? Moderation of content in Wikipedia as 'essentially contested' bureaucratic rules. Ethics Inf. Technol. 14, 2 (February 2012), 123–135. DOI: https://doi.org/10.1007/s10676-012-9289-7
[21] Ralph LaRossa. 2005. Grounded Theory Methods and Qualitative Family Research. J. Marriage Fam. 67, 4 (November 2005), 837–857. DOI: https://doi.org/10.1111/j.1741-3737.2005.00179.x
[22] Renkai Ma and Yubo Kou. 2021. "How advertiser-friendly is my video?": YouTuber's Socioeconomic Interactions with Algorithmic Content Moderation. PACM Hum. Comput. Interact. 5, CSCW2 (2021), 1–26. DOI: https://doi.org/10.1145/3479573
[23] Renkai Ma and Yubo Kou. 2022. "I'm not sure what difference is between their content and mine, other than the person itself": A Study of Fairness Perception of Content Moderation on YouTube. Proc. ACM Human-Computer Interact. 6, CSCW2 (2022), 28. DOI: https://doi.org/10.1145/3555150
[24] Sarah Myers West. 2018. Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media Soc. 20, 11 (2018), 4366–4383. DOI: https://doi.org/10.1177/1461444818773059
[25] Sabine Niederer and José van Dijck. 2010. Wisdom of the crowd or technicity of content? Wikipedia as a sociotechnical system. New Media Soc. 12, 8 (July 2010), 1368–1387. DOI: https://doi.org/10.1177/1461444810365297
[26] Juho Pääkkönen, Matti Nelimarkka, Jesse Haapoja, and Airi Lampinen. 2020. Bureaucracy as a Lens for Analyzing and Designing Algorithmic Systems. Conf. Hum. Factors Comput. Syst. - Proc. (April 2020). DOI: https://doi.org/10.1145/3313831.3376780
[27] Hector Postigo. 2016. The socio-technical architecture of digital labor: Converting play into YouTube money. New Media Soc. 18, 2 (2016), 332–349. DOI: https://doi.org/10.1177/1461444814541527
[28] Aja Romano. 2019. YouTubers claim the site systematically demonetizes LGBTQ content. Vox. Retrieved from https://www.vox.com/culture/2019/10/10/20893258/youtube-lgbtq-censorship-demonetization-nerd-city-algorithm-report
[29] Laura Savolainen. 2022. The shadow banning controversy: Perceived governance and algorithmic folklore. Media, Cult. Soc. (March 2022). DOI: https://doi.org/10.1177/01634437221077174
[30] Nicolas P. Suzor, Sarah Myers West, Andrew Quodling, and Jillian York. 2019. What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation. Int. J. Commun. 13 (2019). Retrieved from https://ijoc.org/index.php/ijoc/article/view/9736
[31] Sarah J. Tracy. 2013. Qualitative Research Methods: Collecting Evidence, Crafting Analysis.
[32] Rebecca Tushnet. 2019. Content Moderation in an Age of Extremes. Case West. Reserv. J. Law, Technol. Internet 10 (2019). Retrieved from https://heinonline.org/HOL/Page?handle=hein.journals/caswestres10&id=83&div=5&collection=journals
[33] Kristen Vaccaro, Christian Sandvig, and Karrie Karahalios. 2020. "At the End of the Day Facebook Does What It Wants": How Users Experience Contesting Algorithmic Content Moderation. Proc. ACM Human-Computer Interact., Association for Computing Machinery, 1–22. DOI: https://doi.org/10.1145/3415238
[34] Richard Ashby Wilson and Molly K. Land. 2021. Hate Speech on Social Media: Content Moderation in Context. Conn. Law Rev. (2021). Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3690616
[35] "Limited or no ads" explained. YouTube Help. Retrieved from https://support.google.com/youtube/answer/9269824?hl=en
[36] Get in touch with the YouTube Creator Support team. YouTube Help. Retrieved from https://support.google.com/youtube/answer/3545535?hl=en&co=GENIE.Platform%3DDesktop&oco=0#zippy=%2Cemail
[37] Request human review of videos marked "Not suitable for most advertisers." YouTube Help. Retrieved from https://support.google.com/youtube/answer/7083671?hl=en#zippy=%2Chow-monetization-status-is-applied
[38] Manage mid-roll ad breaks in long videos. YouTube Help. Retrieved from https://support.google.com/youtube/answer/6175006?hl=en#zippy=%2Cfrequently-asked-questions
[39] Advertiser-friendly content guidelines. YouTube Help. Retrieved from https://support.google.com/youtube/answer/6162278?hl=en#Adult&zippy=%2Cguide-to-self-certification