Labeling in the Dark: Exploring Content Creators’ and
Consumers’ Experiences with Content Classification for Child
Safety on YouTube
Renkai Ma
College of Information Sciences and Technology,
Pennsylvania State University
USA
renkai@psu.edu
Zinan Zhang
College of Information Sciences and Technology,
Pennsylvania State University
USA
zzinan@psu.edu
Xinning Gui
College of Information Sciences and Technology,
Pennsylvania State University
USA
xinninggui@psu.edu
Yubo Kou
College of Information Sciences and Technology,
Pennsylvania State University
USA
yubokou@psu.edu
ABSTRACT
Protecting children's online privacy is paramount. Online platforms seek to enhance child privacy protection by implementing new classification systems into their content moderation practices. One prominent example is YouTube's "made for kids" (MFK) classification. However, traditional content moderation focuses on managing content rather than users' privacy; little is known about how users experience these classification systems. Thematically analyzing online discussions about YouTube's MFK classification system, we present a case study on content creators' and consumers' experiences. We found that creators and consumers perceived MFK classification as misaligned with their actual practices, creators encountered unexpected consequences of practicing labeling, and creators and consumers identified MFK classification's intersections with other platform designs. Our findings shed light on an interwoven network of multiple classification systems that extends the original focus on child privacy to encompass broader child safety issues; these insights contribute to the design principles of child-centered safety within this intricate network.
CCS CONCEPTS
• Human-centered computing → Empirical studies in collaborative and social computing.
KEYWORDS
COPPA, content creation, child privacy protection, content creator,
child safety
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
DIS '24, July 1–5, 2024, IT University of Copenhagen, Denmark
© 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 979-8-4007-0583-0/24/07
https://doi.org/10.1145/3643834.3661565
ACM Reference Format:
Renkai Ma, Zinan Zhang, Xinning Gui, and Yubo Kou. 2024. Labeling in
the Dark: Exploring Content Creators’ and Consumers’ Experiences with
Content Classication for Child Safety on YouTube. In Designing Interactive
Systems Conference (DIS ’24), July 1–5, 2024, IT University of Copenhagen,
Denmark. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/
3643834.3661565
1 INTRODUCTION
Protecting children's online privacy is paramount. Notable laws, such as the Children's Online Privacy Protection Act (COPPA) in the US [24] and the General Data Protection Regulation (GDPR) in Europe [39], underscore this commitment. The emphasis on safeguarding children's online privacy stems from their inherent vulnerabilities and limited capacity to understand the risks of sharing personal information (e.g., name, browsing habits), including threats such as online harassment and cyberbullying [2, 89], as well as the inappropriate commercial use of children's information for targeted advertising [41], which COPPA and GDPR aim to mitigate. As these risks evolve due to technical advancements, such as recommendation algorithms [59] and advertising technologies [34], particularly with the growing popularity of social media, legal scholars have called for a more nuanced, modernized update of COPPA (e.g., [37, 67, 74]). Meanwhile, HCI researchers (e.g., [2, 41, 82, 98]) have striven to design a secure online environment for youth and minors.
To enhance child privacy protection, several online platforms in the US have implemented new classification systems in their existing content moderation practices. Content moderation refers to the organized practice of screening user-generated content to determine its appropriateness for a certain platform [76]. The changes in moderation practices, including the incorporation of new classification systems, emerged in response to fines for violating COPPA. In 2019, TikTok was fined $5.7 million by the Federal Trade Commission (FTC) for violating COPPA [26]. In response, TikTok launched "TikTok for Younger Users," a restricted version that shows algorithmically curated content and restricts minors from generating public videos or comments [83]. Similarly, after a $170 million FTC fine [25], YouTube launched the "made for kids" (MFK) classification system [91], which requires content creators to classify videos, disabling consumers' data tracking [92]. Specifically, MFK videos restrict consumers from making comments and impact creator content's monetization performance, a type of moderation termed "demonetization" [21, 53, 60].
However, traditional content moderation practices focus on managing content [76] rather than user privacy. While moderation practices contribute to general child safety by classifying content as inappropriate for children (e.g., [3, 5, 70]), relatively little research examines how these moderation practices directly contribute to child privacy protection, especially preventing online platforms from collecting children's personal information. Also, social media platforms implement child privacy laws like COPPA [24] by restricting content creation and consumption (e.g., [83, 92]). However, prior work (e.g., [2, 8, 55, 82, 89]) primarily focuses on the roles of parents, adolescents, and children in child privacy on social media, leaving a gap in understanding the roles of creators and consumers. Given these research gaps, we chose YouTube's MFK classification as a case study to explore creators' and consumers' experiences with the implementation of child privacy protection. YouTube has been one of the most popular platforms among children [38] and, after receiving one of the heaviest FTC fines for violating COPPA, has made some of the most notable initiatives in child privacy protection design, such as MFK classification [91, 92] and the YouTube Kids app, compared with other platforms like TikTok [26, 83] and Facebook Messenger [27]. So, we ask: How do content creators and consumers experience the MFK classification system on YouTube?
Guided by our research question, we used reflexive thematic analysis to qualitatively analyze relevant online discussions posted in YouTube-related communities on Reddit. Creators and consumers observed misalignment between MFK classification and their actual practices, resulting in false positives and false negatives of MFK classification. Creators experienced unintended consequences when manually practicing MFK classification. Both creators and consumers further observed that MFK classification intersected with other platform designs, especially classification systems, through coordination, inconsistency, and conflict. Drawing from and extending Bowker and Star's classification theories [16], we discuss how our findings indicate an interwoven network of classification systems that extends the MFK classification's original focus of child privacy to a broader issue of child safety (e.g., inappropriate content and advertisements). We thus put forward the design principles of child-centered safety in such an interwoven network of classification systems.
Our study utilizes social media discussions to delve into creators' and consumers' experiences associated with YouTube's MFK classification system, reflecting on how it shapes child safety. Rather than cataloging an exhaustive account of creators' motivations or how they use the MFK system to protect child privacy, our study prioritizes a practical lens to reveal a wide range of user reactions, strategies, and challenges underlying their experiences with MFK classification, as YouTube and the FTC require creators to directly implement child privacy protection through the MFK classification [25, 92]. This is vital for understanding the intricate ways creators contribute to child safety on the platform. Our findings also highlight the practical application of the MFK system and its influence on content consumption, thereby underpinning the necessity for policy and platform design to authentically reflect the dynamics among creators, consumers, and the platform. Our exploratory study can thus contribute to the HCI and designing interactive systems (DIS) communities with four insights:
We oer an empirical account of content creators’ and con-
sumers’ experiences with the designs of child privacy protec-
tion on commercial and social media platforms like YouTube,
enriching existing HCI literature on children’s online privacy
(e.g., [55, 82, 89, 98]).
We show how social media and commercial platforms like
YouTube leverage content moderation and creators’ classi-
cation practices for child privacy protection, deepening HCI
literature concerning moderation practices (e.g., [
48
,
50
,
79
])
and content creators’ interactions with moderation designs
(e.g., [49, 53, 62]).
Drawing from and expanding on Bowker and Star’s classi-
cation theories [
16
], we show an interwoven network of
classication systems on YouTube and its broad eects on
child safety beyond the original focus of child privacy.
In such an interwoven classication network, we lay out
actionable design principles of child-centered safety on plat-
forms like YouTube.
2 BACKGROUND: COPPA AND “MADE FOR
KIDS” (MFK) CLASSIFICATION SYSTEM ON
YOUTUBE
COPPA [24] is a US federal law established in 1998 to protect the privacy of children under 13, requiring US-based websites or services to obtain parental consent before collecting children's personal information, such as full names, home addresses, email addresses, IP addresses, behavioral data, photos, and recordings [9, 24].
To comply with COPPA, YouTube has carried out several measures over time. In 2008, it implemented an age-restriction classification to limit specific videos to viewers over 18. It introduced the YouTube Kids app/platform in 2015, providing a curated environment for kids [4]. In 2019, after a $170 million FTC fine for violating COPPA [25], YouTube launched "made for kids" (MFK) [91] on January 6, 2020. This requires all creators on the platform to manually classify their videos as MFK or non-MFK [92], with guidelines summarized in Figure 1 and the classification interface shown in Figure 2. YouTube uses machine learning algorithms to label all unclassified videos or override creators' classifications and claims it disables consumers' data tracking on MFK videos to enhance child privacy [91, 92, 94].
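To make the binary nature of this labeling task concrete, the toy sketch below is our own illustration (not YouTube's actual decision logic): it simply encodes the two top-level criteria the policy asks creators to weigh, whereas in practice the judgment is far more nuanced, as Section 5.2 shows.

# Toy R sketch (ours, not YouTube's logic): a video is treated as MFK if children
# are its primary audience, or if it is otherwise directed at children
# (e.g., through child-oriented themes, characters, or toys).
is_mfk <- function(children_are_primary_audience, directed_at_children) {
  children_are_primary_audience || directed_at_children
}

is_mfk(children_are_primary_audience = FALSE, directed_at_children = TRUE)  # TRUE: label as MFK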
3 RELATED WORK
We discuss two literature groups: children's online privacy and classification in content moderation. We discuss how the former distinguishes between interpersonal and commercial dimensions of children's online privacy. We further introduce Bowker and Star's classification theories and discuss how they help understand the classification in content moderation. This sets a solid ground for us to explore users' experiences with YouTube's MFK classification.
Figure 1: MFK policy on how to decide whether a video is MFK or non-MFK [91].
Figure 2: Screenshot of how creators manually classify a video as MFK or non-MFK on the YouTube Studio dashboard [92].
3.1 The Protection of Children's Online Privacy
Protecting children's online privacy is challenging. Nissenbaum defines privacy as "a right to appropriate flow of personal information" [66]. While there is no simple definition of children's online privacy, several HCI studies (e.g., [55, 63, 97]) commonly mention that children interpret online privacy as recognizing the sensitivity of personal information and the need to share it selectively online. Children's increasing use of social media poses risks to their privacy. Internet safety researchers warn the public that children lack an understanding of privacy and tend to disclose too much information on social media [68], while parents might hold different perspectives on it. Researchers found that parents generally adopt a passive approach in mediating their children's device use, information sharing, and privacy education [55], but their involvement and concern increase when their children's data is collected by social media platforms [35]. For children, Ghosh et al. found that they disliked apps that were overly restrictive of their privacy, negatively impacting their relationships with parents, while children liked apps that supported self-regulation [40]. As stressed by Badillo-Urquiola et al. [10], children value personal agency and privacy rather than constant parental consent.
Under such diverse evidence concerning children's online privacy protection, two distinguishable perspectives have emerged in prior HCI research: interpersonal privacy and commercial privacy. First, interpersonal privacy describes how children's personal information is created, accessed, and multiplied through their social connections [59]. HCI researchers have focused on "sharenting," parents' practices of sharing personal information of their children online (e.g., [6-8, 15, 56, 65]). Amon et al. found that the size of parents' social networks positively affects their parental sharing frequency [8]. Ammari et al. found that parents negotiate with each other and extended family members to establish sharing boundaries on social media (e.g., whether people can see or share their children's information) [6]. This line of work shows that different social connections of children shape the flow of their personal information online.
Second, commercial privacy concerns how online platforms gather and analyze children's personal information for business purposes (e.g., [1, 64, 69]), such as targeted advertisements [11]. Recently, HCI researchers have started to explore how children understand their commercial privacy. Goray and Schoenebeck found that children have limited awareness of whether online advertisers collect their data and what types of personal information are retained by social media platforms [41]. Zhao et al. also found that while children can identify privacy risks of oversharing with their social connections online, they remain incapable of identifying online tracking and targeted advertisements [98]. Similarly, Sun et al. uncovered that children tend to characterize data tracking as operated by humans rather than analytic tools on social media platforms and are less likely to consider the platforms that collect and process their data as privacy threats [82].
Our study aims to understand the impacts of content creation and consumption on children's commercial privacy. In the US, social media platforms like YouTube [91] and TikTok [83] have implemented child privacy laws like COPPA [24] to limit content creation and consumption, mitigating the risks of collecting child creators' and consumers' personal information. However, relatively little is known about how content creators and consumers, as important stakeholders in child privacy protection, perceive or respond to these measures. This study aims to fill this research gap.
3.2 Classication Theory and Classication
Systems in Content Moderation
At the heart of content moderation is a classication task, where
platforms organize a vast array of user-generated content (UGC)
from hate speech to misinformation into categories dened in
platforms’ moderation policies (e.g., YouTube [96], Facebook [33])
to determine its appropriateness for the given platforms (Roberts,
2019). This classication is underscored by the moderation pro-
cesses discussed in prior literature (e.g., [
13
,
36
,
42
]), where mod-
eration policies establish the criteria for content classication to
identify and ag policy violations [
57
]. Platforms typically employ
human moderators for this classication task (Roberts, 2019) or
develop complex algorithms to detect and categorize policy vio-
lations [
45
]. These algorithmic systems, rened through learning
from prior moderated content, are then applied to detect new, non-
compliant content [
30
,
42
]. When moderation algorithms encounter
potential classication issues (e.g., inaccurate classication), hu-
man moderators are called upon for nal adjudication, typically
resulting in moderation decisions like content removal [
47
,
81
] or
account suspension [44, 85].
This systematic classication in moderation profoundly res-
onates with Bowker and Star’s classication theories [
16
]. Classi-
cation describes organizing things into categories, the metaphorical
or literal “boxes, which signicantly shape our experiences, knowl-
edge production, and social interactions. For example, Bowker and
DIS ’24, July 1–5, 2024, IT University of Copenhagen, Denmark Ma et al.
Star discussed an example of Dewey’s library scheme, which as-
signs classication numbers (e.g., index) to new books in a library
to allocate them to the appropriate location based on subjects. They
highlighted how it facilitates knowledge access and shapes our
cognition and understanding. While Bowker and Star mention that
such classication is ubiquitous, it is typically invisible and only
becomes noticeable when it breaks down.
Bowker and Star also stress that no classification system can fully capture the world's complexity [16]. Neither can moderation when it confronts the variety of content. HCI and legal scholars criticize that moderation policies across platforms lack granularity in defining the appropriateness of content [71, 88]. Similarly, Jhaver et al. found that moderators reassessed old moderation policies and articulated new ones to refine classification practices [48]. Existing moderation policies might fail to capture overly nuanced human behaviors, as stressed by Jiang et al. [50].
Bowker and Star’s classication theories thus oer a lens to bet-
ter understand moderation experience the experiences of those
subjected to moderation decisions. They conceptualized the notion
of torque to describe how classication systems inuence individ-
uals’ lived experiences “where the ‘time’ of the body and of the
multiple identities cannot be aligned with the ‘time’ of the clas-
sication system” [
16
]. In moderation, HCI research reects this
classication misalignment. Vaccaro et al. highlighted that Face-
book users felt moderation was inconsistently applied, leading to
undue account suspensions [
85
]. Similarly, Haimson et al. reported
that sexually minority individuals felt marginalized by the mis-
classications of their surgery content [
44
]. Ma and Kou found
that YouTube creators faced nancial frustration when algorithms
misclassied their gaming content as violent [62].
In this study, we apply Bowker and Star's classification theories to unpack the complexities of multiple classification systems that coexist with YouTube's child privacy protection implementation (e.g., MFK classification). While some prior work delves into moderation practices of classifying content as inappropriate for general child safety (e.g., [3, 5, 70]), relatively little research examines how these practices directly contribute to child privacy protection, especially preventing online platforms from collecting children's personal information. Additionally, prior HCI and CSCW literature (e.g., [36, 44, 62, 86]) has touched upon users' experiences with moderation, with a particular focus on content creators on YouTube [53, 61, 62]. However, the connection between these experiences and child privacy remains unexplored. Through the case of YouTube's MFK classification system, designed to align with child privacy laws [9, 24], we aim to reveal both creators' and consumers' experiences with this system. Recently, legal scholars [12, 23, 87] have suggested that YouTube creators might inadvertently misclassify their content, and business researchers [51] have found that child-directed content creators have reduced content quality due to the MFK classification. Given these concerns, it is necessary to empirically study the experiences of both creators and consumers with the MFK classification.
4 METHODS
4.1 Data Collection
In this study, we chose online discussion data from YouTube-specific subreddits as the data source for two reasons. First, analyzing online discussions from subreddits is a common approach for data collection in HCI research (e.g., [31, 58, 60]), especially when accessing targeted participants like content creators is challenging. Second, unlike interviews that rely on human memories, online discussions offer real-time insights into users' experiences as they share them online, aiding our understanding of their interactions with MFK classification. This data source enabled us to capture a broad view of experiences interacting with MFK, encompassing user reactions, perceptions, and challenges.
Thus, we chose relevant subreddits for data collection, including r/youtube, r/youtubers, r/newtubers, and r/partneredyoutube. We chose them because, after searching "YouTube" on Reddit, these subreddits had noticeably larger community sizes than others (1.1 million, 235 thousand, 328 thousand, and 57.6 thousand members, respectively), except for biased communities such as r/fuckYTCOPPA and r/BannedYouTube. Also, the four subreddits' self-descriptions are highly relevant to understanding both creators' and consumers' experiences. For example, r/youtube's "About Community" states it is for general discussions about YouTube, and the other three focus on content creation. Please note that we received approval from our institution's Institutional Review Board (IRB) before data collection.
Our data collection involved four steps. First, in February 2023, based on YouTube's moderation policies [91, 92] and the first author's domain knowledge about YouTube, we gathered a set of keywords, including "COPPA" and "made for kids," the two most saliently relevant terms, as well as "kids friendly," "child friendly," "kid oriented," "child oriented," "kid," and "parent," where the latter two were not strictly relevant but could potentially help fetch more data for analysis. Second, using these keywords, we fetched 1,819 threads and 54,020 associated comments from the four subreddits through the package 'RedditExtractoR' on R [75]. This R package helped search the keywords in titles, posted texts, or comments and returned the search results. We then skimmed all threads and selectively read the comments to examine the richness of the dataset and collect more keywords for further data collection. Third, we identified more keywords, including "Age 13," "FTC," "children," "children primary audience," and "MFK," leading to 745 more threads and 9,814 associated comments. The final dataset included 2,564 threads and 63,834 comments, stored and analyzed in Google Sheets, focusing on submission texts and comments. Fourth, since the FTC fined YouTube in September 2019 [25], we removed threads and comments posted before January 1, 2019. This resulted in 528 threads and 8,295 comments for data analysis (see column "Data posted after January 2019" in Table 1).
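The keyword search and date filtering described above can be approximated with a short script. The sketch below is our illustration rather than the authors' actual code; it assumes RedditExtractoR version 3 or later, where find_thread_urls() and get_thread_content() are the main entry points (exact function names, arguments, and returned columns depend on the package version), and it mirrors the keyword list and the January 2019 cutoff reported in this section.

# Illustrative R sketch (not the authors' pipeline).
library(RedditExtractoR)

subreddits <- c("youtube", "youtubers", "NewTubers", "PartneredYoutube")
keywords <- c("COPPA", "made for kids", "kids friendly", "child friendly",
              "kid oriented", "child oriented", "kid", "parent",
              "Age 13", "FTC", "children", "children primary audience", "MFK")

# Search every keyword in every subreddit and stack the resulting thread tables.
threads <- do.call(rbind, lapply(subreddits, function(sub) {
  do.call(rbind, lapply(keywords, function(kw) {
    find_thread_urls(keywords = kw, subreddit = sub, period = "all")
  }))
}))
threads <- unique(threads)

# Keep only threads posted on or after January 1, 2019 (the cutoff reported above).
threads <- threads[as.Date(threads$date_utc) >= as.Date("2019-01-01"), ]

# Fetch the associated comments for the retained threads.
comments <- get_thread_content(threads$url)$comments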
Before data analysis, there was one step of data preprocessing. We recognize that a platform like YouTube involves both content creation and consumption by users. However, for the purpose of this study, we needed to differentiate between creators and consumers to understand their respective behaviors and perspectives. Thus, we started by assuming all users on YouTube are content consumers. This was based on the understanding that using YouTube inherently requires users to consume content, even at the most basic level, such as reading video titles or understanding platform functionalities, before engaging with more substantial content like the videos themselves. Then, for content creators, we set a "creator self-disclosure" criterion: Posters, either thread or comment posters in the four subreddits, were categorized as creators if they clearly disclosed their creator identity by mentioning keywords like "my audience," "my channel," "my video," or other keywords that made it easy for us to identify their creator role. Otherwise, we considered the posters to be consumers. We acknowledge that this method may not perfectly capture all creators, especially those not explicitly mentioning their creator role. However, given the larger proportion of consumers in online communities and the need for a practical method of categorization, this approach provides a functional way to differentiate between the two roles for our analysis.
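A minimal sketch of this "creator self-disclosure" heuristic is shown below; it is our illustration rather than the authors' pipeline, and it assumes the collected posts already sit in a data frame (hypothetically named posts here) with a text column. The cue list contains only the keywords quoted above; the authors also relied on manual judgment for other self-disclosure cues.

# Illustrative R sketch: default every poster to "consumer" and flag posters
# whose text self-discloses a creator role via simple keyword matching.
creator_cues <- c("my audience", "my channel", "my video")  # cues quoted in the text above
pattern <- paste(creator_cues, collapse = "|")

posts$role <- ifelse(grepl(pattern, tolower(posts$text)), "creator", "consumer")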
4.2 Data Analysis
The research team applied reflexive thematic analysis (RTA) [18, 19] inductively to analyze the whole dataset. RTA is a "theoretically flexible method" for developing, analyzing, and interpreting patterns in qualitative data [18], incorporating researchers' experiences, pre-existing knowledge, and social positions to analyze the data critically.
Data analysis involved four steps. First, two researchers familiarized themselves with the threads dataset and resolved confusion about the contexts that the dataset mentioned (e.g., what "age-restriction" is, how MFK classification turns off video features such as commenting and playlists). Second, they individually screened the data and assigned initial codes in Google Sheets to represent ideas expressed in the dataset that could answer the research question. They held weekly meetings with two other senior researchers to discuss each code's relevance and correspondence with original quotations. The dataset included much data that were not directly related to MFK classification due to the wide range of keyword searching, so a critical criterion for deciding a data point's relevance for coding was whether it discussed MFK classification, such as how creators talked about their perceptions of and reactions to COPPA or the FTC and how they shared perspectives about these with others. For example, some creators shared knowledge of creator growth, such as "the advice about views/subs vs. watch time is solid," which is unrelated to our research question.
Third, the two researchers examined the relevance of the data for coding and, meanwhile, continually assigned initial codes to the dataset and conducted rounds of coding to identify what patterns (i.e., subset themes) were reflected by the initial codes. This process identified subset themes from the initial codes. For example, the researchers assigned the initial code, "MFK misaligned with parents' expected involvement, and they did not acknowledge MFK's labeling," to the quote, "I am a parent myself, and it is my job... It is up to me to know or accept that a company like Google might scrape data targeted at me and my kids." This code is conceptually related to other codes about how other content consumers consider MFK classification and its outcomes as misaligning with their collective understanding, so we grouped them together under one theme, "MFK misaligned with consumers' collective understanding and recognition," and further reported these similar codes together in Section 5.1, Misalignment between MFK Classification and Practice. The criterion for grouping initial codes into subset themes was whether multiple codes consistently appeared and shared underlying concepts that could answer our research question. For example, codes capturing user efforts to circumvent the MFK label and not treat content as MFK-labeled, such as "avoidance strategies for using restricted features" and "parental adjustments to content access," were consolidated under the theme of Section 5.1.2, User-Driven False Negatives of MFK. This theme reflects how users actively navigate around the MFK system's limitations to maintain their engagement with content in ways that defy its restrictions. For another example, we grouped codes describing creators' struggles with MFK classification, such as "uncertainty in content classification" and "challenges with vague guidelines," under the theme of Section 5.2, Unexpected Consequences of Practicing MFK Classification as Creators. This theme reflects how broad MFK policies complicate content management for creators.
In the last data analysis step, the research team continued assigning codes to the data and grouping codes into subset themes until theoretical saturation [43]. This was indicated by a high percentage of initial codes within more than half the volume of the dataset and a minimal increment of initial codes after screening and analyzing around 60% of the dataset (see column "Screened and analyzed data for theoretical saturation" in Table 1). In other words, the team reached the point where no particularly new codes or themes emerged from the dataset [43]. Last, in the weekly meetings, all four researchers consolidated similar subset themes into overarching themes and discarded subset themes deemed too thin without enough initial codes. Eventually, data analysis led to a thematic map that sufficiently answers the research question through the three findings sections, Sections 5.1 to 5.3. In reporting findings, we ensured our data's anonymity by removing YouTube channel names and paraphrasing the original quotes. Please also note that while Bowker and Star's classification theories provide a valuable conceptual framework for understanding classification systems on YouTube, they did not directly drive our data analysis.
4.3 Researcher Positionality
Our interpretation of YouTube creators' and consumers' experiences is based on our positionality [73], including social roles, intellectual history, and lived experience. The first two authors are amateur video content creators on YouTube, with the first author closely in touch with and frequently engaging with more than ten creators across content categories, such as gaming, animation, and beauty, for over three years. The other two authors, while not content creators themselves, are seasoned consumers of creators across education, gaming, and entertainment. Regarding intellectual history, the first and last authors have been researching content creators, creator-audience relationships, and YouTube since 2020, equipping the research team with rich domain knowledge. This combination of hands-on experience and academic insight positions us well to perform reflexive thematic analysis for this study.
Table 1: Data Preprocessing (Section 4.1) and Analysis for Theoretical Saturation (Section 4.2)

Data source          | Data posted after January 2019 (threads / comments) | Screened and analyzed data for theoretical saturation (threads / comments)
r/newtubers          | 168 / 1,915   | 14 / 347
r/partneredyoutube   | 25 / 210      | 6 / 85
r/youtube            | 245 / 5,396   | 39 / 4,423
r/youtubers          | 90 / 774      | 4 / 130
Total                | 528 / 8,295   | 63 / 4,985
5 FINDINGS
We found that YouTube creators and consumers perceived MFK policy enforcement as misaligned with their actual community practices (Section 5.1), creators shouldered the unexpected burden of MFK classification (Section 5.2), and both creators and audiences observed three types of intersection of MFK classification with other platform designs, including coordination, inconsistency, and conflict (Section 5.3).
5.1 Misalignment between MFK Classification and Practice
YouTube creators and consumers have observed a misalignment between MFK classification and actual practices, resulting in false positives and false negatives: (1) the MFK classification system incorrectly flags videos as suitable for children, and (2) MFK videos do not receive the intended level of recognition by creators and consumers.
5.1.1 Perceived False Positives of MFK Classification. False positives of MFK classification occur when videos classified as MFK do not match the two criteria set out by the MFK policies—namely, that the primary or intended audience is children [91, 92, 94]. One such discrepancy occurs when an MFK video should have been classified as age-restricted (i.e., for people over 18) [90] instead of MFK. A consumer posted:
I just found a video that is labeled as [made] for kids, and
yet it is age-restricted based on community guidelines.
It’s a Fritz the Cat episode where there’s a Nazi bunny,
and swastikas are shown a lot in it. There’s clearly a
bug in the AI that makes the AI fail to consider age
restriction status. (consumer; r/youtube)
This content consumer perceived that the MFK video should have been classified as age-restricted because the video contained "Jojo Rabbit," a comedy about a German boy who imagines his friend is Hitler. The video thus had intense violence, death, and anti-Semitism and was not appropriate for children to watch, as the above consumer perceived.
Besides, creators themselves might produce false positives, as evidenced by one who admitted:
This whole "made for kids" thing is so dumb. I've uploaded Overwatch futa porn and marked it as "made for kids," but nothing has happened to me. (creator; r/youtube)
The video posted by the creator above contained adult-oriented material from the video game "Overwatch," which should not have been marked as MFK. They further stressed the gap in the oversight mechanisms of YouTube's MFK classification in correcting creators' mislabeling.
When inappropriate content is classified as MFK, some consumers intuitively blame creators. For example, a consumer commented when a news video of a mass shooting was classified as MFK:
100% on the uploader. They either checked the wrong box, or they blanketed their entire channel as for kids (which would be really stupid for news channels to do). (consumer; r/youtube)
This consumer stressed two ways of avoiding false positives: (1) videos within one YouTube channel need attentive classification from creators, and (2) the MFK classification should not automatically apply MFK tags to all new videos, even when a creator sets their whole channel as MFK.
However, creators observe that algorithms cannot make nuanced classifications of their videos. A creator posted:
I keep getting YouTube setting my ESL language videos specifically targeted at teens and young adults to MADE FOR KIDS... my channel is targeted at kids, yes, (...) but basically, my channel is 80 percent made for kids, and 20 percent not made for kids. (creator; r/youtube)
This case underscored the limitations of YouTube's algorithms in discerning nuanced consumer targets, leading to false positives. This case also shows how algorithms undermined creators' original discretion in classification, which differs from what MFK policies expect [92].
Some viewers thus believed the responsibility lies with both consumers and creators to avoid misclassifications, as a viewer commented: "Message the creator. They need to uncheck the 'made for kids' box in their video settings."
5.1.2 User-Driven False Negatives of MFK. False negatives refer to instances where content is classified as MFK and thus should be restricted under the MFK guidelines [92, 94] but is not recognized or treated as such by creators and consumers. Specifically, creators and consumers have felt that the MFK classification limits their ability to engage with content as they please. Thus, they devise methods to maintain their autonomy, creating false negative actions that treat videos as if they are not MFK, even though they are, as labeled by the MFK classification system.
One such method involves the use of playlists, a feature that allows users to organize, curate, and share videos in a specific order. Despite MFK restrictions, a viewer shared:
I just tested this and found that I can still add Made for Kids videos to playlists from the search results, except for the actual video page! Hover over the video's title, click on the three dots that appear on the right side, and click on Save to Playlist. (...) It's 2022, and I've been using this method since COPPA started, and it has consistently worked for me. (consumer; r/youtube)
This consumer not only utilized a system flaw to create a false negative but also shared the knowledge with others, thus spreading the practice. As this consumer validated its effectiveness in both 2020 and 2022, the MFK classification evidently did not enforce moderation policies or implement functional changes on YouTube over this period.
Children, recognized as a unique group of content consumers, often venture into areas not covered by MFK classification, potentially leading to unintended data collection. For example, a creator wrote:
This proposed rule (MFK) won't change anything. Kids can just use a parent's account via iPad/phone/TV/computer or laptop, so all that's really happening is the creators are being punished. (creator; r/youtube)
In this case, as YouTube applied MFK policies [91] at the video level rather than considering the broader context of children's media interaction habits, the creator above claimed that it would be easy to create false negatives of MFK when children inadvertently access their non-MFK content.
Another creator questioned, "How are YouTube and the FTC gonna deal with kids commenting on Not Made for Kids videos and adding them to playlists?" This query pointed out that children have been engaging with content outside the MFK classification, hinting at the widespread user-driven creation of false negatives.
Parents deem that MFK classification undermines their autonomy in managing their children's content consumption experience. For example, a parent expressed their frustration:
I am a parent myself, and it is my job to parent my kid how I see fit. It is up to me to know or accept that a company like Google might scrape data targeted at my kids and me. Just like most laws like this, they always start out with good intentions, but you know how the saying goes. (consumer; r/youtube)
This parent highlights two issues that might create false negatives of MFK. First, the MFK classification did not offer a parental consent option, which was misaligned with COPPA requirements [24, 25]. Second, in contrast to how parents can make autonomous, informed decisions about their children's well-being [54], this parent complained about the lack of control over their kids' data privacy, showing a disconnect between policy and practical parental needs.
The lack of nuanced control is further emphasized by another parent's request for more selective content filtering:
I want an app that will give me the ability to select the shows/channels *I* want my kids to be able to see. Whether that app is YouTube, YouTube Kids, or whatever, I don't care. For example, my son is 13 and big into Fortnite. I want him to be able to watch specific YouTubers that do Fortnite while excluding others. (consumer; r/youtube)
This case underscored the deficiencies in the current MFK classification design, which did not afford parents the active role they seek in the content classification process and inadvertently prompted them to create workarounds that could lead to false negatives: videos that the MFK classification system would categorize as not suitable for children being treated by parents as acceptable for their children's content consumption.
5.2 Unexpected Consequences of Practicing MFK Classification as Creators
The unexpected consequences of MFK classification refer to the predicaments creators need to overcome in their content creation, compared to the time before YouTube enforced MFK policies [91, 94]. YouTube expects creators to practice MFK classification: "We rely on you to tell us if your content is intended for kids because you know your content best. We trust you to set your audience accurately" [94]. However, creators feel uncertain about what accurate labeling is, which prompts them to exert additional effort to standardize content creation or dampens their passion for content creation.
5.2.1 Uncertainty Regarding Proper Classification. While YouTube explicates which content categories are MFK [91], creators still struggle to understand how to apply these policies to classification practices, especially when their videos are filled with different, nuanced content elements. For example, video games are a subject matter generally directed at kids, as stated by the MFK policy [91]. However, this generalization does not account for the diverse genres and themes within gaming, many of which may not be suitable for children. This one-size-fits-all approach to classification presents challenges for creators. A creator posted:
I don't know if my video game videos are "made for kids" or not due to the lack of clarification. Even if games like Call of Duty are violent and look realistic, they can say it is kid-directed as the games are animated. And what about Fortnite, Minecraft, etc.? Kids can watch anything, and anything can be made for kids. Some are just clearer than others. (creator; r/youtube)
This creator struggled to make a classification decision given the abundance of content elements such as game types, some extent of violence, and visuals between reality and animation. This struggle showed that the simple term "game" in the MFK policy cannot sufficiently cover the actual complexity of videos and creators' practices in assessing their videos. Besides, content elements that are not directly measurable also pose obstacles. For example, a game creator posted:
If I play Minecraft, which is a game *directed to kids*, but I myself am aiming for a young adult+ audience because my humor is a bit unsuitable for kids, I'm in a very grey area if you consult those guidelines. (creator; r/youtubers)
Minecraft is a popular sandbox game. This creator thought their humor, as a creative part of their Minecraft gaming videos, made the videos inappropriate for children. Meanwhile, they complained that MFK policies did not explain the extent or kind of creativity that can make a gaming video MFK [91].
As a viewer observed, this vagueness might lead to improper, careless classification practices: "I think a lot of people marked their own channels as made for kids, fearing that if they didn't, they'd be chased up/sued." This viewer shared that many creators' confusion or struggle was exacerbated by fears of legal repercussions.
5.2.2 Extra Labor for Disagreed Classifications. Creators face additional work when disagreeing with MFK classifications made by YouTube. A creator posted:
Or YouTube set it themselves [through machine learning algorithms]. I had a video I had to manually set back to Not Made for Kids 4 times. Every once in a while, I go through my videos and double-check to make sure YouTube didn't make the decision on its own again. (creator; r/youtube)
This creator manually labeled back and forth and frequently checked other videos' statuses on the YouTube Studio dashboard to make sure the automatic MFK classification made sense to them. Given that one of the two primary criteria for classifying a video as MFK is "children are the primary audience of the video" [91], many creators assume that when a video is labeled as MFK, the primary consumers are already kids. When facing the other criterion, "children are not the primary audience, but the video is still directed at children," creators will likely assume that YouTube chooses the criterion for its classification decision randomly.
For example, a creator posted:
Literally, almost every video I've seen marked [by YouTube] for kids is not intended for kids, likely leading to alienating their main audience by disabling comments and probably bringing in a child audience that the video, in some ways more than others, is clearly *not* intended for. I had to bring it to one user's attention through another video that was not marked as such to remove the marker, as it was clearly not intended as such. (creator; r/youtube)
Without classication explanations, the creator disagreed that
the videos labeled by the platform should be classied as MFK.
Then, they felt compelled to draw online trac of older consumers
to MFK videos to remove the labels because they did not want the
negative impacts of the labels. Some creators even mentioned their
channel-level eorts:
My original channel was going to be a kid channel. I
did more research into monetizing made-for-kids videos.
If I were monetized, I’d make little to no money at all,
and it absolutely wouldn’t be worth it. I’d keep up with
educating parents but make it geared toward the parent.
(creator; r/NewTubers)
This creator above performed the extra labor by changing their
whole channel’s content category to avoid MFK classication be-
cause of the low and unpromising protability of content creation
associated with it.
5.2.3 Reduced Motivation for Creation. MFK classification often demotivates creators from creating content. A creator highlighted the challenge of establishing a kid's channel given its unpromising profitability and fanbase: "Unless it's a hobby or an insanely huge channel, I don't see the point of having a kid's channel. No way to get one off the ground in this environment."
The ambiguity of MFK policies further discourages creators. A creator posted:
This foggy space between what we are and are not allowed to do disheartened and uninspired me to start uploading anything at all. Some of the things I would like to make would be safe for all. Some would only be safe for a more mature crowd. Do I walk the tightrope and throw some truly unique and fun ideas out the window in case I tread in the wrong territory or make a claim that a bot deems incorrect? "For kids" does not mean the same thing as "safe for kids." (creator; r/youtube)
This case showed that creators clearly understood the difference between MFK and "safe for kids." They were afraid that the YouTube platform's algorithms would mix up these concepts and misclassify their "safe for kids" videos as MFK. Due to this fear, the creator hesitated and considered not investing more creativity in their content creation.
5.3 Intersection of MFK Classification with Other Platform Designs
MFK classification is not mutually exclusive with other platform designs. Instead, creators and consumers find it intersected with other designs, especially other classification systems, in three ways: coordination, inconsistency, and conflict. While these intersections sometimes align with protecting children's privacy, creators and consumers often observe them as failing to do so or even negatively impacting user interests.
5.3.1 Coordination. Coordination refers to how MFK classification does not work alone but works with other classification systems for child privacy protection. A viewer posted: "How do I get my features back as a watcher, not an uploader (creator)? I am not a kid and would like to use all the features of YouTube." This viewer disagreed that such coordination between video function disablement and MFK classification should be applied to their content consumption experience as an adult. Besides consumer experience, creators also found that MFK classification coordinates with other content moderation classifications to influence their MFK classification decisions. For example, a creator discussed with a viewer:
Creator: My Hot Wheels review channel is for adult collections but is completely family-friendly. But because they are a "kid toy" that appeals to kids (according to MFK policies), I'm probably going to have to label them "made for kids," and therefore, what is the point of trying to grow a channel which is never going to amount to anything.
Consumer: Start swearing?
Creator: I would do that, but it's in a kid-friendly game because if I curse, they would flag my video or terminate my channel. (r/youtube)
When following MFK policies to classify a video as MFK, the creator in the above case was more likely to receive less income and audience engagement from the video. But if the creator cursed in a video and thus labeled it as non-MFK, they would violate other content policies (e.g., the community guidelines [96]) beyond the MFK one, losing videos or even the whole channel. Thus, the creator weighed the risk of content removal higher than the limited growth due to MFK classification, highlighting the difficult position creators are in.
Creators further voiced dissatisfaction about how data tracking coordinates with and is tied to content moderation:
YouTube only offers these two in pairs: disable tracking AND mark as made for kids, or enable tracking and mark as not made for kids. I just want to disable all tracking, data collection, comments, whatever, to be on the safe side of the crazy people in the FTC. (creator; r/youtube)
The creator wanted to decouple labeling content as intended for children (i.e., MFK) from data tracking because they wished to avoid restricting their audience solely to children. However, the coordination between data tracking and content moderation discouraged them from fully embracing the MFK classification framework.
5.3.2 Inconsistency. Inconsistency refers to how the MFK classification works with other designs in a way that is inconsistent with what is stated in the MFK moderation policies [91, 94]. For example, a newbie creator posted: "What's even weirder is that when I set a video as made for kids on the first video that I made, it still allows comments." As comment disablement is a designed change under the MFK policy [94], the creator here felt surprised that it did not work in the designed way. This posed the risk of collecting behavioral data of potential kid consumers.
Such inconsistency also appears in non-MFK videos. For example, a creator shared:
My friend was trying to turn notifications on for my channel and got a warning saying, "This action is turned off for content made for kids." But the thing is, I never selected a video or my channel for kids. Why does it happen, and how do I fix it? (consumer; r/youtube)
"Notification" in this example refers to the bell icon beside the YouTube channel, which can notify consumers of new videos, and YouTube turns it off on MFK channels according to the policies [94]. The inconsistency existed in two phases of MFK classification. First, in its decision-making phase, if the creator assumed their channel was non-MFK, then YouTube's notification bell worked inconsistently on their channel. Second, in the sense-making phase of MFK classification results, the notification bell as a notification system only notified the consumers of MFK videos they watched but did not notify the creators who created these videos. This was inconsistent with MFK policy, where YouTube notifies creators of MFK videos that are classified by the platform [94].
While the inconsistency in the last case might be attributed to creators' lack of awareness that their videos were classified as MFK by the platform algorithms, other creators flag explicit inconsistency even when they have already classified their channels as non-MFK. For example, a creator mentioned: "I changed the setting on my channel to a hard not made for kids and set all my videos to that as well, but the problem persisted."
A creator further highlighted MFK classification's inconsistency with monetization algorithms/classifications:
I have several videos that were manually changed to "for kids" after I had published them. Interestingly, their CPM only dropped by 1/3. On the other hand, I have published videos as "for kids," and their CPM is 1/4 of what a normal video would be. (creator; r/PartneredYoutube)
As MFK policies [94] explain, there will be no targeted ads on MFK videos, meaning only contextual ads will be placed, generating lower ad income. However, the creator above recognized inconsistent ad income performance between MFK and non-MFK videos, as indicated by CPM (the net amount of ad income for every 1,000 ad impressions). Such inconsistency between MFK classification and monetization algorithms further implied uncertainty about whether YouTube's ad placement system (i.e., how YouTube places ads) consistently worked with MFK policy enforcement.
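For reference, CPM here follows the standard advertising definition rather than any YouTube-specific formula:

\[ \mathrm{CPM} = 1000 \times \frac{\text{net ad revenue}}{\text{ad impressions}} \]

So, all else being equal, an MFK video whose CPM is 1/4 that of a comparable non-MFK video earns roughly a quarter of the ad income per thousand impressions.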
5.3.3 Conflict. Conflict arises when the MFK classification operates simultaneously with other platform designs, especially other classifications, in ways that compromise the child privacy and safety protection that the MFK classification intends to implement. For example, a creator mentioned:
Because people under 13 can still go on the main site. If they (YouTube) made a 13+ requirement, and not logging in with a mature account could get you on the kid's website, that would be perfectly fine. It kind of sucks right now (with MFK classification on the main site). (creator; r/youtube)
This creator discussed two designs concerning consumers on YouTube in sequence. First is an open-access consumer model, where most YouTube videos are publicly accessible to users without registering or logging in with an account. Second is the age verification classification, where consumers need to be over 13 years old to register an account on YouTube. So, the conflict in the above case was that the open-access consumer model allowed potential kids to watch videos on YouTube without an account, while the MFK classification intended to prevent potential kids from accessing non-MFK videos and getting their data tracked by the platform.
Such conict is not rare. A creator discussed the paradox of a
video being simultaneously MFK and age-restricted:
Age restriction classication has not changed; it still
means and does the same thing. Made For Kids clas-
sication now only means what roughly COPPA in-
tended was to stop data collection and surgical ad tar-
geting aimed at kids. Now, since You are aware that
video should be restricted (due to language), You might
worry that kids will watch it (since it’s known as a car-
toon), and You will get in trouble with COPPA. (creator;
r/youtube)
Age restriction refers to a binary classication that creators need
to practice, and that can indicate whether their videos are only
for people over 18; otherwise, the YouTube platform will label the
videos on behalf of creators [
90
]. Here, the creator grappled with the
conict between age restrictions meant for adult content and MFK
regulations designed to protect children, potentially endangering
the intended consumers’ safety.
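To illustrate the label combination at the heart of this paradox, the sketch below uses a minimal, hypothetical data structure of our own (the class and field names do not reflect YouTube's internal schema) to show how a single video can simultaneously carry an MFK label and an age restriction, yielding contradictory audience expectations.

# Hypothetical illustration of the labeling paradox described above; the class and
# field names are our own assumptions and do not reflect YouTube's internal data model.
from dataclasses import dataclass

@dataclass
class VideoLabels:
    made_for_kids: bool   # creator- or platform-assigned MFK label (COPPA-driven)
    age_restricted: bool  # 18+ label applied for mature content

    def implied_audience(self) -> str:
        if self.made_for_kids and self.age_restricted:
            # The contradictory case creators reported: "for kids" yet "18+ only".
            return "contradictory: child-directed data handling, adult-only viewership"
        if self.made_for_kids:
            return "children (no personalized ads, limited features)"
        if self.age_restricted:
            return "signed-in adults over 18"
        return "general audience"

# The paradox a creator described: a cartoon with strong language.
print(VideoLabels(made_for_kids=True, age_restricted=True).implied_audience())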
MFK classication intends to prevent consumers’ data on MFK
videos from being collected by the platform, and no targeted ads
will be placed [
92
]. However, this intent is conicted with how
YouTube places the ads. A viewer posted:
They probably still collect data from clicking ads [on
MFK videos], though, like those ones in the up-next feed
that are designed to look like kids’ videos but, when
clicked, will take your child off to a third-party website.
(consumer; r/youtube)
This viewer raised the possibility that kids' data can be collected through the ads they watch and click on, which conflicts with children's privacy protection. Beyond the content itself, MFK policies did not consider how ad placement impacts potential kids, not to mention that the ads are sometimes harmful, as a viewer said: “‘try out a Slavic wife, see what happens’ might be a little inappropriate for children.”
“YouTube Kids,” a separate platform from the regular, main YouTube intended for consumers under the age of 13, is also perceived as redundant with the MFK classification. For example, a creator elaborated: “There is an app called YouTube Kids. If they can’t handle two apps at once, then shut down YouTube Kids because (with made for kids) they clearly want kids and everyone else to use the normal YouTube.” This creator implied that the MFK classification and the YouTube Kids platform should be integrated so that users would not be confused about which one was meant for child privacy protection.
Consumers also noticed such conflict. For example, a consumer observed that kid creators who seemed to be under 13 were active on YouTube: “Technically, it’s not allowed, but YouTube doesn’t ban those under 13 creators for some reason. (You can find so many under 13 creators on YouTube).” This meant that MFK classification was potentially practiced by those under 13, and thus their data might be collected by the platform as both creators and viewers.
6 DISCUSSION
The YouTube platform initially proposed the MFK classification and its associated moderation policies [91, 92, 94] to respond to the allegations from, and legal tension with, the FTC about violating children's online privacy [25]. Although both the FTC and YouTube believe the MFK classification can alleviate this legal tension, it unexpectedly creates new tension among the platform, creators, and consumers, complicating efforts toward child safety. That is, while the MFK classification has helped prevent the collection of children's personal information—enhancing child privacy—it has also inadvertently exposed children to broader safety risks, including inappropriate video content and advertising. This section thus discusses how the YouTube platform positions the MFK classification in an interwoven network of classification systems. Using and expanding on Bowker and Star's classification theories [16], we unpack this network's structure and impacts on child safety. The section closes with design principles for child-centered safety.
6.1 An Interwoven Network of Multiple Classification Systems and Its Broad Impacts on Child Safety
Bowker and Star have performed an extensive analysis of single and static classification systems through examples like apartheid's racial classification in South Africa [16]. However, our analysis diverges in a significant aspect: Digital and social media platforms like YouTube design a complex interplay of multiple, dynamic classifications. On YouTube, MFK classification intersects with other classifications like advertising restrictions and monetization algorithms, influencing content creation, monetization, and audience consumption patterns. These patterns, in turn, influence platform algorithms and trends, showing that YouTube's classification systems are interconnected.
Against this background, our findings shed light on a designed, interwoven network of classification systems that operate interdependently, in tension, and dynamically. First, the classifications on YouTube (age-restriction, MFK, and age verification) are not standalone but directly impact child safety (see blue arrows in Figure 3). As Section 5.3 shows, age-restriction (i.e., content classification for consumers only over 18) was labeled concurrently with MFK on videos, leaving creators uncertain whether the goal is to limit viewership to adults over 18 or to protect children by disabling data tracking. This is compounded by YouTube's open-access consumer model, which allows content viewing without an account and thus obscures the presence of viewers under 13. This critical child safety concern undermines age verification on YouTube, which only admits users who state they are over 13. The interdependence of these classifications often goes unnoticed by creators until it negatively affects their content, monetization, and audience engagement, resonating with Bowker and Star's concept of infrastructural inversion [16], where the underlying classifications only become visible during conflicts or breakdowns. Extending prior HCI research on such classification breakdowns [14, 44, 78], our study brings to light not only the classification at work but also the route from its structure to its impacts, compounding the experiences of users, including creators, consumers, and children.
Second, classication systems on YouTube pull dierent entities’
contention (see red arrows in Figure 3), including YouTube, creators,
and consumers. Bowker and Star highlight the concept of boundary
objects, referring to “objects that both inhabit several communities
of practice and satisfy the informational requirements of each of
them, which manages the tensions among diverse perspectives
[
16
]. MFK classication is a boundary object: It is meant to pro-
tect children’s privacy but is interpreted diversely. Our ndings
(e.g., Section 5.2) show that creators interpreted it as a negotia-
tion between their vested interests and external requirements from
YouTube or FTC. Parents, however, felt it limited their agency in
content selection for their kids (e.g., Section 5.1.2), while YouTube’s
algorithms used this label to curate content, including ads, for view-
ers (e.g., Section 5.1.1). This multiplicity reects the engagement of
diverse users with children’s online privacy measures, as seen in
prior HCI research (e.g., children [
55
,
82
], developers [
32
]. Our study
further underscores a scale challenge: Engagement with YouTube’s
network of classications extends beyond mere videos and MFK
labels to a broader array of policies (e.g., community guidelines
[
96
], algorithms, and interfaces (e.g., YouTube Studio [
93
]), forming
Figure 3: An interwoven network of classification systems impacting child safety on YouTube, as indicated by our findings. Blue arrows show interdependence between classifications, red arrows highlight contention among creators, the platform, and consumers, while green arrows show how classifications transition within the network. Double-headed arrows denote bidirectional relationships. For instance, red arrows indicate that consumers note malicious ads placed by the platform, but meanwhile, they cannot comment on MFK videos. Bolder blue arrows signify stronger relationships (e.g., MFK classification mutually affects age-restriction criteria).
As creators assign the MFK labels, they do not merely classify a
video but interact with YouTube's entire classification ecosystem, affecting everything from video uploads to content recommendations and ad placements. This again shows how a singular classification,
when entwined with others, can have profound implications for
users, particularly children.
Third, classication systems operate dynamically on YouTube
(see green circle arrows in Figure 3). Drawing on Bowker and Star’s
concept, infrastructural inversion [
16
], where classication systems
are reformed in response to breakdowns, we observe, on YouTube,
that the breakdowns don’t necessarily bring classication modi-
cations. Instead, dierent entities navigate breakdowns through
the existing classication network: Platform algorithms correct or
override classications like MFK and age restrictions (e.g., Section
5.3.2), creators make or reverse their mislabeling (e.g., Section 5.2.2),
and consumers point out dierent types of misclassications (e.g.,
Section 5.1.1). This also diers from prior work, where classica-
tions adapted to nuanced user behaviors (e.g., updated moderation
policies [
28
,
88
]), classications on YouTube dynamically shift, of-
ten transitioning from one label to another over time, while the
underlying network of classications may remain unchanged.
The complexities within this classification network could first put child privacy at risk. Previous work has highlighted the design inadequacies of social media in adhering to child privacy laws like COPPA (e.g., [37, 74]), noting parents' roles in circumventing age checks [17]. YouTube's MFK classification, which involves creators directly in classifying content for child consumers, contrasts with traditional age verification or filtered browsing [95]. YouTube's open-access consumer model further complicates child privacy efforts by allowing unaccounted viewership, including by children. This makes it nearly impossible for creators to identify whether a viewer is a child. Although creators lack data on consumers under 13 on the YouTube Studio dashboard [93], MFK policies [91, 94] still require them to label content based on whether it is primarily intended for this age group. This inconsistency, coupled with both the platform's and creators' drive to maximize growth (e.g., visibility and income), makes child privacy protection more challenging.
The MFK classication network can further expose children to
safety issues. Prior research has assessed moderation eectiveness
how it restricts the proliferation of inappropriate content (e.g.,
[
22
,
79
,
84
]). Our study highlights a more critical concern: When
YouTube relies on content creators to classify their content for mod-
eration [
94
], it is evident that they aren’t professional moderators
or labelers, often leading to errors and repeated relabeling, increas-
ing the exposure of problematic videos and advertisements. Our
ndings thus reveal a fundamental aw in merging child privacy
protection with moderation making content an indicator for con-
sumer demographics. As the MFK misclassications intersect with
other labels, coupled with children’s unpredictable content con-
sumption, this aw extends the original focus of MFK classication
on children’s data privacy into child safety on YouTube.
6.2 Designing for Child-Centered Online Safety in the Network of Classification Systems
On social media platforms like YouTube, children's online safety involves different stakeholders, such as content creators, consumers, platform designers, and policymakers. However, our findings reveal a divergence in how these groups understand and approach child safety. Content creators and consumers, in particular, face discrepancies in how they experience content creation and consumption: They wanted to weigh the moderation challenges and impacts posed by MFK classification against the necessity of protecting children's privacy. Moreover, while prior work in the legal and HCI fields has criticized platforms for inconsistent enforcement of moderation policies (e.g., [44, 77, 85, 88]), our study supplements how such inconsistency originates: Platform designs work differently from what moderation policies state and unpredictably intersect with other designs, thus undermining the force of moderation in regulating inappropriate content, including ads. In particular, when the MFK classification enforces policies and operates, we found that it ignores the design of parental consent and participation, showing a discrepancy that extends beyond platform governance issues to policy gaps between platform policymaking and COPPA [24].
Thus, we highlight two design principles that are key to enhancing child-centered online safety:
The rst design principle is the multi-stakeholder prin-
ciple in child safety. This entails giving visibility to both stake-
holders who directly get involved in child safety and those who
play an indirect role in it. On the one hand, in platforms’ content
moderation, the classication work is often invisible [
20
]. Our nd-
ings show that this invisibility can obscure the eorts of content
creators for child safety. Bowker and Star highlight the critical need
for visibility in classication systems—not only to understand and
recognize the work that goes into them but also to critically ex-
amine their impacts [
16
]. When it comes to MFK policies [
91
], it
thus means bringing to light the invisible labor that underpins child
safety within the classication network where MFK classication
is part.
On the other hand, a singular safety design needs to acknowledge the influence of multiple stakeholders. Prior DIS researchers have examined parental use of AI-assisted or technology-based decision-making [52, 54] or how children themselves use chatbots [72] to enhance safety. Our findings show that while the YouTube platform offers MFK classification as an important child safety function, there is a noticeable gap in the willingness and ability of consumers and creators to participate in MFK classification. Design implication: Thus, platforms like YouTube should support creator-consumer collaboration to positively influence child safety and avoid biases when implementing protection measures for children.
The second design principle is the systems thinking principle in child safety. Systems thinking refers to "seeing interrelationships rather than things, for seeing patterns rather than static snapshots" [80]. It emphasizes making sense of the interconnectedness and patterns of change within a system rather than viewing parts in isolation [80]. In design, our study informs two aspects of this notion: the internal aspect, which concerns the interaction among system parts/components, and the contextual aspect, which concerns how the system or its parts interact with the world, such as people.
On the one hand, our study shows the necessity of acknowledging the interconnectedness among safety designs like MFK classification and other classification systems. For example, our findings showed that MFK classification, as one type of safety design, was interwoven with other platform classifications, such as monetization algorithms, advertising settings [21, 61, 62], and age verification [37]. These connections can increase the risk of exposing child consumers to safety issues. Design implication: To address these issues, we do not suggest that MFK classification should operate in isolation. Rather, in policy enforcement, the interwoven network of classification systems should offer users greater transparency regarding the distinctions among the various classifications. This approach enables different stakeholders to examine whether child safety designs align with COPPA, preventing child data collection without parental consent. It also places users in a fair environment for content creation and consumption, free from unexpected impacts from other classifications.
On the other hand, our study highlights the potential for innovating safety designs to enhance child protection effectively. Design implication: While age verification systems typically confirm a consumer's age during the account registration stage, we suggest they should also periodically verify ages during the content consumption stage, especially when there is a significant shift in consumption patterns or an increase in child-oriented content consumption. Implementing such a safety design could hold platforms more accountable for child safety, given our findings that creators on YouTube would not be aware if children under 13 consume their content due to the open-access content consumption model.
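As a minimal sketch of this design implication, and purely under our own assumptions (the thresholds, field names, and trigger rule below are hypothetical and do not describe any existing YouTube mechanism), a platform could prompt age re-verification when child-oriented consumption rises sharply above an account's historical baseline:

# Hypothetical sketch of consumption-triggered age re-verification.
# Thresholds, field names, and the trigger rule are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class SessionStats:
    videos_watched: int
    mfk_videos_watched: int  # videos labeled "made for kids" in this session

def should_reverify_age(history_share: float, session: SessionStats,
                        jump: float = 0.3, min_videos: int = 10) -> bool:
    """Return True when child-oriented consumption rises sharply above the
    account's historical baseline, suggesting an age re-check prompt."""
    if session.videos_watched < min_videos:
        return False  # not enough signal in this session
    session_share = session.mfk_videos_watched / session.videos_watched
    return session_share - history_share >= jump

# Example: an account that historically watches 5% MFK content suddenly watches 60%.
print(should_reverify_age(0.05, SessionStats(videos_watched=20, mfk_videos_watched=12)))  # True

A trigger of this kind would complement, rather than replace, registration-time age checks, and any deployment would need to balance verification friction against the accountability gains discussed above.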
Expanding on the interaction between multiple stakeholders and safety designs, such designs should facilitate collaborative practices of child safety. Our study found that although engaged viewers identified mislabeling in MFK classifications, they lacked a mechanism for reporting these issues and stopping the spread of harmful content. Additionally, our findings reveal that many parents are excluded from participating in MFK classification or selecting content for their children. We thus propose the following three design changes:
• Social media platforms like YouTube should enhance their flagging options to enable the reporting of perceived MFK misclassifications (a minimal sketch of such a report appears after this list).
• Furthermore, platforms should provide creators with educational resources, including contact points and workshops, to ensure their content aligns with MFK policies and to avoid mislabeling.
• Informed by prior literature that advocates for parents' and children's co-use of digital devices [29, 46], we propose introducing a co-watching feature on platforms like YouTube, encouraging both parties to decide if they wish to consume videos together in real time.
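As a minimal sketch of the first design change above, the following hypothetical report structure (the field names and categories are our own assumptions, not an existing YouTube API) indicates the information a viewer-facing MFK misclassification flag might capture:

# Hypothetical data structure for a viewer-submitted MFK misclassification report.
# All names and categories are illustrative assumptions, not an existing platform API.
from dataclasses import dataclass
from enum import Enum

class MisclassificationType(Enum):
    MFK_BUT_INAPPROPRIATE = "labeled made-for-kids but contains adult themes or harmful ads"
    NOT_MFK_BUT_CHILD_DIRECTED = "child-directed content missing the made-for-kids label"

@dataclass
class MFKMisclassificationReport:
    video_id: str
    reporter_role: str                  # e.g., "parent", "viewer", "creator"
    report_type: MisclassificationType
    evidence_note: str                  # free-text description of the perceived mismatch

# Example: a parent flags an MFK-labeled cartoon whose up-next ads redirect off-platform.
report = MFKMisclassificationReport(
    video_id="example123",
    reporter_role="parent",
    report_type=MisclassificationType.MFK_BUT_INAPPROPRIATE,
    evidence_note="Ad styled like a kids' video redirects to a third-party website.",
)
print(report.report_type.value)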
7 LIMITATIONS AND FUTURE WORK
This study has a few limitations. First, a few prior HCI studies have investigated children's privacy-related experiences
on several social media platforms (e.g., [82, 98]). This means children and parents could be a future focus for understanding how they experience MFK classification on YouTube or similar child privacy-protecting technologies across platforms. Similarly, future work can further focus on child or teenage creators and how they experience privacy-protecting designs on platforms like YouTube. Second, the method we used, analyzing online discussions, might be subject to misreported experiences with MFK classification on Reddit. For example, creators might have mislabeled content without sharing such experiences as mislabeling in our data. Recognizing this potential limitation, we will dive deeper in future work with parents, kids, creators, and consumers through methods like participatory design workshops. Also, as YouTube's MFK classification is heavily influenced by COPPA regardless of region [92], we recognize that future work can delve into a localized understanding of child privacy in places complying with GDPR or age-appropriate design regulations.
8 CONCLUSION
This study delves into creators' and consumers' experiences with YouTube's MFK classification system, focusing on the broad implications of its implementation. We uncover a spectrum of user reactions, strategies, and challenges in navigating the MFK system. Our findings contribute to a deeper understanding of MFK as more than a technical measure: We identify an interwoven network of classification systems centered on MFK classification. Using and extending Bowker and Star's classification theories, we unpack how such a network challenges child safety (e.g., content moderation effectiveness, data tracking, malicious ad placement). We conclude by laying out design principles for child-centered safety on commercial and social media platforms like YouTube.
ACKNOWLEDGMENTS
We thank the associate chairs and anonymous reviewers for their insightful feedback and suggestions. This work is partially supported by the NSF under grant no. 2326505.
REFERENCES
[1]
Amelia Acker and Leanne Bowler. 2018. Youth Data Literacy: Teen Perspectives
on Data Created with Social Media and Mobile Devices. In Proceedings of the
Annual Hawaii International Conference on System Sciences. 1923–1932. https:
//doi.org/10.24251/HICSS.2018.243
[2]
Zainab Agha, Karla Badillo-Urquiola, and Pamela J. Wisniewski. 2023. “Strike at
the Root”: Co-designing Real-Time Social Media Interventions for Adolescent
Online Risk Prevention. Proc ACM Hum Comput Interact 7, CSCW1 (April 2023),
149. https://doi.org/10.1145/3579625
[3]
Syed Hammad Ahmed, Muhammad Junaid Khan, H. M. Umer Qaisar, and Gita Suk-
thankar. 2023. Malicious or Benign? Towards Eective Content Moderation for
Children’s Videos. In Proceedings of the International Florida Articial Intelligence
Research Society Conference, FLAIRS 36. https://doi.org/10.32473/airs.36.133315
[4]
Davey Alba. 2015. Google Launches “YouTube Kids, a New Family-Friendly App.
https://www.wired.com/2015/02/youtube-kids/
[5]
Sultan Alshamrani, Ahmed Abusnaina, Mohammed Abuhamad, Daehun Nyang,
and David Mohaisen. 2021. Hate, Obscenity, and Insults: Measuring the Exposure
of Children to Inappropriate Comments in YouTube. In The Web Conference
2021 - Companion of the World Wide Web Conference, WWW 2021. 508–515.
https://doi.org/10.1145/3442442.3452314
[6]
Tawq Ammari, Priya Kumar, Cli Lampe, and Sarita Schoenebeck. 2015. Man-
aging children’s online identities: How parents decide what to disclose about
their children online. In Conference on Human Factors in Computing Systems -
Proceedings. 1895–1904. https://doi.org/10.1145/2702123.2702325
[7]
Tawq Ammari and Sarita Schoenebeck. 2015. Understanding and supporting
fathers and fatherhood on social media sites. In Conference on Human Factors in
Computing Systems - Proceedings. 1905–1914. https://doi.org/10.1145/2702123.
2702205
[8]
Mary Jean Amon, Nika Kartvelishvili, Bennett I. Bertenthal, Kurt Hugenberg,
and Apu Kapadia. 2022. Sharenting and Children’s Privacy in the United States:
Parenting Style, Practices, and Perspectives on Sharing Young Children’s Photos
on Social Media. Proc ACM Hum Comput Interact 6, CSCW1 (April 2022). https:
//doi.org/10.1145/3512963
[9] National Archives. 2013. PART 312—Children’s Online Privacy Protection Rule.
https://www.ecfr.gov/current/title-16/chapter-I/subchapter- C/part-312
[10]
Karla Badillo-Urquiola, Diva Smriti, Brenna Mcnally, Evan Golub, Elizabeth
Bonsignore, and Pamela J Wisniewski. 2019. “Stranger Danger!” Social Media
App Features Co-designed with Children to Keep Them Safe Online. In Proc 18th
ACM Int Conf Interact Des Child. https://doi.org/10.1145/3311927
[11]
Emmanuelle Bartoli. 2010. Children’s data protection vs marketing companies.
International Review of Law, Computers & Technology 23, 1-2 (January 2010),
35–45. https://doi.org/10.1080/13600860902742612
[12]
Stephen Beemsterboer. 2020. COPPA killed the video star: How the YouTube
settlement shows that COPPA does more harm than good. https://publish.
illinois.edu/illinoisblj/les/2020/06/12-Stephen- COPPA.pdf
[13]
Reuben Binns, Michael Veale, Max Van Kleek, and Nigel Shadbolt. 2017. Like
Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation. In
Lecture Notes in Computer Science (including subseries Lecture Notes in Articial
Intelligence and Lecture Notes in Bioinformatics), Vol. 10540. 405–415. https:
//doi.org/10.1007/978-3- 319-67256-4_32
[14]
Lindsay Blackwell, Jill Dimond, Sarita Schoenebeck, and Cli Lampe. 2017. Clas-
sication and its consequences for online harassment: Design insights from
HeartMob. Proc ACM Hum Comput Interact 1, CSCW (November 2017), 1–19.
https://doi.org/10.1145/3134659
[15]
Lindsay Blackwell, Jean Hardy, Tawq Ammari, Tiany Veinot, Cli Lampe, and
Sarita Schoenebeck. 2016. LGBT parents and social media: Advocacy, privacy,
and disclosure during shifting social movements. In Conference on Human Factors
in Computing Systems - Proceedings. 610–622. https://doi.org/10.1145/2858036.
2858342
[16]
Georey C. Bowker and Susan Leigh Star. 2000. Sorting things out: Classication
and its consequences. MIT press.
[17]
Danah Boyd, Eszter Hargittai, Jason Schultz, and John Palfrey. 2011. Why parents
help their children lie to Facebook about age: Unintended consequences of the
“Children’s Online Privacy Protection Act”. https://rstmonday.org/ojs/index.
php/fm/article/download/3850/3075
[18]
Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychol-
ogy. Qual Res Psychol 3, 2 (January 2006), 77–101. https://doi.org/10.1191/
1478088706qp063oa
[19]
Virginia Braun and Victoria Clarke. 2019. Reecting on reexive thematic analysis.
Qual Res Sport Exerc Health 11, 4 (August 2019), 589–597. https://doi.org/10.1080/
2159676X.2019.1628806
[20]
Jie Cai, Donghee Yvette Wohn, and Mashael Almoqbel. 2021. Moderation visi-
bility: Mapping the strategies of volunteer moderators in live streaming micro
communities. In IMX 2021 - Proceedings of the 2021 ACM International Conference
on Interactive Media Experiences. 61–72. https://doi.org/10.1145/3452918.3458796
[21]
Robyn Caplan and Tarleton Gillespie. 2020. Tiered Governance and Demonetiza-
tion: The Shifting Terms of Labor and Compensation in the Platform Economy.
Soc Media Soc 6, 2 (2020). https://doi.org/10.1177/2056305120936636
[22]
Eshwar Chandrasekharan, Shagun Jhaver, Amy Bruckman, and Eric Gilbert. 2022.
Quarantined! Examining the Eects of a Community-Wide Moderation Interven-
tion on Reddit. ACM Transactions on Computer-Human Interaction (TOCHI) 29, 4
(March 2022). https://doi.org/10.1145/3490499
[23]
Stuart Cobb. 2020. It’s COPPA-Cated: Protecting Children’s Privacy in the Age
of YouTube. https://heinonline.org/HOL/Page?handle=hein.journals/hulr58&
id=997&div=&collection=
[24]
Federal Trade Commission. 1998. Children’s Online Privacy Protection Rule
(“COPPA”). https://www.ftc.gov/legal-library/browse/rules/childrens- online-
privacy-protection- rule-coppa
[25]
Federal Trade Commission. 2019. Google and YouTube Will Pay Record
$170 Million for Alleged Violations of Children’s Privacy Law. https:
//www.ftc.gov/news-events/news/press-releases/2019/09/google- youtube-will-
pay-record- 170-million-alleged-violations-childrens- privacy-law
[26]
Federal Trade Commission. 2019. Musical.ly, Inc. https://www.ftc.gov/news-
events/news/press-releases/2019/02/video-social-networking-app- musically-
agrees-settle- ftc-allegations-it-violated-childrens- privacy
[27]
Federal Trade Commission. 2023. FTC Proposes Blanket Prohibition Prevent-
ing Facebook from Monetizing Youth Data. https://www.ftc.gov/news-
events/news/press-releases/2023/05/ftc-proposes- blanket-prohibition-
preventing-facebook- monetizing-youth-data
[28]
MacKenzie F. Common. 2020. Fear the Reaper: how content moderation rules are
enforced on social media. International Review of Law, Computers & Technology
34, 2 (May 2020), 126–152. https://doi.org/10.1080/13600869.2020.1733762
[29]
Sabrina L. Connell, Alexis R. Lauricella, and Ellen Wartella. 2015. Parental Co-Use
of Media Technology with their Young Children in the USA. J Child Media 9, 1
(2015), 5–21. https://doi.org/10.1080/17482798.2015.997440
[30]
Kate Crawford and Tarleton Gillespie. 2016. What is a ag for? Social media
reporting tools and the vocabulary of complaint. New Media Soc 18, 3 (March
2016), 410–428. https://doi.org/10.1177/1461444814543163
[31]
Dipto Das, Carsten Østerlund, and Bryan Semaan. 2021. “Jol” or “Pani”?: How
Does Governance Shape a Platform’s Identity?. In Proc ACM Hum Comput Interact,
Vol. 5. https://doi.org/10.1145/3479860
[32]
Anirudh Ekambaranathan and Jun Zhao. 2021. Money makes the world go
around: Identifying barriers to beter privacy in children’s apps from developers’
perspectives. In Conference on Human Factors in Computing Systems - Proceedings.
https://doi.org/10.1145/3411764.3445599
[33]
Facebook. 2019. Facebook Community Standards. https://transparency.fb.com/
policies/community-standards/
[34]
Gavin Feller and Benjamin Burroughs. 2021. Branding Kiduencers: Regulating
Content and Advertising on YouTube. Television & New Media 23, 6 (October
2021), 575–592. https://doi.org/10.1177/15274764211052882
[35]
Yang Feng and Wenjing Xie. 2014. Teens’ concern for privacy when using social
networking sites: An analysis of socialization agents and relationships with
privacy-protecting behaviors. Comput Human Behav 33 (April 2014), 153–162.
https://doi.org/10.1016/J.CHB.2014.01.009
[36]
Jessica L. Feuston, Alex S. Taylor, and Anne Marie Piper. 2020. Conformity of
Eating Disorders through Content Moderation. Proc ACM Hum Comput Interact
4, CSCW1 (May 2020). https://doi.org/10.1145/3392845
[37]
Shannon Finnegan. 2019. How Facebook Beat the Children’s Online Privacy
Protection Act: A Look into the Continued Ineectiveness of COPPA and How
to Hold Social Media Sites Accountable in the Future. https://heinonline.org/
HOL/Page?handle=hein.journals/shlr50&id=838&div=&collection=
[38]
Jeremy Gan. 2023. YouTube reportedly dominates competition as top social media
platform for children. https://www.dexerto.com/youtube/youtube-reportedly-
dominates-competition- as-top- social-media-platform-for-children- 2264857/
[39]
GDPR. 2018. General Data Protection Regulation (GDPR). https://gdpr-info.eu/
[40]
Arup Kumar Ghosh, Karla Badillo-Urquiola, Shion Guha, Joseph J. Laviola, and
Pamela J. Wisniewski. 2018. Safety vs. surveillance: What children have to
say about mobile apps for parental control. In Conference on Human Factors in
Computing Systems - Proceedings. https://doi.org/10.1145/3173574.3173698
[41]
Cami Goray and Sarita Schoenebeck. 2022. Youths’ Perceptions of Data Collection
in Online Advertising and Social Media. Proc ACM Hum Comput Interact 6,
CSCW2 (November 2022). https://doi.org/10.1145/3555576
[42]
Robert Gorwa, Reuben Binns, and Christian Katzenbach. 2020. Algorithmic
content moderation: Technical and political challenges in the automation of
platform governance. Big Data Soc 7, 1 (January 2020). https://doi.org/10.1177/
2053951719897945
[43]
Greg Guest, Emily Namey, and Mario Chen. 2020. A simple method to assess and
report thematic saturation in qualitative research. PLoS One 15, 5 (May 2020).
https://doi.org/10.1371/JOURNAL.PONE.0232076
[44]
Oliver L. Haimson, Daniel Delmonaco, Peipei Nie, and Andrea Wegner. 2021.
Disproportionate Removals and Diering Content Moderation Experiences for
Conservative, Transgender, and Black Social Media Users: Marginalization and
Moderation Gray Areas. In Proc ACM Hum Comput Interact, Vol. 5. https:
//doi.org/10.1145/3479610
[45]
Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017.
Deceiving Google’s Perspective API Built for Detecting Toxic Comments. https:
//arxiv.org/abs/1702.08138v1
[46]
Isil Oygur Ilhan, Yunan Chen, and Daniel A. Epstein. 2023. Co-designing for the
Co-Use of Child-Owned Wearables. In Proceedings of IDC 2023 - 22nd AnnualACM
Interaction Design and Children Conference: Rediscovering Childhood. 603–607.
https://doi.org/10.1145/3585088.3593868
[47]
Shagun Jhaver, Darren Scott Appling, Eric Gilbert, and Amy Bruckman. 2019.
“Did you suspect the post would be removed?”: Understanding user reactions to
content removals on reddit. Proc ACM Hum Comput Interact 3, CSCW (November
2019), 1–33. https://doi.org/10.1145/3359294
[48]
Shagun Jhaver, Iris Birman, Eric Gilbert, and Amy Bruckman. 2019. Human-
machine collaboration for content regulation: The case of reddit automoderator.
ACM Transactions on Computer-Human Interaction 26, 5 (July 2019), 1–35. https:
//doi.org/10.1145/3338243
[49]
Shagun Jhaver, Quan Ze Chen, Detlef Knauss, and Amy Zhang. 2022. Designing
Word Filter Tools for Creator-led Comment Moderation. In Proceedings of the
2022 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.
1145/3491102.3517505
[50]
Jialun Aaron Jiang, Charles Kiene, Skyler Middler, Jed R Brubaker, and Casey
Fiesler. 2019. Moderation Challenges in Voice-Based Online Communities on
Discord. Proc. ACM Hum.-Comput. Interact. 3, CSCW (November 2019). https:
//doi.org/10.1145/3359157
[51]
Garrett Johnson, Tesary Lin, James C. Cooper, and Liang Zhong. 2024. COP-
PAcalypse? The Youtube Settlement’s Impact on Kids Content. SSRN Electronic
Journal (March 2024). https://doi.org/10.2139/SSRN.4430334
[52]
Anna Kawakami, Venkatesh Sivaraman, Logan Stapleton, Hao Fei Cheng, Adam
Perer, Zhiwei Steven Wu, Haiyi Zhu, and Kenneth Holstein. 2022. “Why Do I
Care What’s Similar?” Probing Challenges in AI-Assisted Child Welfare Decision-
Making through Worker-AI Interface Design Concepts. In DIS 2022 - Proceedings
of the 2022 ACM Designing Interactive Systems Conference: Digital Wellbeing.
454–470. https://doi.org/10.1145/3532106.3533556
[53]
Sara Kingsley, Proteeti Sinha, Clara Wang, Motahhare Eslami, and Jason I. Hong.
2022. “Give Everybody a Little Bit More Equity”: Content Creator Perspectives
and Responses to the Algorithmic Demonetization of Content Associated with
Disadvantaged Groups. Proc ACM Hum Comput Interact 6, CSCW2 (November
2022). https://doi.org/10.1145/3555149
[54]
Susanne Kirchner, Dawn K. Sakaguchi-Tang, Rebecca Michelson, Sean A. Munson,
and Julie A. Kientz. 2020. This just felt to me like the right thing to do": Decision-
Making Experiences of Parents of Young Children. In DIS 2020 - Proceedings of
the 2020 ACM Designing Interactive Systems Conference. 489–503. https://doi.org/
10.1145/3357236.3395466
[55]
Priya Kumar, Shalmali Milind Naik, Utkarsha Ramesh Devkar, Marshini Chetty,
Tamara L. Clegg, and Jessica Vitak. 2017. “No Telling Passcodes Out Because
They’re Private.”. Proc ACM Hum Comput Interact 1, CSCW (December 2017).
https://doi.org/10.1145/3134699
[56]
Priya Kumar and Sarita Schoenebeck. 2015. The modern day baby b ook: Enacting
good mothering and stewarding privacy on facebook. In CSCW 2015 - Proceedings
of the 2015 ACM International Conference on Computer-Supported Cooperative
Work and Social Computing. 1302–1312. https://doi.org/10.1145/2675133.2675149
[57]
Cli Lampe and Paul Resnick. 2004. Slash(dot) and Burn: Distributed Moderation
in a Large Online Conversation Space. In Proceedings of the 2004 conference on
Human factors in computing systems - CHI ’04. ACM Press, New York, New York,
USA.
[58]
Tianshi Li, Elizabeth Louie, Laura Dabbish, and Jason I. Hong. 2021. How Develop-
ers Talk About Personal Data and What It Means for User Privacy. Proc ACM Hum
Comput Interact 4, CSCW3 ( January 2021), 1–28. https://doi.org/10.1145/3432919
[59]
Sonia Livingstone, Mariya Stoilova, and Rishita Nandagiri. 2019. Children’s
data and privacy online: growing up in a digital age: an evidence review. http:
//www.lse.ac.uk/my-privacy-uk
[60]
Renkai Ma and Yubo Kou. 2021. “How advertiser-friendly is my video?”: YouTu-
ber’s Socioeconomic Interactions with Algorithmic Content Moderation. PACM
on Human Computer Interaction 5, CSCW2 (2021), 1–26. https://doi.org/10.1145/
3479573
[61]
Renkai Ma and Yubo Kou. 2022. “I am not a YouTuber who can make whatever
video I want. I have to keep appeasing algorithms”: Bureaucracy of Creator
Moderation on YouTube. https://doi.org/10.1145/3500868.3559445
[62]
Renkai Ma and Yubo Kou. 2022. “I’m not sure what dierence is between their
content and mine, other than the person itself”: A Study of Fairness Perception
of Content Moderation on YouTube. Proc ACM Hum Comput Interact 6, CSCW2
(2022), 28. https://doi.org/10.1145/3555150
[63]
Emily McReynolds, Sarah Hubbard, Timothy Lau, Aditya Saraf, Maya Cakmak,
and Franziska Roesner. 2017. Toys that listen: A study of parents, children, and
internet-connected toys. In Conference on Human Factors in Computing Systems -
Proceedings. 5197–5207. https://doi.org/10.1145/3025453.3025735
[64]
Kathryn C. Montgomery, Je Chester, and Tijana Milosevic. 2017. Children’s
Privacy in the Big Data Era: Research Opportunities. Pediatrics 140 (November
2017), S117–S121. https://doi.org/10.1542/PEDS.2016- 1758O
[65]
Carol Moser, Tianying Chen, and Sarita Y. Schoenebeck. 2017. Parents’ and
children’s preferences about parents sharing about children on social media. In
Conference on Human Factors in Computing Systems - Proceedings. 5221–5225.
https://doi.org/10.1145/3025453.3025587
[66]
Helen Nissenbaum. 2004. Privacy as Contextual Integrity. Washington Law Review
79 (2004). https://heinonline.org/HOL/Page?handle=hein.journals/washlr79&
id=129&div=16&collection=journals
[67]
Anna O’Donnell. 2020. Why the VPPA and COPPA Are Outdated:
How Netix, YouTube, and Disney Can Monitor Your Family at No Real
Cost. https://heinonline.org/HOL/Page?handle=hein.journals/geolr55&id=471&
div=&collection=
[68]
Gwenn Schurgin O’Keee, Kathleen Clarke-Pearson, Deborah Ann Mulligan,
Tanya Remer Altmann, Ari Brown, Dimitri A. Christakis, Holly Lee Falik, David L.
Hill, Marjorie J. Hogan, Alanna Estin Levine, and Kathleen G. Nelson. 2011. The
Impact of Social Media on Children, Adolescents, and Families. Pediatrics 127, 4
(April 2011), 800–804. https://doi.org/10.1542/PEDS.2011- 0054
[69]
Luci Pangrazio and Neil Selwyn. 2018. “It’s Not Like It’s Life or Death or What-
ever”: Young People’s Understandings of Social Media Data. Social Media and So-
ciety 4, 3 (July 2018). https://doi.org/10.1177/2056305118787808/ASSET/IMAGES/
LARGE/10.1177_2056305118787808-FIG1.JPEG
[70]
Kostantinos Papadamou, Antonis Papasavva, Savvas Zannettou, Jeremy Black-
burn, Nicolas Kourtellis, Ilias Leontiadis, Gianluca Stringhini, and Michael Siri-
vianos. 2020. Disturbed YouTube for Kids: Characterizing and Detecting In-
appropriate Videos Targeting Young Children. In Proceedings of the Interna-
tional AAAI Conference on Web and Social Media, Vol. 14. 522–533. https:
//doi.org/10.1609/ICWSM.V14I1.7320
[71]
Jessica A. Pater, Moon K. Kim, Elizabeth D. Mynatt, and Casey Fiesler. 2016.
Characterizations of online harassment: Comparing policies across social media
platforms. In Proceedings of the International ACM SIGGROUP Conference on
Supporting Group Work. 369–374. https://doi.org/10.1145/2957276.2957297
[72]
Lara Schibelsky Godoy Piccolo, Pinelopi Troullinou, and Harith Alani. 2021.
Chatbots to Support Children in Coping with Online Threats: Socio-technical
Requirements. In DIS 2021 - Proceedings of the 2021 ACM Designing Interactive
Systems Conference: Nowhere and Everywhere. 1504–1517. https://doi.org/10.
1145/3461778.3462114
[73]
Dongxiao Qin. 2016. Positionality. The Wiley Blackwell Encyclopedia of Gender
and Sexuality Studies (April 2016), 1–2. https://doi.org/10.1002/9781118663219.
WBEGSS619
[74]
Irwin Reyes, Primal Wijesekera, Joel Reardon, Amit Elazari Bar On, Abbas
Razaghpanah, Narseo Vallina-Rodriguez, and Serge Egelman. 2018. “Won’t
Somebody Think of the Children?” Examining COPPA Compliance at Scale.
In The 18th Privacy Enhancing Technologies Symposium (PETS 2018). 63–83.
https://doi.org/10.1515/popets-2018- 0021
[75]
Ivan Rivera. 2019. CRAN - Package RedditExtractoR. https://cran.r-project.org/
web/packages/RedditExtractoR/index.html
[76]
Sarah T. Roberts. 2019. Behind the Screen: content moderation in the shadows of
social media.
[77]
Barrie Sander. 2019. Freedom of Expression in the Age of Online Platforms: The
Promise and Pitfalls of a Human Rights-Based Approach to Content Moderation.
Fordham Int Law J (2019).
[78]
Morgan Klaus Scheuerman, Jacob M. Paul, and Jed R. Brubaker. 2019. How
computers see gender: An evaluation of gender classication in commercial
facial analysis and image labeling services. Proc ACM Hum Comput Interact 3,
CSCW (November 2019), 33. https://doi.org/10.1145/3359246
[79]
Joseph Seering, Robert Kraut, and Laura Dabbish. 2017. Shaping Pro and Anti-
Social Behavior on Twitch Through Moderation and Example-Setting. In Pro-
ceedings of the 2017 ACM Conference on Computer Supported Cooperative Work
and Social Computing (CSCW ’17). Association for Computing Machinery, New
York, NY, USA, 111–125. https://doi.org/10.1145/2998181.2998277
[80]
Peter M. Senge. 1990. The Fifth Discipline: The art and practice of the learning
organization. Broadway Business. https://books.google.com/books/about/The_
Fifth_Discipline.html?id=wg9DG42quXEC
[81]
Kumar Bhargav Srinivasan, Cristian Danescu-Niculescu-Mizil, Lillian Lee, and
Chenhao Tan. 2019. Content removal as a moderation strategy: Compliance
and other outcomes in the changemyview community. Proc ACM Hum Comput
Interact 3, CSCW (November 2019), 163. https://doi.org/10.1145/3359265
[82]
Kaiwen Sun and Carlo Sugatan. 2021. They see you’re a girl if you pick a
pink robot with a skirt: A qualitative study of how children conceptualize data
processing and digital privacy risks. In Conference on Human Factors in Computing
Systems - Proceedings. https://doi.org/10.1145/3411764.3445333
[83]
TikTok. 2019. TikTok for Younger Users. https://newsroom.tiktok.com/en-
us/tiktok-for- younger-users
[84]
Milo Z. Trujillo, Samuel F. Rosenblatt, Anda Jáuregui Guillermo De, Emily Moog,
Briane Paul, V. Samson, Laurent Hébert-Dufresne, and Allison M. Roth. 2021.
When the Echo Chamber Shatters: Examining the Use of Community-Specic
Language Post-Subreddit Ban. https://doi.org/10.48550/arxiv.2106.16207
[85]
Kristen Vaccaro, Christian Sandvig, and Karrie Karahalios. 2020. “At the End
of the Day Facebook Does What It Wants”: How Users Experience Contesting
Algorithmic Content Moderation. In Proceedings of the ACM on Human-Computer
Interaction. 1–22. https://doi.org/10.1145/3415238
[86]
Kristen Vaccaro, Ziang Xiao, Kevin Hamilton, and Karrie Karahalios. 2021. Con-
testability For Content Moderation. Proc ACM Hum Comput Interact 5, CSCW2
(October 2021), 28. https://doi.org/10.1145/3476059
[87]
Heather Wilson. 2020. YouTube Is Unsafe for Children: YouTube’s Safeguards
and the Current Legal Framework Are Inadequate to Protect Children from
Disturbing Content. https://heinonline.org/HOL/Page?handle=hein.journals/
sjel10&id=237&div=&collection=
[88]
Richard Ashby Wilson and Molly K. Land. 2020. Hate Speech on
Social Media: Content Moderation in Context. Conn Law Rev 52
(2020). https://heinonline.org/HOL/Page?handle=hein.journals/conlr52&id=
1056&div=28&collection=journals
[89]
Pamela Wisniewski, Heng Xu, Mary Beth Rosson, and John M. Carroll. 2017.
Parents just don’t understand: Why teens don’t talk to parents about their online
risk experiences. In Proceedings of the ACM Conference on Computer Supported
Cooperative Work. 523–540. https://doi.org/10.1145/2998181.2998236
[90]
YouTube. 2023. Age-restricted content. https://support.google.com/youtube/
answer/2802167?hl=en
[91]
YouTube. 2023. Determining if your content is “made for kids.”. https://support.
google.com/youtube/answer/9528076?hl=en
[92]
YouTube. 2023. Frequently asked questions about “made for kids.”.
https://support.google.com/youtube/answer/9684541?hl=en#zippy=%2Chow-
do-i- know-if-my-content-is- not-made- for-kids
[93]
YouTube. 2023. Navigate YouTube Studio. https://support.google.com/youtube/
answer/7548152?hl=en
[94]
YouTube. 2023. Set your channel or video’s audience. https:
//support.google.com/youtube/answer/9527654?hl=en&ref_topic=9689353&
sjid=16427619472020172874-NA#
[95]
YouTube. 2023. Your YouTube content and Restricted Mode. https://support.
google.com/youtube/answer/7354993?hl=en
[96]
YouTube. 2023. YouTube Community Guidelines & Policies. https://www.
youtube.com/howyoutubeworks/policies/community-guidelines/
[97]
Leah Zhang-Kennedy, Christine Mekhail, Sonia Chiasson, and Yomna Abde-
laziz. 2016. From nosy little brothers to stranger-danger: Children and par-
ents’ perception of mobile threats. In Proceedings of IDC 2016 - The 15th In-
ternational Conference on Interaction Design and Children. 388–399. https:
//doi.org/10.1145/2930674.2930716
[98]
Jun Zhao, Ge Wang, Carys Dally, Petr Slovak, Julian Edbrooke-Childs, Max
Van Kleek, and Nigel Shadbolt. 2019. ‘I make up a silly name’: Understanding
children’s perception of privacy risks online. In Conference on Human Factors in
Computing Systems - Proceedings. https://doi.org/10.1145/3290605.3300336