Labeling in the Dark: Exploring Content Creators’ and
Consumers’ Experiences with Content Classification for Child
Safety on YouTube
Renkai Ma
College of Information Sciences and Technology,
Pennsylvania State University
USA
renkai@psu.edu
Zinan Zhang
College of Information Sciences and Technology,
Pennsylvania State University
USA
zzinan@psu.edu
Xinning Gui
College of Information Sciences and Technology,
Pennsylvania State University
USA
xinninggui@psu.edu
Yubo Kou
College of Information Sciences and Technology,
Pennsylvania State University
USA
yubokou@psu.edu
ABSTRACT
Protecting children’s online privacy is paramount. Online platforms
seek to enhance child privacy protection by implementing new
classification systems into their content moderation practices. One prominent example is YouTube's "made for kids" (MFK) classification. However, traditional content moderation focuses on managing content rather than users' privacy; little is known about how users experience these classification systems. Thematically analyzing online discussions about YouTube's MFK classification system, we present a case study on content creators' and consumers' experiences. We found that creators and consumers perceived MFK classification as misaligned with their actual practices, creators encountered unexpected consequences of practicing labeling, and creators and consumers identified MFK classification's intersections with other platform designs. Our findings shed light on an interwoven network of multiple classification systems that extends
the original focus on child privacy to encompass broader child
safety issues; these insights contribute to the design principles of
child-centered safety within this intricate network.
CCS CONCEPTS
• Human-centered computing → Empirical studies in collaborative and social computing.
KEYWORDS
COPPA, content creation, child privacy protection, content creator,
child safety
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than the
author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or
republish, to post on servers or to redistribute to lists, requires prior specific permission
and/or a fee. Request permissions from permissions@acm.org.
DIS ’24, July 1–5, 2024, IT University of Copenhagen, Denmark
©2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 979-8-4007-0583-0/24/07
https://doi.org/10.1145/3643834.3661565
ACM Reference Format:
Renkai Ma, Zinan Zhang, Xinning Gui, and Yubo Kou. 2024. Labeling in
the Dark: Exploring Content Creators’ and Consumers’ Experiences with
Content Classication for Child Safety on YouTube. In Designing Interactive
Systems Conference (DIS ’24), July 1–5, 2024, IT University of Copenhagen,
Denmark. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/
3643834.3661565
1 INTRODUCTION
Protecting children’s online privacy is paramount. Notable laws,
such as the Children’s Online Privacy Protection Act (COPPA) in
the US [24] and the General Data Protection Regulation (GDPR) in Europe [39], underscore this commitment. The emphasis on safeguarding children's online privacy stems from their inherent vulnerabilities and limited capacity to understand the risks of sharing personal information (e.g., name, browsing habits), such as online harassment and cyberbullying [2, 89], as well as the inappropriate commercial use of children's information for targeted advertising [41], which COPPA and GDPR aim to mitigate. As these risks evolve due to technical advancements, such as recommendation algorithms [59] and advertising technologies [34], particularly with the growing popularity of social media, legal scholars have called for a more nuanced, modernized update of COPPA (e.g., [37, 67, 74]). Meanwhile, HCI researchers (e.g., [2, 41, 82, 98]) have striven to design a secure online environment for youth and minors.
To enhance child privacy protection, several online platforms in
the US have implemented new classification systems in their existing content moderation practices. Content moderation refers to the organized practice of screening user-generated content to determine its appropriateness for a certain platform [76]. The changes in moderation practices, including the incorporation of new classification systems, emerged in response to fines for violating COPPA. In 2019, TikTok was fined $5.7 million by the Federal Trade Commission (FTC) for violating COPPA [26]. In response, TikTok launched "TikTok for Younger Users," a restricted version that shows algorithmically curated content and restricts minors from generating public videos or comments [83]. Similarly, after a $170 million FTC
fine [25], YouTube launched the "made for kids" (MFK) classification system [91], which requires content creators to classify videos, disabling consumers' data tracking [92]. Specifically, MFK videos restrict consumers from making comments and impact creator content's monetization performance, a type of moderation termed "demonetization" [21, 53, 60].
However, traditional content moderation practices focus on managing content [76] rather than user privacy. While moderation practices contribute to general child safety by classifying content as inappropriate for children (e.g., [3, 5, 70]), relatively little research examines how these moderation practices directly contribute to child privacy protection, especially preventing online platforms from collecting children's personal information. Also, social media platforms implement child privacy laws like COPPA [24] by restricting content creation and consumption (e.g., [83, 92]). However, prior work (e.g., [2, 8, 55, 82, 89]) primarily focuses on the roles of parents, adolescents, and children in child privacy on social media, leaving a gap in understanding the roles of creators and consumers. Given these research gaps, we chose YouTube's MFK classification as a case study to explore creators' and consumers' experiences with the implementation of child privacy protection.
YouTube has been one of the most popular platforms among children [38] and, after receiving one of the heaviest FTC fines for violating COPPA, has made some of the most notable initiatives in child privacy protection design, such as MFK classification [91, 92] and the YouTube Kids app, compared with other platforms like TikTok [26, 83] and Facebook Messenger [27]. So, we ask: How do content creators and consumers experience the MFK classification system on YouTube?
Guided by our research question, we used reflexive thematic analysis to qualitatively analyze relevant online discussions posted in YouTube-related communities on Reddit. Creators and consumers observed misalignment between MFK classification and their actual practices, resulting in false positives and false negatives of MFK classification. Creators experienced unintended consequences when manually practicing MFK classification. Both creators and consumers further observed that MFK classification intersected with other platform designs, especially classification systems, through coordination, inconsistency, and conflict. Drawing from and extending Bowker and Star's classification theories [16], we discuss how our findings indicate an interwoven network of classification systems that extends the MFK classification's original focus of child privacy to a broader issue of child safety (e.g., inappropriate content and advertisements). We thus put forward the design principles of child-centered safety in such an interwoven network of classification systems.
Our study utilizes social media discussions to delve into creators' and consumers' experiences associated with YouTube's MFK classification system, reflecting on how it shapes child safety. Rather than cataloging an exhaustive account of creators' motivations or how they use the MFK system to protect child privacy, our study prioritizes a practical lens to reveal a wide range of user reactions, strategies, and challenges underlying their experiences with MFK classification, as YouTube and the FTC require creators to directly implement child privacy protection through the MFK classification [25, 92]. This is vital for understanding the intricate ways creators contribute to child safety on the platform. Our findings also highlight the practical application of the MFK system and its influence on content consumption, thereby underpinning the necessity for policy and platform design to authentically reflect the dynamics among creators, consumers, and the platform. Our exploratory study can thus contribute to the HCI and designing interactive systems (DIS) communities with four insights:
• We offer an empirical account of content creators' and consumers' experiences with the designs of child privacy protection on commercial and social media platforms like YouTube, enriching existing HCI literature on children's online privacy (e.g., [55, 82, 89, 98]).
• We show how social media and commercial platforms like YouTube leverage content moderation and creators' classification practices for child privacy protection, deepening HCI literature concerning moderation practices (e.g., [48, 50, 79]) and content creators' interactions with moderation designs (e.g., [49, 53, 62]).
• Drawing from and expanding on Bowker and Star's classification theories [16], we show an interwoven network of classification systems on YouTube and its broad effects on child safety beyond the original focus of child privacy.
• In such an interwoven classification network, we lay out actionable design principles of child-centered safety on platforms like YouTube.
2 BACKGROUND: COPPA AND "MADE FOR KIDS" (MFK) CLASSIFICATION SYSTEM ON YOUTUBE
COPPA [24] is a US federal law established in 1998 to protect the privacy of children under 13, requiring US-based websites or services to obtain parental consent before collecting children's personal information, such as full names, home addresses, email addresses, IP addresses, behavioral data, photos, and recordings [9, 24].
To comply with COPPA, YouTube has carried out several measures over time. In 2008, it implemented an age-restriction classification to limit specific videos to viewers over 18. It introduced the YouTube Kids app/platform in 2015, providing a curated environment for kids [4]. After a $170 million FTC fine for violating COPPA in 2019 [25], YouTube launched "made for kids" (MFK) [91] on January 6, 2020. This requires all creators on the platform to manually classify their videos as MFK or non-MFK [92], with guidelines provided in Table 1 and the classification interface shown in Figure 1. YouTube uses machine learning algorithms to label all unclassified videos or override creators' classifications and claims it disables consumers' data tracking on MFK videos to enhance child privacy [91, 92, 94].
3 RELATED WORK
We discuss two literature groups: children’s online privacy and
classification in content moderation. We discuss how the former distinguishes between interpersonal and commercial dimensions of children's online privacy. We further introduce Bowker and Star's classification theories and discuss how they help understand classification in content moderation. This sets a solid ground for us to explore users' experiences with YouTube's MFK classification.
Figure 1: MFK policy of how to decide a video between MFK
and non-MFK [91].
Figure 2: Screenshot of how creators manually classify a
video as MFK or non-MFK on the YouTube Studio dashboard
[92].
3.1 The Protection of Children's Online Privacy
Protecting children’s online privacy is challenging. Nissenbaum
defines privacy as "a right to appropriate flow of personal information" [66]. While there is no simple definition of children's online privacy, several HCI studies (e.g., [55, 63, 97]) commonly mention that children interpret online privacy as recognizing the sensitivity of personal information and the need to share it selectively online. Children's increasing use of social media poses risks to their privacy. Internet safety researchers warn the public that children lack an understanding of privacy and tend to disclose too much information on social media [68], while parents might hold different perspectives on it. Researchers found that parents generally adopt a passive approach in mediating their children's device use, information sharing, and privacy education [55], but their involvement and concern increase when their children's data is collected by social media platforms [35]. For children, Ghosh et al. found that they disliked apps that were overly restrictive of their privacy, negatively impacting their relationships with parents, while children liked apps that supported self-regulation [40]. As stressed by Badillo-Urquiola et al. [10], children value personal agency and privacy rather than constant parental consent.
Under such diverse evidence concerning children’s online pri-
vacy protection, two distinguishable perspectives have emerged in
prior HCI research: interpersonal privacy and commercial privacy.
First, interpersonal privacy describes how children’s personal in-
formation is created, accessed, and multiplied through their social
connections [59]. HCI researchers have focused on "sharenting," parents' practices of sharing personal information of their children online (e.g., [6–8, 15, 56, 65]). Amon et al. found that the size of parents' social networks positively affects their parental sharing frequency [8]. Ammari et al. found that parents negotiate with each other and extended family members to establish sharing boundaries on social media (e.g., whether people can see or share their children's information) [6]. This line of work shows that different social connections of children shape the flow of their personal information online.
Second, commercial privacy concerns how online platforms
gather and analyze children’s personal information for business
purposes (e.g., [1, 64, 69]), such as targeted advertisements [11]. Recently, HCI researchers have started to explore how children understand their commercial privacy. Goray and Schoenebeck found that children have limited awareness of whether online advertisers collect their data and what types of personal information are retained by social media platforms [41]. Zhao et al. also found that while children can identify privacy risks of oversharing with their social connections online, they remain incapable of identifying online tracking and targeted advertisements [98]. Similarly, Sun et al. uncovered that children tend to characterize data tracking as operated by humans rather than analytic tools on social media platforms and are less likely to consider the platforms that collect and process their data as privacy threats [82].
Our study aims to understand the impacts of content creation
and consumption on children’s commercial privacy. In the US, social
media platforms like YouTube [91] and TikTok [83] have implemented child privacy laws like COPPA [24] to limit content creation and consumption, mitigating the risks of collecting child creators' and consumers' personal information. However, relatively little is known about how content creators and consumers, as important stakeholders in child privacy protection, perceive or respond to these measures. This study aims to fill this research gap.
3.2 Classification Theory and Classification Systems in Content Moderation
At the heart of content moderation is a classification task, where platforms organize a vast array of user-generated content (UGC) — from hate speech to misinformation — into categories defined in platforms' moderation policies (e.g., YouTube [96], Facebook [33]) to determine its appropriateness for the given platforms (Roberts, 2019). This classification is underscored by the moderation processes discussed in prior literature (e.g., [13, 36, 42]), where moderation policies establish the criteria for content classification to identify and flag policy violations [57]. Platforms typically employ human moderators for this classification task (Roberts, 2019) or develop complex algorithms to detect and categorize policy violations [45]. These algorithmic systems, refined through learning from prior moderated content, are then applied to detect new, non-compliant content [30, 42]. When moderation algorithms encounter potential classification issues (e.g., inaccurate classification), human moderators are called upon for final adjudication, typically resulting in moderation decisions like content removal [47, 81] or account suspension [44, 85].
This systematic classification in moderation profoundly resonates with Bowker and Star's classification theories [16]. Classification describes organizing things into categories, the metaphorical or literal "boxes," which significantly shape our experiences, knowledge production, and social interactions. For example, Bowker and
Star discussed an example of Dewey's library scheme, which assigns classification numbers (e.g., index) to new books in a library to allocate them to the appropriate location based on subjects. They highlighted how it facilitates knowledge access and shapes our cognition and understanding. While Bowker and Star mention that such classification is ubiquitous, it is typically invisible and only becomes noticeable when it breaks down.
Bowker and Star also stress that no classification system can fully capture the world's complexity [16]. So does moderation when it confronts the variety of content. HCI and legal scholars criticize that moderation policies across platforms lack granularity in defining the appropriateness of content [71, 88]. Similarly, Jhaver et al. found that moderators reassessed old moderation policies and articulated new ones to refine classification practices [48]. Existing moderation policies might fail to capture highly nuanced human behaviors, as stressed by Jiang et al. [50].
Bowker and Star's classification theories thus offer a lens to better understand moderation experience – the experiences of those subjected to moderation decisions. They conceptualized the notion of torque to describe how classification systems influence individuals' lived experiences – "where the 'time' of the body and of the multiple identities cannot be aligned with the 'time' of the classification system" [16]. In moderation, HCI research reflects this classification misalignment. Vaccaro et al. highlighted that Facebook users felt moderation was inconsistently applied, leading to undue account suspensions [85]. Similarly, Haimson et al. reported that sexual minority individuals felt marginalized by the misclassifications of their surgery content [44]. Ma and Kou found that YouTube creators faced financial frustration when algorithms misclassified their gaming content as violent [62].
In this study, we apply Bowker and Star's classification theories to unpack the complexities of multiple classification systems that coexist with YouTube's child privacy protection implementation (e.g., MFK classification). While some prior work delves into moderation practices of classifying content as inappropriate for general child safety (e.g., [3, 5, 70]), relatively little research examines how these practices directly contribute to child privacy protection – preventing online platforms from collecting children's personal information. Additionally, prior HCI and CSCW literature (e.g., [36, 44, 62, 86]) has touched upon users' experiences with moderation, with a particular focus on content creators on YouTube [53, 61, 62]. However, the connection between these experiences and child privacy remains unexplored. Through the case of YouTube's MFK classification system, designed to align with child privacy laws [9, 24], we aim to reveal both creators' and consumers' experiences with this system. Recently, legal scholars [12, 23, 87] have suggested that YouTube creators might inadvertently misclassify their content, and business researchers [51] have found that child-directed content creators have reduced content quality due to the MFK classification. Given these concerns, it's necessary to empirically study the experiences of both creators and consumers with the MFK classification.
4 METHODS
4.1 Data Collection
In this study, we chose online discussion data from YouTube-specific subreddits as the data source for two reasons. First, analyzing online discussions from subreddits is a common approach for data collection in HCI research (e.g., [31, 58, 60]), especially when accessing targeted participants like content creators is challenging. Second, unlike interviews that rely on human memories, online discussions offer real-time insights into users' experiences as they share online, aiding our understanding of their interactions with MFK classification. This data source enabled us to capture a broad view of experiences interacting with MFK, encompassing user reactions, perceptions, and challenges.
Thus, we chose relevant subreddits for data collection, including r/youtube, r/youtubers, r/newtubers, and r/partneredyoutube. That was because, after searching "YouTube" on Reddit, these subreddits had noticeably larger communities than others (1.1 million, 235 thousand, 328 thousand, and 57.6 thousand members, respectively), excluding biased communities such as r/fuckYTCOPPA and r/BannedYouTube. Also, the four subreddits' self-descriptions
are highly relevant in understanding both creators’ and consumers’
experiences. For example, r/youtube’s “About Community” states
it is for general discussions about YouTube, and the other three
focus on content creation. Please note that we received approval
from our institution’s Institutional Review Board (IRB) before data
collection.
Our data collection involved four steps. First, in February 2023, based on YouTube's moderation policies [91, 92] and the first author's domain knowledge about YouTube, we gathered a set of keywords, including "COPPA" and "made for kids," the two most saliently relevant terms, as well as "kids friendly," "child friendly," "kid oriented," "child oriented," "kid," and "parent," where the latter two were not strictly relevant but could potentially help fetch more data for analysis. Second, using these keywords, we fetched 1,819 threads and 54,020 associated comments from the four subreddits through the R package 'RedditExtractoR' [75]. This R package helped search the keywords in titles, posted texts, or comments and returned the search results. We then skimmed all threads and selectively read the comments to examine the richness of the dataset and collect more keywords for further data collection. Third, we identified more keywords, including "Age 13," "FTC," "children," "children primary audience," and "MFK," leading to 745 more threads and 9,814 associated comments. The final dataset included 2,564 threads and 63,834 comments, stored and analyzed in Google Sheets, focusing on submission texts and comments. Fourth, since the FTC fined YouTube in September 2019 [25], we removed threads and comments posted before January 1, 2019. This resulted in 528 threads and 8,295 comments for data analysis (see column "Data posted after January 2019" in Table 1).
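To make this pipeline concrete, the sketch below illustrates the keyword search and date filtering described above. It assumes RedditExtractoR's find_thread_urls() and get_thread_content() interface (version 3.x); exact arguments and returned column names may vary by version, so treat it as an illustrative sketch rather than the exact collection script.

```r
# Sketch of the keyword-based collection and date filtering (assumed
# RedditExtractoR 3.x interface; column names may differ by version).
library(RedditExtractoR)

keywords   <- c("COPPA", "made for kids", "kids friendly", "child friendly",
                "kid oriented", "child oriented", "kid", "parent")
subreddits <- c("youtube", "youtubers", "newtubers", "partneredyoutube")

hits <- list()
for (sub in subreddits) {
  for (kw in keywords) {
    # Search each subreddit for threads whose titles/texts match the keyword
    res <- try(find_thread_urls(keywords = kw, subreddit = sub, period = "all"),
               silent = TRUE)
    if (!inherits(res, "try-error") && is.data.frame(res)) {
      hits[[paste(sub, kw)]] <- res
    }
  }
}
threads <- unique(do.call(rbind, hits))

# Keep only threads posted on or after January 1, 2019, then pull the full
# content; get_thread_content() returns $threads and $comments data frames.
threads <- subset(threads, as.Date(date_utc) >= as.Date("2019-01-01"))
content <- get_thread_content(threads$url)
```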
Before data analysis, there was one step of data preprocessing.
We recognize that a platform like YouTube involves both content
creation and consumption by users. However, for the purpose of
this study, we need to differentiate between creators and consumers
to understand their respective behaviors and perspectives. Thus, we
started by assuming all users on YouTube are content consumers.
This was based on the understanding that using YouTube inher-
ently requires users to consume content, even at the most basic
level, such as reading video titles or understanding platform func-
tionalities, before engaging with more substantial content like the
videos themselves. Then, for content creators, we set a "creator self-disclosure" criterion: Posters, either thread or comment posters in the four subreddits, were categorized as creators if they clearly disclosed their creator identity by mentioning phrases like "my audience," "my channel," "my video," or other cues that clearly signaled their creator role.
Otherwise, we considered the posters to be consumers. We ac-
knowledge that this method may not perfectly capture all creators,
especially those not explicitly mentioning their creator role. How-
ever, given the larger proportion of consumers in online commu-
nities and the need for a practical method of categorization, this
approach provides a functional way to differentiate between the
two roles for our analysis.
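A minimal sketch of this criterion in R is shown below; it is illustrative only, the phrase list is the subset reported above, and posts is a hypothetical data frame holding the thread and comment texts.

```r
# Illustrative "creator self-disclosure" check: default every poster to
# "consumer" and relabel as "creator" when a self-disclosure phrase appears.
creator_cues <- c("my audience", "my channel", "my video")  # subset of reported cues
cue_pattern  <- paste(creator_cues, collapse = "|")

classify_role <- function(text) {
  if (is.na(text)) return("consumer")
  if (grepl(cue_pattern, tolower(text))) "creator" else "consumer"
}

# 'posts' is a hypothetical data frame: one row per thread or comment,
# with the posted text in a 'text' column.
posts$role <- vapply(posts$text, classify_role, character(1))
table(posts$role)  # rough creator/consumer split for a sanity check
```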
4.2 Data Analysis
The research team applied reflexive thematic analysis (RTA) [18, 19] inductively to analyze the whole dataset. RTA is a "theoretically flexible method" for developing, analyzing, and interpreting patterns in qualitative data [18], incorporating researchers' experiences, pre-existing knowledge, and social positions to analyze the data critically.
Data analysis involved four steps. First, two researchers famil-
iarized themselves with the threads dataset and resolved confu-
sion about the contexts that the dataset mentioned (e.g., what "age-restriction" is, how MFK classification will turn off video features such as commenting and playlists). Second, they individually screened the data and assigned initial codes in Google Sheets to represent ideas expressed in the dataset that can answer the research question. They held weekly meetings with two other senior researchers to discuss each code's relevance and correspondence with original quotations. The dataset included much data that were not directly related to MFK classification due to the broad keyword search, so a critical criterion for deciding a data point's relevance for coding was whether it discussed MFK classification, such as how creators talked about their perceptions and reactions to COPPA or the FTC and how they shared perspectives about these with others. For example, some creators shared knowledge of creator growth, such as "the advice about views/subs vs. watch time is solid," which is unrelated to our research question.
Third, the two researchers examined the relevance of the data for coding and, meanwhile, continually assigned initial codes to the dataset and conducted rounds of coding to identify what patterns (i.e., subset themes) were reflected by the initial codes. This process identified subset themes from the initial codes. For example, the researchers assigned the initial code, "MFK misaligned with parents' expected involvement, and they did not acknowledge MFK's labeling," to the quote, "I am a parent myself, and it is my job [...] It is up to me to know or accept that a company like Google might scrape data targeted at me and my kids." This code is conceptually related to other codes about how other content consumers consider MFK classification and its outcomes misaligning with their collective understanding, so we grouped them together under one theme, "MFK misaligned with consumers' collective understanding and recognition," and further reported these similar codes together in Section 5.1, Misalignment between MFK Classification and Practice. The criterion for grouping initial codes into subset themes was whether multiple codes consistently appeared and shared underlying concepts that could answer our research question. For example, codes capturing users' efforts to circumvent restrictions and not treat content as MFK-labeled, such as "avoidance strategies for using restricted features" and "parental adjustments to content access," were consolidated under the theme of Section 5.1.2, User-Driven False Negatives of MFK. This theme reflects how users actively navigate around the MFK system's limitations to maintain their engagement with content in ways that defy its restrictions. As another example, we grouped codes describing creators' struggles with MFK classification, such as "uncertainty in content classification" and "challenges with vague guidelines," under the theme of Section 5.2, Unexpected Consequences of Practicing MFK Classification as Creators. This theme reflects how broad MFK policies complicate content management for creators.
In the last data analysis step, the research team continued assigning codes to the data and grouping codes into subset themes until theoretical saturation [43]. This indicated a high percentage of initial codes within more than half the volume of the dataset and a minimal increment of initial codes after screening and analyzing around 60% of the dataset (see column "Screened and analyzed data for theoretical saturation" in Table 2). In other words, the team reached the point where no particularly new codes or themes emerged from the dataset [43]. Last, in the weekly meetings, all four researchers consolidated similar subset themes into overarching themes and discarded subset themes deemed too thin without enough initial codes. Eventually, data analysis led to a thematic map to answer the research question sufficiently through the three findings sections (Sections 5.1 to 5.3). In reporting findings, we ensured our data's anonymity by removing YouTube channel names and paraphrasing the original quotes. Please also note that while Bowker and Star's classification theories provide a valuable conceptual framework for understanding classification systems on YouTube, they did not directly drive our data analysis.
4.3 Researcher Positionality
Our interpretation of YouTube creators’ and consumers’ experi-
ences is based on our positionality [73], including social roles, intellectual history, and lived experience. The first two authors are amateur video content creators on YouTube, with the first author closely in touch with and frequently engaging with more than ten creators across content categories, such as gaming, animation, and beauty, for over three years. The other two authors, while not content creators themselves, are seasoned consumers of creators across education, gaming, and entertainment. Regarding intellectual history, the first and last authors have been researching content creators, creator-audience relationships, and YouTube since 2020, equipping the research team with rich domain knowledge. This combination of hands-on experience and academic insight positions us well to perform reflexive thematic analysis for this study.
Table 1: Data Preprocessing (Section 4.1) and Analysis for Theoretical Saturation (Section 4.2)

Data source          | Data posted after January 2019 (threads / comments) | Screened and analyzed data for theoretical saturation (threads / comments)
r/newtubers          | 168 / 1,915 | 14 / 347
r/partneredyoutube   | 25 / 210    | 6 / 85
r/youtube            | 245 / 5,396 | 39 / 4,423
r/youtubers          | 90 / 774    | 4 / 130
Total                | 528 / 8,295 | 63 / 4,985
5 FINDINGS
We found that YouTube creators and consumers perceived MFK policy enforcement as misaligned with their actual community practices (Section 5.1), creators shouldered the unexpected burden of MFK classification (Section 5.2), and both creators and audiences observed three types of intersection of MFK classification with other platform designs, including coordination, inconsistency, and conflict (Section 5.3).
5.1 Misalignment between MFK Classification and Practice
YouTube creators and consumers have observed a misalignment
between MFK classification and actual practices, resulting in false positives and false negatives: (1) the MFK classification system incorrectly flags videos as suitable for children, and (2) MFK videos
do not receive the intended level of recognition by creators and
consumers.
5.1.1 Perceived False Positives of MFK Classification. False positives
of MFK classification occur when videos classified as MFK do not match the two criteria set out by the MFK policies—namely, that the primary or intended audience is children [91, 92, 94]. One such discrepancy occurs when an MFK video should have been classified as age-restricted (i.e., for people over 18) [90] instead of MFK. A consumer posted:
I just found a video that is labeled as [made] for kids, and
yet it is age-restricted based on community guidelines.
It’s a Fritz the Cat episode where there’s a Nazi bunny,
and swastikas are shown a lot in it. There’s clearly a
bug in the AI that makes the AI fail to consider age
restriction status. (consumer; r/youtube)
This content consumer perceived that an MFK video should have been classified as age-restricted because the video contained "Jojo
Rabbit,” a comedy about a German boy who imagines his friend
is Hitler. This video thus had intense violence, death, and anti-
Semitism and was not appropriate for children to watch, as the
above consumer perceived.
Besides, creators themselves might produce false positives, as
evidenced by one who admitted:
This whole “made for kids” thing is so dumb. I’ve up-
loaded Overwatch futa porn and marked it as “made
for kids,” but nothing has happened to me. (creator;
r/youtube)
The video posted by the creator above contained adult-oriented
material from the video game “Overwatch,” which should not have
been marked as MFK. They further stressed the gap in the oversight
mechanisms of YouTube's MFK classification in correcting creators'
mislabeling.
When inappropriate content is classified as MFK, some consumers intuitively blame creators. For example, a consumer commented when a news video of a mass shooting was classified as MFK:
100 % on the uploader. They either checked the wrong
box, or they blanketed their entire channel as for kids
(which would be really stupid for news channels to do).
(consumer; r/youtube)
This consumer stressed two ways of avoiding false positives: (1) videos within one YouTube channel needed attentive classification from creators, and (2) the MFK classification should not automatically apply MFK tags to all new videos, even when a creator set their whole channel as MFK.
However, creators observe that algorithms cannot make nuanced classifications on their videos. A creator posted:
I keep getting YouTube setting my ESL language videos
specifically targeted at teens and young adults to MADE FOR KIDS ... my channel is targeted at kids, yes, (...)
but basically, my channel is 80 percent made for kids,
and 20 percent not made for kids. (creator; r/youtube)
This case underscored the limitations of YouTube's algorithms in discerning nuanced consumer targets, leading to false positives. It also shows how algorithms undermined creators' original discretion in classification, which is different from what MFK policies expect [92].
Some viewers thus believed responsibility lies with both the
consumers and creators to avoid misclassifications, as a viewer
commented: “Message the creator. They need to uncheck the ‘made
for kids’ box in their video settings”.
5.1.2 User-Driven False Negatives of MFK. False negatives refer to
instances where content is classified as MFK and thus should be restricted under the MFK guidelines [92, 94] but is not recognized or treated as such by creators and consumers. Specifically, creators and consumers have felt that the MFK classification limits their ability to engage with content as they please. Thus, they devise methods to maintain their autonomy, creating false negative actions that treat videos as if they are not MFK, even though they are, as labeled by the MFK classification system.
One such method involves the use of playlists, a feature that
allows users to organize, curate, and share videos in a specific order.
Despite MFK restrictions, a viewer shared:
I just tested this and found that I can still add Made for
Kids videos to playlists from the search results, except
for the actual video page! Hover over the video’s title,
click on the three dots that appear on the right side,
and click on Save to Playlist. (...) It's 2022, and I've
been using this method since COPPA started, and it has
consistently worked for me. (consumer; r/youtube)
The consumer above not only utilized a system flaw to create a false negative but also shared the knowledge with others, thus spreading the practice. Notably, since this consumer confirmed the method's effectiveness from 2020 through 2022, the MFK classification system evidently did not tighten policy enforcement or update the relevant functions on YouTube over time.
Children, recognized as a unique group of content consumers,
often venture into areas not covered by MFK classification, poten-
tially leading to unintended data collection. For example, a creator
wrote:
This proposed rule (MFK) won’t change anything. Kids
can just use a parent’s account via iPad/phone/TV/computer
or laptop, so all that’s really happening is the creators
are being punished. (creator; r/youtube)
In this case, as YouTube applied MFK policies [91] at a video level
rather than considering the broader context of children’s media
interaction habits, the creator above claimed that it would be easy to
create false negatives of MFK when children access their non-MFK
content inadvertently.
Another creator questioned, “How are YouTube and the FTC gonna
deal with kids commenting on Not Made for Kids videos and adding
them to playlists?" This query pointed out that children have been engaging with content outside the MFK classification, hinting at
the widespread user-driven creation of false negatives.
Parents deem that MFK classification undermines their auton-
omy in managing their children’s content consumption experience.
For example, a parent expressed their frustration:
I am a parent myself, and it is my job to parent my kid
how I see t. It is up to me to know or accept that a
company like Google might scrape data targeted at my
kids and me. Just like most laws like this, they always
start out with good intentions, but you know how the
saying goes. (consumer; r/youtube)
This parent highlights two issues that might create false negatives of MFK. First, the MFK classification did not offer a parental
consent option, which was misaligned with COPPA requirements
[24, 25]. Second, different from how parents can make autonomous, informed decisions on their children's well-being [54], this parent complained about the lack of control over their kids' data privacy, showing a disconnect between policy and practical parental needs.
The lack of nuanced control is further emphasized by another
parent's request for more selective content filtering:
I want an app that will give me the ability to select
the shows/channels *I* want my kids to be able to see.
Whether that app is YouTube, YouTube Kids, or what-
ever, I don’t care. For example, my son is 13 and big into
Fortnite. I want him to be able to watch specific YouTu-
bers that do Fortnite while excluding others. (consumer;
r/youtube)
This case underscored the deficiencies in the current MFK classification design, which did not afford parents the active role they seek in the content classification process and inadvertently prompted them to create workarounds that could lead to the creation of false negatives — videos that the MFK classification system would categorize as not suitable for children being treated by parents as
acceptable for their children’s content consumption.
5.2 Unexpected Consequences of Practicing
MFK Classification as Creators
The unexpected consequences of MFK classification refer to the predicament creators need to overcome in their content creation, compared to the time before YouTube enforced MFK policies [91, 94]. YouTube expects creators to practice MFK classification: "We rely on you to tell us if your content is intended for kids because you know your content best. We trust you to set your audience accurately" [94]. However, creators feel uncertain about what accurate labeling is, which prompts them to exert additional effort to standardize content creation or dampens their passion for content creation.
5.2.1 Uncertainty Regarding Proper Classification. While YouTube
explicates which content categories are MFK [91], creators still struggle to understand how to apply these policies to classification practices, especially when their videos are filled with different, nuanced content elements. For example, video games are a subject matter generally directed at kids, as stated by MFK policy [91]. However, this generalization does not account for the diverse genres and themes within gaming, many of which may not be suitable for children. This one-size-fits-all approach to classification presents challenges for creators. A creator posted:
I don’t know if my video game videos are “made for
kids" or not due to the lack of clarification. Even if games
like Call of Duty are violent and look realistic, they can
say it is kid-directed as the games are animated. And
what about Fortnite, Minecraft, etc.? Kids can watch
anything, and anything can be made for kids. Some are
just clearer than others. (creator; r/youtube)
This creator struggled to make a classification decision given the abundance of content elements such as game types, some extent of violence, and visuals between reality and animation. This struggle showed that the simple term "game" in the MFK policy cannot sufficiently cover the actual complexity of videos and creators' practices of assessing their videos. Besides, content elements that are not directly measurable also pose obstacles. For example, a game creator
posted:
If I play Minecraft, which is a game *directed to kids*,
but I myself am aiming for a young adult+ audience
because my humor is a bit unsuitable for kids, I’m in a
very grey area if you consult those guidelines. (creator;
r/youtubers)
Minecraft is a popular sandbox game. This creator thought their humor, as a creative part of their Minecraft gaming videos, made the videos inappropriate for children. Meanwhile, they complained that MFK policies did not explain the extent or kind of creativity that could make a gaming video MFK [91].
As a viewer observed, this vagueness might lead to improper,
careless classification practices: "I think a lot of people marked their
own channels as made for kids, fearing that if they didn’t, they’d be
chased up/sued.” This viewer shared that many creators’ confusion
or struggle was exacerbated by fears of legal repercussions.
5.2.2 Extra Labor for Disagreed Classifications. Creators face addi-
tional work when disagreeing with MFK classifications made by
YouTube. A creator posted:
Or YouTube set it themselves [through machine learning
algorithms]. I had a video I had to manually set back
to Not Made for Kids 4 times. Every once in a while, I
go through my videos and double-check to make sure
YouTube didn’t make the decision on its own again.
(creator; r/youtube)
This creator manually labeled back and forth and frequently checked other videos' statuses on the YouTube Studio dashboard to make sure the automatic MFK classification made sense to them.
Given that one of the two primary criteria for classifying a video as MFK is "children are the primary audience of the video" [91], many creators assume that when a video is labeled as MFK, the primary consumers are already kids. When facing the other criterion, "children are not the primary audience, but the video is still directed at children," creators are likely to assume that YouTube chooses the criterion for its classification decision randomly. For example, a creator posted:
Literally, almost every video I’ve seen marked [by YouTube]
for kids is not intended for kids, likely leading to alien-
ating their main audience by disabling comments and
probably bringing in a child audience that the video, in
some ways more than others, is clearly *not* intended
for. I had to bring it to one user’s attention through an-
other video that was not marked as such to remove the
marker, as it was clearly not intended as such. (creator;
r/youtube)
Without classification explanations, the creator disagreed that the videos labeled by the platform should be classified as MFK. Then, they felt compelled to draw online traffic of older consumers to MFK videos to remove the labels because they did not want the negative impacts of the labels. Some creators even mentioned their channel-level efforts:
My original channel was going to be a kid channel. I
did more research into monetizing made-for-kids videos.
If I were monetized, I’d make little to no money at all,
and it absolutely wouldn’t be worth it. I’d keep up with
educating parents but make it geared toward the parent.
(creator; r/NewTubers)
The creator above performed the extra labor by changing their whole channel's content category to avoid MFK classification because of the low and unpromising profitability of content creation
associated with it.
5.2.3 Reduced Motivation for Creation. MFK classification often demotivates creators from creating content. A creator highlighted the challenge of establishing a kid's channel given its unpromising profitability and fanbase: "Unless it's a hobby or an insanely huge channel, I don't see the point of having a kid's channel. No way to get one off the ground in this environment."
The ambiguity of MFK policies further discourages creators. A
creator posted:
This foggy space between what we are and are not al-
lowed to do disheartened and uninspired me to start up-
loading anything at all. Some of the things I would like
to make would be safe for all. Some would only be safe
for a more mature crowd. Do I walk the tightrope and
throw some truly unique and fun ideas out the window
in case I tread in the wrong territory or make a claim
that a bot deems incorrect? “For kids” does not mean
the same thing as “safe for kids.” (creator; r/youtube)
This case showed that creators clearly understood the difference
between MFK and “safe for kids.” They were afraid that the YouTube
platform’s algorithms would mix up these concepts and misclassify
their “safe for kids” videos as MFK. Due to this fear, the creator
hesitated and considered not investing more creativity in their
content creation.
5.3 Intersection of MFK Classification with
Other Platform Designs
MFK classification is not mutually exclusive with other platform designs. Instead, creators and consumers find it intersecting with other designs, especially other classification systems, in three ways: coordination, inconsistency, and conflict. While these intersections sometimes align with protecting children's privacy, creators and consumers often observe them failing to do so or even negatively impacting user interests.
5.3.1 Coordination. Coordination refers to how MFK classification does not work alone but works with other classification systems for child privacy protection. A viewer posted: "How do I get my features back as a watcher, not an uploader (creator)? I am not a kid and would like to use all the features of YouTube." This viewer disagreed that such coordination between video function disablement and MFK classification should be applied to their content consumption experience as an adult. Besides consumer experience, creators also found that MFK classification coordinates with other content moderation classifications to influence their MFK classification decisions. For
example, a creator discussed with a viewer:
Creator: My Hot Wheels review channel is for adult col-
lections but is completely family-friendly. But because
they are a “kid toy” that appeals to kids (according to
MFK policies), I’m probably going to have to label them
“made for kids,” and therefore, what is the point of try-
ing to grow a channel which is never going to amount
to anything.
Consumer: Start swearing?
Creator: I would do that, but it’s in a kid-friendly game
because if I curse, they would flag my video or terminate
my channel. (r/youtube)
When following MFK policies to classify a video as MFK, the creator in the above case was more likely to receive less income and audience engagement from the video. But if the creator cursed in a video and thus labeled it as non-MFK, they would violate other content policies (e.g., community guidelines [96]) beyond the MFK one, risking losing videos or even the whole channel. Thus, the creator weighed the risk of content removal higher than the limited growth due to MFK classification, highlighting the difficult position creators are in.
Creators further voiced dissatisfaction about how data tracking coordinates with and is tied to content moderation:
YouTube only offers these two in pairs: disable tracking
AND mark as made for kids, or enable tracking and
mark as not made for kids. I just want to disable all
tracking, data collection, comments, whatever, to be on
the safe side of the crazy people in the FTC. (creator;
r/youtube)
The creator wanted to decouple labeling content as intended for
children (i.e., MFK) from data tracking because they wished to avoid
restricting their audience solely to children. However, the coordi-
nation between data tracking and content moderation discouraged
them from fully embracing the MFK classification/framework.
5.3.2 Inconsistency. Inconsistency refers to how the MFK classification works with other designs in a way inconsistent with what is stated in the MFK moderation policies [91, 94]. For example, a newbie creator posted: "What's even weirder is that when I set a video as made for kids on the first video that I made, it still allows comments." As comment disablement is a designed change given the MFK policy [94], the creator here felt surprised that it
did not work in the designed way. This posed the risk of collecting
behavioral data of potential kid consumers.
Such inconsistency also appears in non-MFK videos. For example,
a creator shared:
My friend was trying to turn notifications on for my channel and got a warning saying, "This action is turned off for content made for kids." But the thing is, I never selected a video or my channel for kids. Why does it happen, and how to fix it? (consumer; r/youtube)
"Notification" in this example refers to the bell icon beside the YouTube channel, which can notify consumers of new videos, and YouTube turns it off on MFK channels according to the policies [94]. The inconsistency existed in two phases of MFK classification. First, in its decision-making phase, if the creator assumed their channel was non-MFK, then YouTube's notification bell worked inconsistently on their channel. Second, in the sense-making phase of MFK classification results, the notification bell as a notification system only notified the consumers of MFK videos they watched but did not notify the creators who created these videos. This was inconsistent with MFK policy, where YouTube notifies creators of MFK videos that are classified by the platform [94].
While the inconsistency in the last case could potentially be attributed to creators' lack of awareness that their videos were classified as MFK by the platform algorithms, other creators flagged explicit inconsistency even when they had already classified their channels as non-MFK. For example, a creator mentioned: "I changed the setting
on my channel to a hard not made for kids and set all my videos to
that as well, but the problem persisted.”
A creator further highlighted MFK classification's inconsistency with monetization algorithms/classifications:
I have several videos that were manually changed to
“for kids” after I had published them. Interestingly, their
CPM only dropped by 1/3. On the other hand, I have pub-
lished videos as “for kids,” and their CPM is 1/4 of what
a normal video would be. (creator; r/PartneredYoutube)
As MFK policies [94] explain, there will be no targeted ads on MFK videos, meaning only contextual ads will be placed, generating lower ad income. However, the creator above recognized inconsistent ad income performance between MFK and non-MFK videos, indicated by CPM (the net amount of ad income for every 1,000 ad impressions). Such inconsistency between MFK classification
and monetization algorithms further implied uncertainty about
whether YouTube’s ad placement system (i.e., how YouTube places
ads) consistently worked with MFK policy enforcement.
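For a purely illustrative sense of scale (the dollar figures here are hypothetical, not reported in the data): if a comparable non-MFK video earned a CPM of $4, a video whose CPM "only dropped by 1/3" after being relabeled would earn roughly $2.67 per 1,000 ad impressions, whereas a video published as MFK from the start, at "1/4 of what a normal video would be," would earn about $1.00, i.e., roughly $267 versus $100 over 100,000 monetized impressions.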
5.3.3 Conflict. Conflict arises when the MFK classification operates simultaneously with other platform designs, especially other classifications, which compromises child privacy and safety protection – what the MFK classification intends to implement. For
example, a creator mentioned:
Because people under 13 can still go on the main site.
If they (YouTube) made a 13+ requirement, and not
logging in with a mature account could get you on the
kid's website, that would be perfectly fine. It kind of sucks right now (with MFK classification on the main
site). (creator; r/youtube)
This creator discussed two designs concerning consumers on YouTube. The first is an open-access consumer model where most YouTube videos are publicly accessible to users without registering or logging in with an account. The second is the age verification classification, where consumers need to be over 13 years old to register an account on YouTube. So, the conflict in the above case was that the open-access consumer model allowed potential kids to watch videos on YouTube without an account, while the MFK classification intended to prevent potential kids
from accessing non-MFK videos and getting their data tracked by
the platform.
Such conflict is not rare. A creator discussed the paradox of a
video being simultaneously MFK and age-restricted:
Age restriction classication has not changed; it still
means and does the same thing. Made For Kids classification now only means what roughly COPPA in-
tended was to stop data collection and surgical ad tar-
geting aimed at kids. Now, since You are aware that
video should be restricted (due to language), You might
worry that kids will watch it (since it’s known as a car-
toon), and You will get in trouble with COPPA. (creator;
r/youtube)
Age restriction refers to a binary classification that creators need to practice, and that can indicate whether their videos are only for people over 18; otherwise, the YouTube platform will label the videos on behalf of creators [90]. Here, the creator grappled with the
conflict between age restrictions meant for adult content and MFK
regulations designed to protect children, potentially endangering
the intended consumers’ safety.
MFK classification intends to prevent consumers' data on MFK videos from being collected by the platform, and no targeted ads will be placed [92]. However, this intent conflicts with how YouTube places the ads. A viewer posted:
They probably still collect data from clicking ads [on
MFK videos], though, like those ones in the up-next feed
that are designed to look like kids’ videos but, when
clicked, will take your child o to a third-party website.
(consumer; r/youtube)
This viewer mentioned a possibility where kids’ data can be col-
lected by the ads they watched and clicked on. This conflicted with
children’s privacy protection. Beyond content itself, MFK policies
did not consider how ad placement impacts potential kids, not to
mention that sometimes the ads are harmful, as a viewer said: “‘try
out a Slavic wife, see what happens’ might be a little inappropriate
for children.”
“YouTube Kids,” a separate platform from the regular, main
YouTube for consumers under the age of 13, is also perceived to
be redundant with MFK classification. For example, a creator elab-
orated, “There is an app called YouTube Kids. If they can’t handle
two apps at once, then shut down YouTube Kids because (with made
for kids) they clearly want kids and everyone else to use the normal
YouTube.” This creator implied that MFK classication and the
YouTube Kids platform should be integrated so they would not be
confused about which one was for child privacy protection.
Consumers also perceived such conflict. For example, a consumer noticed that kid creators who seemed to be under 13 were active on YouTube: "Technically, it's not allowed, but YouTube doesn't ban those under 13 creators for some reason. (You can find so many under 13 creators on YouTube)." This meant MFK classification was potentially being practiced by creators under 13, whose data might thus be collected by the platform as both creators and viewers.
6 DISCUSSION
The YouTube platform initially proposed MFK classification and its associated moderation policies [91, 92, 94] to respond to the allegations from, and legal tension with, the FTC about violating children's online privacy [25]. Although both the FTC and YouTube believe MFK classification can alleviate this legal tension, it unexpectedly creates new tension among the platform, creators, and consumers, complicating efforts toward child safety. That is, while the MFK classification has helped prevent the collection of children's personal information, enhancing child privacy, it has also inadvertently exposed children to broader safety risks, including inappropriate video content and advertising. This section thus discusses how the YouTube platform positions MFK classification within an interwoven network of classification systems. Using and expanding on Bowker and Star's classification theories [16], we unpack this network's structure and its impacts on child safety. The section closes with design principles for child-centered safety.
6.1 An Interwoven Network of Multiple Classification Systems and Its Broad Impacts on Child Safety
Bowker and Star performed an extensive analysis of single, static classification systems through examples like apartheid's racial classification in South Africa [16]. However, our analysis diverges in a significant aspect: digital and social media platforms like YouTube design a complex interplay of multiple, dynamic classifications. On YouTube, MFK classification intersects with other classifications, such as advertising restrictions and monetization algorithms, influencing content creation, monetization, and audience consumption patterns. These patterns, in turn, influence platform algorithms and trends, showing that YouTube's classification systems are interconnected.
Against this background, our findings shed light on a designed, interwoven network of classification systems that operate interdependently, in tension, and dynamically. First, the classifications on YouTube – age-restriction, MFK, and age verification – are not standalone but directly impact child safety (see blue arrows in Figure 3). As Section 5.3 shows, age-restriction (i.e., content classification for consumers over 18 only) was labeled concurrently with MFK on videos, leaving creators uncertain whether the goal was to limit viewership to adults over 18 or to protect children by disabling data tracking. This is compounded by YouTube's open-access consumer model, which allows content viewing without an account, thus obscuring the presence of viewers under 13. This critical child safety concern undermines age verification on YouTube, which approves users to be on YouTube only if they are over 13. The interdependence of these classifications often goes unnoticed by creators until it negatively affects their content, monetization, and audience engagement, resonating with Bowker and Star's concept of infrastructural inversion [16], where underlying classifications only become visible during conflicts or breakdowns. Similarly, extending prior HCI research on such classification breakdowns [14, 44, 78], our study brings to light not only the classification at work but also the route from its structure to its impacts, compounding the experiences of users, including creators, consumers, and children.
Second, classification systems on YouTube draw different entities – YouTube, creators, and consumers – into contention (see red arrows in Figure 3). Bowker and Star highlight the concept of boundary objects, referring to "objects that both inhabit several communities of practice and satisfy the informational requirements of each of them," which manages the tensions among diverse perspectives [16]. MFK classification is a boundary object: it is meant to protect children's privacy but is interpreted diversely. Our findings (e.g., Section 5.2) show that creators interpreted it as a negotiation between their vested interests and external requirements from YouTube or the FTC. Parents, however, felt it limited their agency in content selection for their kids (e.g., Section 5.1.2), while YouTube's algorithms used this label to curate content, including ads, for viewers (e.g., Section 5.1.1). This multiplicity reflects the engagement of diverse users with children's online privacy measures, as seen in prior HCI research (e.g., children [55, 82], developers [32]). Our study further underscores a scale challenge: engagement with YouTube's network of classifications extends beyond mere videos and MFK labels to a broader array of policies (e.g., community guidelines [96]), algorithms, and interfaces (e.g., YouTube Studio [93]).
Figure 3: An interwoven network of classification systems impacting child safety on YouTube, as indicated by our findings. Blue arrows show interdependence between classifications, red arrows highlight contention among creators, the platform, and consumers, while green arrows show how classifications transition within the network. Double-headed arrows denote bidirectional relationships. For instance, red arrows indicate that consumers notice malicious ads placed by the platform but meanwhile cannot comment on MFK videos. Bolder blue arrows signify stronger relationships (e.g., MFK classification mutually affects age-restriction criteria).
Together, these form what Bowker and Star conceptualized as a boundary infrastructure, "objects that cross larger levels of scale than boundary objects" [16]. As creators assign MFK labels, they do not merely classify a video but interact with YouTube's entire classification ecosystem, affecting everything from video uploads to content recommendations and ad placements. This again shows how a singular classification, when entwined with others, can have profound implications for users, particularly children.
Third, classification systems operate dynamically on YouTube (see green circle arrows in Figure 3). Drawing on Bowker and Star's concept of infrastructural inversion [16], where classification systems are reformed in response to breakdowns, we observe that, on YouTube, breakdowns do not necessarily bring classification modifications. Instead, different entities navigate breakdowns through the existing classification network: platform algorithms correct or override classifications like MFK and age restrictions (e.g., Section 5.3.2), creators make or reverse their mislabeling (e.g., Section 5.2.2), and consumers point out different types of misclassifications (e.g., Section 5.1.1). This also differs from prior work in which classifications adapted to nuanced user behaviors (e.g., updated moderation policies [28, 88]); classifications on YouTube dynamically shift, often transitioning from one label to another over time, while the underlying network of classifications may remain unchanged.
The complexities within this classification network could first risk child privacy. Previous work has highlighted the design inadequacies of social media in adhering to child privacy laws like COPPA (e.g., [37, 74]), noting parents' roles in circumventing age checks [17]. YouTube's MFK classification, which involves creators directly in classifying content for child consumers, contrasts with traditional age verification or filtered browsing [95]. The open-access consumer model of YouTube further complicates child privacy efforts, allowing unaccounted viewership, including by children. This makes it nearly impossible for creators to identify whether a viewer is a child. Although creators lack data on consumers under 13 on the YouTube Studio dashboard [93], MFK policies [91, 94] still require them to label content based on its primary intention for this age group. This inconsistency, coupled with both the platform's and creators' drive to maximize growth (e.g., visibility and income), makes child privacy protection more challenging.
The MFK classification network can further expose children to safety issues. Prior research has assessed moderation effectiveness – how it restricts the proliferation of inappropriate content (e.g., [22, 79, 84]). Our study highlights a more critical concern: when YouTube relies on content creators to classify their content for moderation [94], it is evident that they are not professional moderators or labelers, which often leads to errors and repeated relabeling, increasing the exposure of problematic videos and advertisements. Our findings thus reveal a fundamental flaw in merging child privacy protection with moderation – making content an indicator of consumer demographics. As MFK misclassifications intersect with other labels, coupled with children's unpredictable content consumption, this flaw extends the original focus of MFK classification on children's data privacy into child safety on YouTube.
6.2 Designing for Child-Centered Online Safety in the Network of Classification Systems
On social media platforms like YouTube, children's online safety involves different stakeholders, such as content creators, consumers, platform designers, and policymakers. However, our findings reveal a divergence in how these groups understand and approach child safety. Content creators and consumers, in particular, face discrepancies in how they experience content creation and consumption: they had to weigh the moderation challenges and impacts posed by MFK classification against the necessity of protecting children's privacy. Moreover, whereas prior work in legal and HCI fields has criticized platforms for inconsistent enforcement of moderation policies (e.g., [44, 77, 85, 88]), our study supplements how such inconsistency originates: platform designs work differently from what moderation policies state, unpredictably intersect with other designs, and thus undermine the force of moderation in regulating inappropriate content, including ads. In particular, we found that as the MFK classification enforces policies and operates, it ignores the design of parental consent and participation, revealing a discrepancy that extends beyond platform governance issues to policy gaps between platform policymaking and COPPA [24].
Thus, we highlight two design principles that are key to enhancing child-centered online safety:
The first design principle is the multi-stakeholder principle in child safety. This entails giving visibility both to stakeholders who are directly involved in child safety and to those who play an indirect role in it. On the one hand, in platforms' content moderation, classification work is often invisible [20]. Our findings show that this invisibility can obscure content creators' efforts for child safety. Bowker and Star highlight the critical need for visibility in classification systems: not only to understand and recognize the work that goes into them but also to critically examine their impacts [16]. For MFK policies [91], this means bringing to light the invisible labor that underpins child safety within the classification network of which MFK classification is part.
On the other hand, a singular safety design needs to acknowledge the influence of multiple stakeholders. Prior DIS researchers have examined parental use of AI-assisted or technology-based decision-making [52, 54] or how children themselves use chatbots [72] to enhance safety. Our findings show that while the YouTube platform offers MFK classification as an important child safety function, there is a noticeable gap in the willingness and ability of consumers and creators to participate in MFK classification. Design implication: platforms like YouTube should support creator-consumer collaboration to positively influence child safety and avoid biases when implementing protection measures for children.
The second design principle is the systems thinking principle in child safety. Systems thinking refers to "seeing interrelationships rather than things, for seeing patterns rather than static snapshots" [80]. It emphasizes making sense of the interconnectedness and patterns of change within a system rather than viewing parts in isolation [80]. In design, our study informs two aspects of this notion: the internal aspect, which speaks to the interaction among system parts or components, and the contextual aspect, which concerns how the system or its parts interact with the world, such as people.
On the one hand, our study shows the necessity of acknowledging the interconnectedness among safety designs like MFK classification and other classification systems. For example, our findings showed that MFK classification, as one type of safety design, was interwoven with other platform classifications, such as monetization algorithms, advertising settings [21, 61, 62], and age verification [37]. These connections can increase the risk of exposing child consumers to safety issues. Design implication: to address these issues, we do not suggest that MFK classification should operate in isolation. Rather, in policy enforcement, the interwoven network of classification systems should enhance transparency to users regarding the distinctions among various classifications. This approach enables different stakeholders to examine whether child safety designs align with COPPA, preventing child data collection without parental consent. It thus positions users in a fair environment for content creation and consumption, free from unexpected impacts of other classifications.
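To illustrate this implication, below is a minimal sketch of one way a platform could surface, in a single per-video summary, which classifications apply and what each one does. The label set, field names, and explanation strings are hypothetical illustrations for this argument, not YouTube's actual data model or interface.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class VideoClassifications:
    # Hypothetical per-video label state that could be surfaced to creators and consumers
    made_for_kids: bool
    age_restricted: bool
    ad_personalization_disabled: bool
    comments_disabled: bool

def explain(labels: VideoClassifications) -> list[str]:
    """Render plain-language notes on what each active label does, keeping the
    distinctions between classifications visible rather than implicit."""
    notes = []
    if labels.made_for_kids:
        notes.append("Made for kids: viewer data is not used for personalized ads on this video.")
    if labels.age_restricted:
        notes.append("Age-restricted: intended only for viewers over 18.")
    if labels.made_for_kids and labels.age_restricted:
        notes.append("Warning: these two labels conflict; the audience setting may need review.")
    if labels.ad_personalization_disabled:
        notes.append("Ads shown here are contextual rather than personalized.")
    if labels.comments_disabled:
        notes.append("Comments are turned off on this video.")
    return notes

# Example: the conflicting label combination our findings describe
for note in explain(VideoClassifications(True, True, True, True)):
    print(note)
```

Surfacing such a summary would let creators, parents, and engaged viewers see at a glance when labels interact in unexpected ways, rather than discovering the interaction only after a breakdown.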
On the other hand, our study highlights the potential for innovating safety designs to enhance child protection effectively. Design implication: while age verification systems typically confirm a consumer's age during account registration, we suggest that they should also periodically verify ages during the content consumption stage, especially when there is a significant shift in consumption patterns or an increase in child-oriented content consumption. Implementing such a safety design could hold platforms more accountable for child safety, given our findings that creators on YouTube would not be aware if children under 13 consume their content due to the open-access content consumption model.
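As a concrete illustration of this suggestion, the following minimal sketch shows how a consumption-based re-verification trigger might be expressed. The per-video child-oriented score, the thresholds, and all names are hypothetical assumptions rather than YouTube's actual signals.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class WatchEvent:
    video_id: str
    child_oriented_score: float  # hypothetical 0-1 score, e.g., derived from the MFK label

def child_share(events: list[WatchEvent]) -> float:
    """Average child-oriented score over a window of watch events."""
    if not events:
        return 0.0
    return sum(e.child_oriented_score for e in events) / len(events)

def needs_age_reverification(
    recent: list[WatchEvent],
    baseline: list[WatchEvent],
    share_threshold: float = 0.6,  # hypothetical cutoff: viewing is mostly child-oriented
    shift_threshold: float = 0.3,  # hypothetical cutoff: a significant shift from baseline
) -> bool:
    """Flag an account for age re-verification when recent viewing is dominated by
    child-oriented content or shifts sharply from the account's earlier baseline.
    A sketch of the proposed trigger, not a platform implementation."""
    recent_share = child_share(recent)
    baseline_share = child_share(baseline)
    return recent_share >= share_threshold or (recent_share - baseline_share) >= shift_threshold

# Example: an account whose recent viewing turned heavily child-oriented
recent = [WatchEvent("v1", 0.9), WatchEvent("v2", 0.8), WatchEvent("v3", 1.0)]
baseline = [WatchEvent("v0", 0.1), WatchEvent("v4", 0.2)]
print(needs_age_reverification(recent, baseline))  # -> True
```

Such a check would complement, not replace, registration-time age verification, and any re-verification prompt would still need to respect COPPA's parental-consent requirements.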
Expanding on the interaction of multiple stakeholders with safety designs, such designs should facilitate collaborative practices of child safety. Our study found that although engaged viewers identified mislabeling in MFK classifications, they lacked a mechanism for reporting these issues and stopping the spread of harmful content. Additionally, our findings reveal that many parents are excluded from participating in MFK classification or selecting content for their children. We thus propose the following three design changes:
changes:
•
Social media platforms like YouTube should enhance their
agging options to enable the reporting of perceived MFK
misclassications.
•
Furthermore, platforms should provide creators with educa-
tional resources, including contact points and workshops,
to ensure their content aligns with MFK policies and avoid
mislabeling.
•
Informed by prior literature that advocates for co-using digi-
tal devices [
29
,
46
] between parents and children, we propose
introducing a co-watching feature on platforms like YouTube,
encouraging both parties to decide if they wish to consume
videos together in real time.
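As a minimal sketch of the first change above, the code below models what a viewer-facing MFK misclassification report might capture before being routed for review. The reason categories, field names, and review queues are hypothetical; YouTube's actual flagging pipeline is not public.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class MFKFlagReason(Enum):
    # Hypothetical reasons drawn from the mislabeling patterns our findings describe
    MFK_BUT_MATURE_CONTENT = "labeled_mfk_but_contains_mature_content"
    NOT_MFK_BUT_CHILD_DIRECTED = "not_labeled_mfk_but_clearly_child_directed"
    MFK_AND_AGE_RESTRICTED = "simultaneously_mfk_and_age_restricted"
    HARMFUL_AD_ON_MFK_VIDEO = "harmful_or_off_platform_ad_on_mfk_video"

@dataclass
class MFKMisclassificationReport:
    video_id: str
    reporter_role: str  # e.g., "consumer", "parent", or "creator"
    reason: MFKFlagReason
    note: str = ""      # free-text context from the reporter
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def route_report(report: MFKMisclassificationReport) -> str:
    """Return a hypothetical review queue; ad-related flags go to a separate
    queue because ad placement sits outside the MFK label itself."""
    if report.reason is MFKFlagReason.HARMFUL_AD_ON_MFK_VIDEO:
        return "ads-policy-review"
    return "mfk-label-review"

# Example usage under the assumptions above
report = MFKMisclassificationReport(
    video_id="abc123",
    reporter_role="parent",
    reason=MFKFlagReason.MFK_AND_AGE_RESTRICTED,
    note="Video is marked made for kids but also age-restricted for language.",
)
print(route_report(report))  # -> "mfk-label-review"
```

A structured report of this kind would also give parents, who are otherwise excluded from MFK classification, a direct channel into the labeling process.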
7 LIMITATIONS AND FUTURE WORK
This study has several limitations. First, a few prior HCI studies have investigated children's privacy-related experiences
on several social media platforms (e.g., [82, 98]). Children and parents could thus be a future focus for understanding how they experience MFK classification on YouTube or similar child privacy-protecting technologies across platforms. Similarly, future work can focus on child or teenage creators and how they experience privacy-protecting designs on platforms like YouTube. Second, the method we used, analyzing online discussions, is subject to misreported experiences with MFK classification on Reddit. For example, creators might have mislabeled content without describing such experiences as mislabeling in our data. Recognizing this limitation, future work will engage more deeply with parents, kids, creators, and consumers through methods like participatory design workshops. Also, as YouTube's MFK classification is heavily influenced by COPPA regardless of region [92], future work can delve into a localized understanding of child privacy in regions governed by GDPR or age-appropriate design.
8 CONCLUSION
This study delves into creators' and consumers' experiences with YouTube's MFK classification system, focusing on the broad implications of its implementation. We uncover a spectrum of user reactions, strategies, and challenges in navigating the MFK system. Our findings contribute to a deeper understanding of MFK as more than a technical measure: we identify an interwoven network of classification systems centered on MFK classification. Using and extending Bowker and Star's classification theories, we unpack how such a network challenges child safety (e.g., content moderation effectiveness, data tracking, malicious ad placement). We conclude by laying out design principles for child-centered safety on commercial and social media platforms like YouTube.
ACKNOWLEDGMENTS
We thank the associate chairs and anonymous reviewers for their insightful feedback and suggestions. This work is partially supported by the NSF, under grant no. 2326505.
REFERENCES
[1]
Amelia Acker and Leanne Bowler. 2018. Youth Data Literacy: Teen Perspectives
on Data Created with Social Media and Mobile Devices. In Proceedings of the
Annual Hawaii International Conference on System Sciences. 1923–1932. https:
//doi.org/10.24251/HICSS.2018.243
[2]
Zainab Agha, Karla Badillo-Urquiola, and Pamela J. Wisniewski. 2023. “Strike at
the Root”: Co-designing Real-Time Social Media Interventions for Adolescent
Online Risk Prevention. Proc ACM Hum Comput Interact 7, CSCW1 (April 2023),
149. https://doi.org/10.1145/3579625
[3]
Syed Hammad Ahmed, Muhammad Junaid Khan, H. M. Umer Qaisar, and Gita Sukthankar. 2023. Malicious or Benign? Towards Effective Content Moderation for Children's Videos. In Proceedings of the International Florida Artificial Intelligence Research Society Conference, FLAIRS 36. https://doi.org/10.32473/flairs.36.133315
[4]
Davey Alba. 2015. Google Launches “YouTube Kids,” a New Family-Friendly App.
https://www.wired.com/2015/02/youtube-kids/
[5]
Sultan Alshamrani, Ahmed Abusnaina, Mohammed Abuhamad, Daehun Nyang,
and David Mohaisen. 2021. Hate, Obscenity, and Insults: Measuring the Exposure
of Children to Inappropriate Comments in YouTube. In The Web Conference
2021 - Companion of the World Wide Web Conference, WWW 2021. 508–515.
https://doi.org/10.1145/3442442.3452314
[6]
Tawfiq Ammari, Priya Kumar, Cliff Lampe, and Sarita Schoenebeck. 2015. Man-
aging children’s online identities: How parents decide what to disclose about
their children online. In Conference on Human Factors in Computing Systems -
Proceedings. 1895–1904. https://doi.org/10.1145/2702123.2702325
[7]
Tawfiq Ammari and Sarita Schoenebeck. 2015. Understanding and supporting
fathers and fatherhood on social media sites. In Conference on Human Factors in
Computing Systems - Proceedings. 1905–1914. https://doi.org/10.1145/2702123.
2702205
[8]
Mary Jean Amon, Nika Kartvelishvili, Bennett I. Bertenthal, Kurt Hugenberg,
and Apu Kapadia. 2022. Sharenting and Children’s Privacy in the United States:
Parenting Style, Practices, and Perspectives on Sharing Young Children’s Photos
on Social Media. Proc ACM Hum Comput Interact 6, CSCW1 (April 2022). https:
//doi.org/10.1145/3512963
[9] National Archives. 2013. PART 312—Children’s Online Privacy Protection Rule.
https://www.ecfr.gov/current/title-16/chapter-I/subchapter- C/part-312
[10]
Karla Badillo-Urquiola, Diva Smriti, Brenna Mcnally, Evan Golub, Elizabeth
Bonsignore, and Pamela J Wisniewski. 2019. “Stranger Danger!” Social Media
App Features Co-designed with Children to Keep Them Safe Online. In Proc 18th
ACM Int Conf Interact Des Child. https://doi.org/10.1145/3311927
[11]
Emmanuelle Bartoli. 2010. Children’s data protection vs marketing companies.
International Review of Law, Computers & Technology 23, 1-2 (January 2010),
35–45. https://doi.org/10.1080/13600860902742612
[12]
Stephen Beemsterboer. 2020. COPPA killed the video star: How the YouTube
settlement shows that COPPA does more harm than good. https://publish.
illinois.edu/illinoisblj/files/2020/06/12-Stephen-COPPA.pdf
[13]
Reuben Binns, Michael Veale, Max Van Kleek, and Nigel Shadbolt. 2017. Like
Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation. In
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 10540. 405–415. https://doi.org/10.1007/978-3-319-67256-4_32
[14]
Lindsay Blackwell, Jill Dimond, Sarita Schoenebeck, and Cliff Lampe. 2017. Classification and its consequences for online harassment: Design insights from
HeartMob. Proc ACM Hum Comput Interact 1, CSCW (November 2017), 1–19.
https://doi.org/10.1145/3134659
[15]
Lindsay Blackwell, Jean Hardy, Tawfiq Ammari, Tiffany Veinot, Cliff Lampe, and
Sarita Schoenebeck. 2016. LGBT parents and social media: Advocacy, privacy,
and disclosure during shifting social movements. In Conference on Human Factors
in Computing Systems - Proceedings. 610–622. https://doi.org/10.1145/2858036.
2858342
[16]
Geoffrey C. Bowker and Susan Leigh Star. 2000. Sorting things out: Classification
and its consequences. MIT press.
[17]
Danah Boyd, Eszter Hargittai, Jason Schultz, and John Palfrey. 2011. Why parents
help their children lie to Facebook about age: Unintended consequences of the
"Children's Online Privacy Protection Act". https://firstmonday.org/ojs/index.
php/fm/article/download/3850/3075
[18]
Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychol-
ogy. Qual Res Psychol 3, 2 (January 2006), 77–101. https://doi.org/10.1191/
1478088706qp063oa
[19]
Virginia Braun and Victoria Clarke. 2019. Reflecting on reflexive thematic analysis.
Qual Res Sport Exerc Health 11, 4 (August 2019), 589–597. https://doi.org/10.1080/
2159676X.2019.1628806
[20]
Jie Cai, Donghee Yvette Wohn, and Mashael Almoqbel. 2021. Moderation visi-
bility: Mapping the strategies of volunteer moderators in live streaming micro
communities. In IMX 2021 - Proceedings of the 2021 ACM International Conference
on Interactive Media Experiences. 61–72. https://doi.org/10.1145/3452918.3458796
[21]
Robyn Caplan and Tarleton Gillespie. 2020. Tiered Governance and Demonetiza-
tion: The Shifting Terms of Labor and Compensation in the Platform Economy.
Soc Media Soc 6, 2 (2020). https://doi.org/10.1177/2056305120936636
[22]
Eshwar Chandrasekharan, Shagun Jhaver, Amy Bruckman, and Eric Gilbert. 2022.
Quarantined! Examining the Effects of a Community-Wide Moderation Interven-
tion on Reddit. ACM Transactions on Computer-Human Interaction (TOCHI) 29, 4
(March 2022). https://doi.org/10.1145/3490499
[23]
Stuart Cobb. 2020. It’s COPPA-Cated: Protecting Children’s Privacy in the Age
of YouTube. https://heinonline.org/HOL/Page?handle=hein.journals/hulr58&
id=997&div=&collection=
[24]
Federal Trade Commission. 1998. Children’s Online Privacy Protection Rule
(“COPPA”). https://www.ftc.gov/legal-library/browse/rules/childrens- online-
privacy-protection- rule-coppa
[25]
Federal Trade Commission. 2019. Google and YouTube Will Pay Record
$170 Million for Alleged Violations of Children’s Privacy Law. https:
//www.ftc.gov/news-events/news/press-releases/2019/09/google- youtube-will-
pay-record- 170-million-alleged-violations-childrens- privacy-law
[26]
Federal Trade Commission. 2019. Musical.ly, Inc. https://www.ftc.gov/news-
events/news/press-releases/2019/02/video-social-networking-app- musically-
agrees-settle- ftc-allegations-it-violated-childrens- privacy
[27]
Federal Trade Commission. 2023. FTC Proposes Blanket Prohibition Prevent-
ing Facebook from Monetizing Youth Data. https://www.ftc.gov/news-
events/news/press-releases/2023/05/ftc-proposes- blanket-prohibition-
preventing-facebook- monetizing-youth-data
[28]
MacKenzie F. Common. 2020. Fear the Reaper: how content moderation rules are
enforced on social media. International Review of Law, Computers & Technology
34, 2 (May 2020), 126–152. https://doi.org/10.1080/13600869.2020.1733762
[29]
Sabrina L. Connell, Alexis R. Lauricella, and Ellen Wartella. 2015. Parental Co-Use
of Media Technology with their Young Children in the USA. J Child Media 9, 1
(2015), 5–21. https://doi.org/10.1080/17482798.2015.997440
[30]
Kate Crawford and Tarleton Gillespie. 2016. What is a flag for? Social media
reporting tools and the vocabulary of complaint. New Media Soc 18, 3 (March
2016), 410–428. https://doi.org/10.1177/1461444814543163
[31]
Dipto Das, Carsten Østerlund, and Bryan Semaan. 2021. “Jol” or “Pani”?: How
Does Governance Shape a Platform’s Identity?. In Proc ACM Hum Comput Interact,
Vol. 5. https://doi.org/10.1145/3479860
[32]
Anirudh Ekambaranathan and Jun Zhao. 2021. Money makes the world go
around: Identifying barriers to better privacy in children's apps from developers'
perspectives. In Conference on Human Factors in Computing Systems - Proceedings.
https://doi.org/10.1145/3411764.3445599
[33]
Facebook. 2019. Facebook Community Standards. https://transparency.fb.com/
policies/community-standards/
[34]
Gavin Feller and Benjamin Burroughs. 2021. Branding Kidfluencers: Regulating
Content and Advertising on YouTube. Television & New Media 23, 6 (October
2021), 575–592. https://doi.org/10.1177/15274764211052882
[35]
Yang Feng and Wenjing Xie. 2014. Teens’ concern for privacy when using social
networking sites: An analysis of socialization agents and relationships with
privacy-protecting behaviors. Comput Human Behav 33 (April 2014), 153–162.
https://doi.org/10.1016/J.CHB.2014.01.009
[36]
Jessica L. Feuston, Alex S. Taylor, and Anne Marie Piper. 2020. Conformity of
Eating Disorders through Content Moderation. Proc ACM Hum Comput Interact
4, CSCW1 (May 2020). https://doi.org/10.1145/3392845
[37]
Shannon Finnegan. 2019. How Facebook Beat the Children’s Online Privacy
Protection Act: A Look into the Continued Ineffectiveness of COPPA and How
to Hold Social Media Sites Accountable in the Future. https://heinonline.org/
HOL/Page?handle=hein.journals/shlr50&id=838&div=&collection=
[38]
Jeremy Gan. 2023. YouTube reportedly dominates competition as top social media
platform for children. https://www.dexerto.com/youtube/youtube-reportedly-
dominates-competition- as-top- social-media-platform-for-children- 2264857/
[39]
GDPR. 2018. General Data Protection Regulation (GDPR). https://gdpr-info.eu/
[40]
Arup Kumar Ghosh, Karla Badillo-Urquiola, Shion Guha, Joseph J. Laviola, and
Pamela J. Wisniewski. 2018. Safety vs. surveillance: What children have to
say about mobile apps for parental control. In Conference on Human Factors in
Computing Systems - Proceedings. https://doi.org/10.1145/3173574.3173698
[41]
Cami Goray and Sarita Schoenebeck. 2022. Youths’ Perceptions of Data Collection
in Online Advertising and Social Media. Proc ACM Hum Comput Interact 6,
CSCW2 (November 2022). https://doi.org/10.1145/3555576
[42]
Robert Gorwa, Reuben Binns, and Christian Katzenbach. 2020. Algorithmic
content moderation: Technical and political challenges in the automation of
platform governance. Big Data Soc 7, 1 (January 2020). https://doi.org/10.1177/
2053951719897945
[43]
Greg Guest, Emily Namey, and Mario Chen. 2020. A simple method to assess and
report thematic saturation in qualitative research. PLoS One 15, 5 (May 2020).
https://doi.org/10.1371/JOURNAL.PONE.0232076
[44]
Oliver L. Haimson, Daniel Delmonaco, Peipei Nie, and Andrea Wegner. 2021.
Disproportionate Removals and Differing Content Moderation Experiences for
Conservative, Transgender, and Black Social Media Users: Marginalization and
Moderation Gray Areas. In Proc ACM Hum Comput Interact, Vol. 5. https:
//doi.org/10.1145/3479610
[45]
Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017.
Deceiving Google’s Perspective API Built for Detecting Toxic Comments. https:
//arxiv.org/abs/1702.08138v1
[46]
Isil Oygur Ilhan, Yunan Chen, and Daniel A. Epstein. 2023. Co-designing for the
Co-Use of Child-Owned Wearables. In Proceedings of IDC 2023 - 22nd Annual ACM
Interaction Design and Children Conference: Rediscovering Childhood. 603–607.
https://doi.org/10.1145/3585088.3593868
[47]
Shagun Jhaver, Darren Scott Appling, Eric Gilbert, and Amy Bruckman. 2019.
“Did you suspect the post would be removed?”: Understanding user reactions to
content removals on reddit. Proc ACM Hum Comput Interact 3, CSCW (November
2019), 1–33. https://doi.org/10.1145/3359294
[48]
Shagun Jhaver, Iris Birman, Eric Gilbert, and Amy Bruckman. 2019. Human-
machine collaboration for content regulation: The case of reddit automoderator.
ACM Transactions on Computer-Human Interaction 26, 5 (July 2019), 1–35. https:
//doi.org/10.1145/3338243
[49]
Shagun Jhaver, Quan Ze Chen, Detlef Knauss, and Amy Zhang. 2022. Designing
Word Filter Tools for Creator-led Comment Moderation. In Proceedings of the
2022 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.
1145/3491102.3517505
[50]
Jialun Aaron Jiang, Charles Kiene, Skyler Middler, Jed R Brubaker, and Casey
Fiesler. 2019. Moderation Challenges in Voice-Based Online Communities on
Discord. Proc. ACM Hum.-Comput. Interact. 3, CSCW (November 2019). https:
//doi.org/10.1145/3359157
[51]
Garrett Johnson, Tesary Lin, James C. Cooper, and Liang Zhong. 2024. COP-
PAcalypse? The Youtube Settlement’s Impact on Kids Content. SSRN Electronic
Journal (March 2024). https://doi.org/10.2139/SSRN.4430334
[52]
Anna Kawakami, Venkatesh Sivaraman, Logan Stapleton, Hao Fei Cheng, Adam
Perer, Zhiwei Steven Wu, Haiyi Zhu, and Kenneth Holstein. 2022. “Why Do I
Care What’s Similar?” Probing Challenges in AI-Assisted Child Welfare Decision-
Making through Worker-AI Interface Design Concepts. In DIS 2022 - Proceedings
of the 2022 ACM Designing Interactive Systems Conference: Digital Wellbeing.
454–470. https://doi.org/10.1145/3532106.3533556
[53]
Sara Kingsley, Proteeti Sinha, Clara Wang, Motahhare Eslami, and Jason I. Hong.
2022. “Give Everybody a Little Bit More Equity”: Content Creator Perspectives
and Responses to the Algorithmic Demonetization of Content Associated with
Disadvantaged Groups. Proc ACM Hum Comput Interact 6, CSCW2 (November
2022). https://doi.org/10.1145/3555149
[54]
Susanne Kirchner, Dawn K. Sakaguchi-Tang, Rebecca Michelson, Sean A. Munson,
and Julie A. Kientz. 2020. "This just felt to me like the right thing to do": Decision-
Making Experiences of Parents of Young Children. In DIS 2020 - Proceedings of
the 2020 ACM Designing Interactive Systems Conference. 489–503. https://doi.org/
10.1145/3357236.3395466
[55]
Priya Kumar, Shalmali Milind Naik, Utkarsha Ramesh Devkar, Marshini Chetty,
Tamara L. Clegg, and Jessica Vitak. 2017. “No Telling Passcodes Out Because
They’re Private.”. Proc ACM Hum Comput Interact 1, CSCW (December 2017).
https://doi.org/10.1145/3134699
[56]
Priya Kumar and Sarita Schoenebeck. 2015. The modern day baby book: Enacting
good mothering and stewarding privacy on facebook. In CSCW 2015 - Proceedings
of the 2015 ACM International Conference on Computer-Supported Cooperative
Work and Social Computing. 1302–1312. https://doi.org/10.1145/2675133.2675149
[57]
Cli Lampe and Paul Resnick. 2004. Slash(dot) and Burn: Distributed Moderation
in a Large Online Conversation Space. In Proceedings of the 2004 conference on
Human factors in computing systems - CHI ’04. ACM Press, New York, New York,
USA.
[58]
Tianshi Li, Elizabeth Louie, Laura Dabbish, and Jason I. Hong. 2021. How Develop-
ers Talk About Personal Data and What It Means for User Privacy. Proc ACM Hum
Comput Interact 4, CSCW3 ( January 2021), 1–28. https://doi.org/10.1145/3432919
[59]
Sonia Livingstone, Mariya Stoilova, and Rishita Nandagiri. 2019. Children’s
data and privacy online: growing up in a digital age: an evidence review. http:
//www.lse.ac.uk/my-privacy-uk
[60]
Renkai Ma and Yubo Kou. 2021. “How advertiser-friendly is my video?”: YouTu-
ber’s Socioeconomic Interactions with Algorithmic Content Moderation. PACM
on Human Computer Interaction 5, CSCW2 (2021), 1–26. https://doi.org/10.1145/
3479573
[61]
Renkai Ma and Yubo Kou. 2022. “I am not a YouTuber who can make whatever
video I want. I have to keep appeasing algorithms”: Bureaucracy of Creator
Moderation on YouTube. https://doi.org/10.1145/3500868.3559445
[62]
Renkai Ma and Yubo Kou. 2022. "I'm not sure what difference is between their
content and mine, other than the person itself”: A Study of Fairness Perception
of Content Moderation on YouTube. Proc ACM Hum Comput Interact 6, CSCW2
(2022), 28. https://doi.org/10.1145/3555150
[63]
Emily McReynolds, Sarah Hubbard, Timothy Lau, Aditya Saraf, Maya Cakmak,
and Franziska Roesner. 2017. Toys that listen: A study of parents, children, and
internet-connected toys. In Conference on Human Factors in Computing Systems -
Proceedings. 5197–5207. https://doi.org/10.1145/3025453.3025735
[64]
Kathryn C. Montgomery, Jeff Chester, and Tijana Milosevic. 2017. Children's
Privacy in the Big Data Era: Research Opportunities. Pediatrics 140 (November
2017), S117–S121. https://doi.org/10.1542/PEDS.2016- 1758O
[65]
Carol Moser, Tianying Chen, and Sarita Y. Schoenebeck. 2017. Parents’ and
children’s preferences about parents sharing about children on social media. In
Conference on Human Factors in Computing Systems - Proceedings. 5221–5225.
https://doi.org/10.1145/3025453.3025587
[66]
Helen Nissenbaum. 2004. Privacy as Contextual Integrity. Washington Law Review
79 (2004). https://heinonline.org/HOL/Page?handle=hein.journals/washlr79&
id=129&div=16&collection=journals
[67]
Anna O’Donnell. 2020. Why the VPPA and COPPA Are Outdated:
How Netflix, YouTube, and Disney Can Monitor Your Family at No Real
Cost. https://heinonline.org/HOL/Page?handle=hein.journals/geolr55&id=471&
div=&collection=
[68]
Gwenn Schurgin O'Keeffe, Kathleen Clarke-Pearson, Deborah Ann Mulligan,
Tanya Remer Altmann, Ari Brown, Dimitri A. Christakis, Holly Lee Falik, David L.
Hill, Marjorie J. Hogan, Alanna Estin Levine, and Kathleen G. Nelson. 2011. The
Impact of Social Media on Children, Adolescents, and Families. Pediatrics 127, 4
(April 2011), 800–804. https://doi.org/10.1542/PEDS.2011- 0054
[69]
Luci Pangrazio and Neil Selwyn. 2018. “It’s Not Like It’s Life or Death or What-
ever”: Young People’s Understandings of Social Media Data. Social Media and So-
ciety 4, 3 (July 2018). https://doi.org/10.1177/2056305118787808/ASSET/IMAGES/
LARGE/10.1177_2056305118787808-FIG1.JPEG
[70]
Kostantinos Papadamou, Antonis Papasavva, Savvas Zannettou, Jeremy Black-
burn, Nicolas Kourtellis, Ilias Leontiadis, Gianluca Stringhini, and Michael Siri-
vianos. 2020. Disturbed YouTube for Kids: Characterizing and Detecting In-
appropriate Videos Targeting Young Children. In Proceedings of the Interna-
tional AAAI Conference on Web and Social Media, Vol. 14. 522–533. https:
//doi.org/10.1609/ICWSM.V14I1.7320
[71]
Jessica A. Pater, Moon K. Kim, Elizabeth D. Mynatt, and Casey Fiesler. 2016.
Characterizations of online harassment: Comparing policies across social media
platforms. In Proceedings of the International ACM SIGGROUP Conference on
Supporting Group Work. 369–374. https://doi.org/10.1145/2957276.2957297
[72]
Lara Schibelsky Godoy Piccolo, Pinelopi Troullinou, and Harith Alani. 2021.
Chatbots to Support Children in Coping with Online Threats: Socio-technical
Requirements. In DIS 2021 - Proceedings of the 2021 ACM Designing Interactive
Systems Conference: Nowhere and Everywhere. 1504–1517. https://doi.org/10.
1145/3461778.3462114
[73]
Dongxiao Qin. 2016. Positionality. The Wiley Blackwell Encyclopedia of Gender
and Sexuality Studies (April 2016), 1–2. https://doi.org/10.1002/9781118663219.
WBEGSS619
[74]
Irwin Reyes, Primal Wijesekera, Joel Reardon, Amit Elazari Bar On, Abbas
Razaghpanah, Narseo Vallina-Rodriguez, and Serge Egelman. 2018. “Won’t
Somebody Think of the Children?” Examining COPPA Compliance at Scale.
In The 18th Privacy Enhancing Technologies Symposium (PETS 2018). 63–83.
https://doi.org/10.1515/popets-2018- 0021
[75]
Ivan Rivera. 2019. CRAN - Package RedditExtractoR. https://cran.r-project.org/
web/packages/RedditExtractoR/index.html
[76]
Sarah T. Roberts. 2019. Behind the Screen: content moderation in the shadows of
social media.
[77]
Barrie Sander. 2019. Freedom of Expression in the Age of Online Platforms: The
Promise and Pitfalls of a Human Rights-Based Approach to Content Moderation.
Fordham Int Law J (2019).
[78]
Morgan Klaus Scheuerman, Jacob M. Paul, and Jed R. Brubaker. 2019. How
computers see gender: An evaluation of gender classification in commercial
facial analysis and image labeling services. Proc ACM Hum Comput Interact 3,
CSCW (November 2019), 33. https://doi.org/10.1145/3359246
[79]
Joseph Seering, Robert Kraut, and Laura Dabbish. 2017. Shaping Pro and Anti-
Social Behavior on Twitch Through Moderation and Example-Setting. In Pro-
ceedings of the 2017 ACM Conference on Computer Supported Cooperative Work
and Social Computing (CSCW ’17). Association for Computing Machinery, New
York, NY, USA, 111–125. https://doi.org/10.1145/2998181.2998277
[80]
Peter M. Senge. 1990. The Fifth Discipline: The art and practice of the learning
organization. Broadway Business. https://books.google.com/books/about/The_
Fifth_Discipline.html?id=wg9DG42quXEC
[81]
Kumar Bhargav Srinivasan, Cristian Danescu-Niculescu-Mizil, Lillian Lee, and
Chenhao Tan. 2019. Content removal as a moderation strategy: Compliance
and other outcomes in the changemyview community. Proc ACM Hum Comput
Interact 3, CSCW (November 2019), 163. https://doi.org/10.1145/3359265
[82]
Kaiwen Sun and Carlo Sugatan. 2021. They see you’re a girl if you pick a
pink robot with a skirt: A qualitative study of how children conceptualize data
processing and digital privacy risks. In Conference on Human Factors in Computing
Systems - Proceedings. https://doi.org/10.1145/3411764.3445333
[83]
TikTok. 2019. TikTok for Younger Users. https://newsroom.tiktok.com/en-
us/tiktok-for- younger-users
[84]
Milo Z. Trujillo, Samuel F. Rosenblatt, Guillermo de Anda-Jáuregui, Emily Moog, Briane Paul V. Samson, Laurent Hébert-Dufresne, and Allison M. Roth. 2021. When the Echo Chamber Shatters: Examining the Use of Community-Specific
Language Post-Subreddit Ban. https://doi.org/10.48550/arxiv.2106.16207
[85]
Kristen Vaccaro, Christian Sandvig, and Karrie Karahalios. 2020. “At the End
of the Day Facebook Does What It Wants”: How Users Experience Contesting
Algorithmic Content Moderation. In Proceedings of the ACM on Human-Computer
Interaction. 1–22. https://doi.org/10.1145/3415238
[86]
Kristen Vaccaro, Ziang Xiao, Kevin Hamilton, and Karrie Karahalios. 2021. Con-
testability For Content Moderation. Proc ACM Hum Comput Interact 5, CSCW2
(October 2021), 28. https://doi.org/10.1145/3476059
[87]
Heather Wilson. 2020. YouTube Is Unsafe for Children: YouTube’s Safeguards
and the Current Legal Framework Are Inadequate to Protect Children from
Disturbing Content. https://heinonline.org/HOL/Page?handle=hein.journals/
sjel10&id=237&div=&collection=
[88]
Richard Ashby Wilson and Molly K. Land. 2020. Hate Speech on
Social Media: Content Moderation in Context. Conn Law Rev 52
(2020). https://heinonline.org/HOL/Page?handle=hein.journals/conlr52&id=
1056&div=28&collection=journals
[89]
Pamela Wisniewski, Heng Xu, Mary Beth Rosson, and John M. Carroll. 2017.
Parents just don’t understand: Why teens don’t talk to parents about their online
risk experiences. In Proceedings of the ACM Conference on Computer Supported
Cooperative Work. 523–540. https://doi.org/10.1145/2998181.2998236
[90]
YouTube. 2023. Age-restricted content. https://support.google.com/youtube/
answer/2802167?hl=en
[91]
YouTube. 2023. Determining if your content is “made for kids.”. https://support.
google.com/youtube/answer/9528076?hl=en
[92]
YouTube. 2023. Frequently asked questions about “made for kids.”.
https://support.google.com/youtube/answer/9684541?hl=en#zippy=%2Chow-
do-i- know-if-my-content-is- not-made- for-kids
[93]
YouTube. 2023. Navigate YouTube Studio. https://support.google.com/youtube/
answer/7548152?hl=en
[94]
YouTube. 2023. Set your channel or video’s audience. https:
//support.google.com/youtube/answer/9527654?hl=en&ref_topic=9689353&
sjid=16427619472020172874-NA#
[95]
YouTube. 2023. Your YouTube content and Restricted Mode. https://support.
google.com/youtube/answer/7354993?hl=en
[96]
YouTube. 2023. YouTube Community Guidelines & Policies. https://www.
youtube.com/howyoutubeworks/policies/community-guidelines/
[97]
Leah Zhang-Kennedy, Christine Mekhail, Sonia Chiasson, and Yomna Abde-
laziz. 2016. From nosy little brothers to stranger-danger: Children and par-
ents’ perception of mobile threats. In Proceedings of IDC 2016 - The 15th In-
ternational Conference on Interaction Design and Children. 388–399. https:
//doi.org/10.1145/2930674.2930716
[98]
Jun Zhao, Ge Wang, Carys Dally, Petr Slovak, Julian Edbrooke-Childs, Max
Van Kleek, and Nigel Shadbolt. 2019. ‘I make up a silly name’: Understanding
children’s perception of privacy risks online. In Conference on Human Factors in
Computing Systems - Proceedings. https://doi.org/10.1145/3290605.3300336