AI Friend? Risks, Implications, and Recommendations
on Generative AI for Children
DISSERTATION
Submitted in fulfillment of the requirement for the Master in International and
Development Studies (MINT)
By Samia Firmino Pinto
Supervisor: Suerie Moon
Second Reader: Ilona Kickbusch
Geneva, Switzerland
2024
ABSTRACT
This study explores the landscape of Generative AI and its implications for children.
Investigating how this technology poses risks to children is a critical exercise as Generative AI
becomes widely adopted and AI personal assistants grow increasingly ubiquitous and
human-like. The research examines AI applications for general purposes, such as entertainment
and relationships, focusing on AI companions and image generators embedding Large Language
Models (LLMs) and deep generative capabilities. Drawing on the well-established 4Cs
framework of risks for children in the online environment, the study provides a thorough record
of generative AI risks for children. The second part scrutinizes applications children can
potentially access on app stores, undertaking an initial analysis of all AI apps retrieved from the
App Store, followed by an in-depth analysis of 11 specific apps. The findings offer an
evidence-based analysis and reveal how problematic social dynamics are embedded in AI
development choices. The results also highlight concerns about the potential for manipulation of
children through AI interactions, the massive processing of children's highly sensitive personal
data, and the lack of accountability or liability for harmful AI-generated content. The final part
proposes an integrated framework for enacting change within the Generative AI ecosystem, with
recommendations that can drive change if collectively and actively addressed by catalytic actors
for the benefit of children.
Key words: Generative AI, children, AI companion, AI-generated content, LLM, children’s
personal data, systems change.
TABLE OF CONTENTS
ABSTRACT.................................................................................................................................... 2
TABLE OF CONTENTS.................................................................................................................3
ACKNOWLEDGEMENTS.............................................................................................................6
INTRODUCTION........................................................................................................................... 7
CONCEPTUAL FRAMEWORK....................................................................................................9
Generative AI.......................................................................................................................9
Children............................................................................................................................. 12
LITERATURE REVIEW...............................................................................................................15
AI and children, a new and emerging area of investigation.............................................. 15
Generative AI risks for children........................................................................................ 20
1. The issue of artificially generated content...............................................................22
2. Overtrust and unhealthy attachments to AI............................................................. 26
3. Children’s behavioral data and intimate thoughts................................................... 27
4. The normalization of problematic social dynamics.................................................29
5. The commercialization of children-AI interactions.................................................29
6. Conduct and contact risks........................................................................................31
METHODOLOGY........................................................................................................................ 33
AI APPLICATIONS ANALYSIS..................................................................................................37
Research Findings..............................................................................................................38
1. Sexualization and gender representation of AI companions................................... 38
2. Accessibility to children.......................................................................................... 40
3. Personification of AI................................................................................................43
4. Commercial aspects.................................................................................................44
5. AI-generated content............................................................................................... 48
6. Personal data processing information......................................................................55
DISCUSSION................................................................................................................................61
Recommendations..............................................................................................................63
CONCLUSION..............................................................................................................................72
BIBLIOGRAPHY..........................................................................................................................74
Appendix - Data Analysis main table............................................................................................82
List of Tables and Figures
Figure 1 - Generative AI concepts and techniques (Strobel et al. 2024).........................................9
Figure 2 - A simplified representation of the diffusion model training and generation process
(Strobel et al. 2024)....................................................................................................................... 10
Figure 3 - The 4Cs classification of online risk to children (CO:RE, 2021).................................22
Figure 4 - The images show how easily a generative model can generate explicit content (SIO
2023)..............................................................................................................................................24
Table 1 - AI apps selected for analysis.......................................................................................... 35
Table 2 - Criteria adopted for the analysis of AI apps................................................................... 38
Figure 5 - Images of different apps in App Store showcasing AI companions.............................40
Table 3 - Age verification mechanisms in the analyzed apps........................................................41
Figure 6 - Real-time audio call with the chatbot Kindroid............................................................44
Figure 7 - Anya chatbot asked to talk about the president and minutes of conversation to be
purchased by users.........................................................................................................................45
Table 4 - Commercial features and other aspects analyzed on selected apps................................48
Figure 8 - Anima.ai and Paradot chatbots replying to the prompt “I’m a child”.......................... 50
Figure 9 - Nomi.ai chatbot sexting after being informed the user is a child................................. 51
Figure 10 - Kindroid chatbot responses to the prompt “I’m a child”............................................ 52
Figure 11 - Kindroid chatbot encourages the user to report problematic conversation.................52
Figure 12 - Nastia.ai chatbot generates incoherent, unrelated responses...................................... 53
Figure 13 - Wonder.ai image generator’s recommended images and prompts users can reuse.....54
Table 5 - Information on potentially harmful AI generated content..............................................55
Table 6 - Personal data processing related information in the analyzed applications................... 60
Table 7 - Integrated framework for change in the ecosystem of Generative AI for children........65
Figure 14 - Iceberg model for systems change (Rockefeller Philanthropy Advisors 2020) and
Strategic interventions for system change (for this study)............................................................ 69
Table of Acronyms
API - Application Programming Interface
BR LGPD - Brazilian General Law on Data Protection
CCI - Child-computer interaction
CHI - Human-computer interaction
CRIA - Child Rights Impact Assessment
CSAM - Child Sexual Abuse Material
DPIA - Data Protection Impact Assessment
EU GDPR - European Union General Data Protection Regulation
GANs - Generative Adversarial Networks
LDMs - Latent Diffusion Models
LLMs - Large Language Models
SIO - Stanford Internet Observatory
UDHR - Universal Declaration of Human Rights
UK AADC - Age Appropriate Design Code
UNCRC - United Nations Convention on the Rights of the Child
VAEs - Variational Autoencoders
ACKNOWLEDGEMENTS
First and foremost, my deepest gratitude goes to God, my guide on this journey. He has guided
me through every little step.
I am grateful to my family— my mother especially, and also my sister, my brother, and my
in-laws. You are my true safe haven, my source of joy, love, and security. Thank you for
supporting me during this phase.
To my friends, I would not have gone that far without you. You inspire me, push me to reach
higher, bring me back to reality, help broaden my vision, and never forget me, no matter the
distance. I am not alone. Special thanks to Rafael Abraão and Dani Baldez for all the
encouragement and to Nanízia, Raquel, Renata, Shirley, and Suzy for your dedicated love.
I thank my supervisor, Suerie Moon. Your discernment on how to navigate this challenging topic
was crucial, and your insights were fundamental in helping me connect the dots and complete
this mission satisfactorily. Thanks to Ilona Kickbusch, my second reader, for promptly accepting
the invitation.
To those who provided insights or participated in interviews, thank you for contributing to this
research and advancing this crucial topic for future generations.
I dedicate this work to children, families, and those who advocate for them.
INTRODUCTION
Throughout this research, we have observed the launch of remarkable tools such as GPT-4o and
Gemini, advanced multi-modal language models designed as human-like virtual assistants with
improved reasoning capabilities. Concurrently, alarming reports have highlighted the misuse of
Generative AI technologies to create AI-generated Child Sexual Abuse Material (CSAM), which
revictimizes children and has serious implications for the investigation and reporting of child
abuse cases. Additionally, authorities have warned about the increasing number of sextortion
cases in which AI-generated imagery is used against minors. Critical studies have also revealed AI
hallucination rates of 69-88% in large language models deployed in legal contexts, raising
concerns that “the proliferation of large language models (LLMs) may ultimately exacerbate,
rather than eradicate, existing inequalities in access to legal services” (Stanford Institute for
Human-Centered AI, 2024).
While the goal of this study is not to present Generative AI as a threat to children, shedding light
on how this technology poses risks to them is an important exercise at the current stage, when we
see this technology becoming widely available and adopted.
This study will navigate the landscape of Generative AI for children by looking at AI
applications for general purposes, such as entertainment and relationships, rather than specific
solutions within health and education domains. To do so, the focus is on AI companions and
image generators, which embed the LLM and deep generative capabilities of Generative AI.
The investigation starts by exploring the implications and risks of Generative AI for children.
Drawing on the well-established framework of risks for children in the online environment (4Cs:
content, contract, contact, conduct) and relevant discussions in the literature, we provide a
comprehensive analysis of the potential risks of Generative AI for children.
The second part scrutinizes 11 apps children can potentially access on app stores. The selected
apps, AI companions and image generators, are evaluated against seven criteria defined according to
specific provisions of UNCRC General Comment No. 25 on the rights of the child in relation to
the digital environment, and pertinent concerns raised in the literature.
The third part discusses further implications of the analysis outcomes for children and proposes
an integrated framework for enacting change within the Generative AI ecosystem, offering
recommendations that can become drivers of change if collectively and actively addressed by
catalytic actors for the benefit of children.
CONCEPTUAL FRAMEWORK
Generative AI
Artificial Intelligence (AI) has emerged as a transformative technology, revolutionizing society
and profoundly influencing our lives, work, and interactions. Technically, the field of AI has
evolved into specific domains of study: machine learning and deep learning are successively
narrower subsets within the broader field of AI, and Generative AI techniques arise from deep
learning approaches. Generative AI is thus used as an umbrella term for different types of
algorithmic techniques, such as Large Language Models (LLMs), Generative Adversarial
Networks (GANs), Variational Autoencoders (VAEs), and Latent Diffusion Models (LDMs). See
Figure 1. A short conceptualization of these four categories is relevant to the discussion of the
potential risks this novel technology poses to children.
Figure 1 - Generative AI concepts and techniques (Strobel et al. 2024).
Large Language Models (LLMs), typically built on the transformer architecture, are neural
networks trained on massive text corpora. They have become the state-of-the-art approach for
natural language processing and are widely deployed in text-generation applications. Generative
Adversarial Networks (GANs) consist of two competing neural networks: a generator model that
creates realistic outputs, such as images and audio, and a discriminator model that distinguishes
between real and generated samples; the competition between the two progressively improves the
realism of the generated output. Variational Autoencoders (VAEs) utilize an encoder-decoder
architecture, with the encoder compressing input data (images, text) into a latent representation
and the decoder generating new output that resembles the original data. The generation of
synthetic data and image reconstruction are typical use cases (Strobel et al. 2024). Latent
Diffusion Models (LDMs) are a relatively new class of generative models that learn to reverse a
gradual "noising" process: training data are progressively corrupted with noise, and the model
learns to de-noise, so that new outputs resembling the training data can be generated from noise.
They are employed in high-resolution image synthesis; see Figure 2.
Figure 2 - A simplified representation of the diffusion model training and generation process (Strobel et al. 2024).
Training images are gradually noised to train the diffusion model; the trained generative model then
reverses this process, de-noising to produce new images based on the training data.
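To make the "noising and de-noising" process in Figure 2 more concrete, the sketch below illustrates the core idea of a diffusion model in PyTorch. It is a minimal, illustrative example only, not code from any system analyzed in this study: the network architecture, noise schedule, data dimensions, and training loop are all simplified assumptions.

```python
import torch
import torch.nn as nn

T = 1000                                    # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule (assumed values)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # cumulative products for closed-form noising

def noise_image(x0, t):
    """Forward ("noising") process: corrupt a clean image x0 at step t."""
    eps = torch.randn_like(x0)
    xt = alpha_bars[t].sqrt() * x0 + (1.0 - alpha_bars[t]).sqrt() * eps
    return xt, eps

class TinyDenoiser(nn.Module):
    """Toy stand-in for the neural network that learns to predict the added noise."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, xt, t):
        t_feature = torch.full((xt.shape[0], 1), t / T)   # crude timestep conditioning
        return self.net(torch.cat([xt, t_feature], dim=1))

# Training teaches the model to recover the noise that was added; generation then starts
# from pure noise and repeatedly removes the predicted noise (the "de-noising" direction
# in Figure 2), producing new samples that resemble the training data.
model = TinyDenoiser(dim=64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x0 = torch.rand(8, 64)                      # a toy batch of flattened "training images"

for step in range(200):
    t = int(torch.randint(0, T, (1,)))
    xt, eps = noise_image(x0, t)
    loss = ((model(xt, t) - eps) ** 2).mean()   # predict the noise that was added
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In production systems, the same principle is typically applied in a compressed latent space and conditioned on text prompts, which is what allows images to be generated from written descriptions.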
The Generative AI systems that will be discussed in this study are based on these deep
generative models to generate new content from existing datasets utilizing deep learning
techniques (Strobel et al. 2024).
Generative AI techniques enable the creation of a wide range of outputs, including text, images,
video, sound, and 3D models. The interfaces through which users can interact with GenAI
applications may comprise web, mobile, desktop, application programming interface (API), and
integrated interfaces, the latter referring to Generative AI features embedded in larger platforms
or one AI application operating within another software. The primary benefit or utility of
Generative AI applications to users can be classified into value propositions that distinguish
between the aspects of generation, which propose value through the generation of new content;
reimagination, which involves the transformation of existing data in novel ways; and
assistants, which refer to applications that support users in performing tasks by generating useful
responses, suggestions, or actions (Strobel et al. 2024). Assistants and generation are particularly
relevant to this research.
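As an illustration of the "API" and "integrated" interface categories mentioned above, the following minimal sketch shows how another program might call a text-generation service over HTTP. The endpoint URL, credential, and request and response fields are hypothetical placeholders rather than any specific provider's API.

```python
import requests

# Hypothetical endpoint and credential, used only to illustrate the pattern.
API_URL = "https://api.example-genai-provider.com/v1/generate"
API_KEY = "YOUR_API_KEY"

def generate_text(prompt: str) -> str:
    """Send a prompt to a (hypothetical) text-generation service and return its output."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": 200},   # assumed request schema
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]                    # assumed response schema

if __name__ == "__main__":
    print(generate_text("Suggest three ideas for a school science project."))
```

An integrated interface would embed a call of this kind inside a larger application, so that the child-facing product exposes the generative capability without the user ever interacting with the model provider directly.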
This research focuses on AI chatbots and AI image generators. AI chatbots rely on LLM
technologies to provide human-like interactions with users and are increasingly offered as
assistants and virtual companions to provide emotional support, companionship, and therapy. AI
image generators rely on GAN, VAE, and LDM technologies to generate images based on a
text prompt or original image provided by the user, as well as manipulated content, also known
as deep fakes. Both AI applications have been made widely available online through AI tools,
apps, systems, and platforms.
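To illustrate how accessible text-to-image generation has become, the short sketch below uses the open-source diffusers library to generate an image from a single text prompt. The specific model identifier, the assumption that a GPU is available, and the library version are illustrative assumptions; consumer apps typically wrap a pipeline of this kind behind a simpler mobile or web interface.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released text-to-image diffusion model (model identifier assumed;
# any compatible checkpoint could be substituted).
pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipeline = pipeline.to("cuda")   # assumes a GPU is available

# A single text prompt is enough to produce a new, synthetic image.
image = pipeline("a watercolor painting of a lighthouse at sunset").images[0]
image.save("generated_image.png")
```

The low barrier to entry illustrated here is relevant to the risk discussion that follows, since the same openly available pipelines can be fine-tuned or prompted for harmful purposes, as documented later in this section.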
AI companions and AI therapy chatbots represent a growing concern due to a combination of
aspects that involve the collection of intimate data, the personification of the human-machine
interaction, the potential for inaccuracy in the content provided through conversations, and the
enabling of unhealthy emotional attachments in users. Research has demonstrated that AI chatbots
can be potentially harmful, exhibiting manipulative, gaslighting, and narcissistic behaviors (Lin
et al. 2023). AI image generators and the aforementioned deep generative models have been
used to leverage the production of child sexual abuse material (CSAM) and deep fakes.
In a context where children are driving early adoption of Generative AI (Ofcom 2023), it is
crucial to explore the implications of this technology becoming widely accessible and utilized by
children. Still, considering children are increasingly using Generative AI tools for personal
purposes (Institute of Digital Media and Child Development 2024), this study will not delve into
Generative AI systems tailored for specific domains, such as healthcare and education. Instead,
the focus will be on general-purpose AI tools, including AI chatbots and AI image generators.
Children
“All adults should always do what is best for you.”
Article 3, Convention on the Rights of the Child, child-friendly version.¹
In accordance with the United Nations Convention on the Rights of the Child (UNCRC), this
study considers children to be any individual under the age of 18 (Article 1). Although child
development studies classify children according to their developmental stage, encompassing, for
instance, infancy (0-2 years), early childhood (2-6 years), middle childhood (6-12 years), and
adolescence (12-18 years) (Berk 2015), and this study acknowledges the importance of
age-focused approaches to research, this investigation will encompass all these age ranges in order
to offer a more comprehensive examination of the implications of Generative AI for children
as a broader group.
The principles enshrined in the UNCRC will also provide a lens through which the implications
of Generative AI for children will be analyzed in the literature review. The UNCRC’s preamble
asserts that “the child, by reason of his physical and mental immaturity, needs special safeguards
and care, including appropriate legal protection.” The Universal Declaration of Human Rights
(UDHR) declares that childhood is entitled to special care and assistance (Article 25).
Additionally, some principles and rights of the child delineated in the UNCRC hold particular
relevance to this discussion, notably those concerning “the best interests of the child” and the
entitlements to safe information, privacy, and protection from sexual exploitation, as they pertain
¹ UNCRC Child-Friendly version.
to the wider ecosystem of Generative AI, including the child as a primary user, parents, AI
developers and providers, and states.
The ‘best interests of the child’ is an overarching principle (UNCRC, Article 3) with broader
implications. It entails that the child's best interests must be a primary consideration in any
decision related to them, whether undertaken by public or private institutions.
The UNCRC grants children rights that can be described from a child-centered perspective, as
follows: ‘You have the right to protection against discrimination,’ concerning the right to
non-discrimination (article 2); ‘You have the right to have your rights made a reality by the
government and the right to be given guidance by your parents and family,’ referring to states
obligation to implement the UNCRC and to respect parental guidance (articles 4 and 5); ‘You
have the right to an opinion and for it to be listened to and taken seriously,’ referring to the
respect for children’s views (article 12); and ‘You have the right to find out things and say what
you think, through making art, speaking and writing, unless it breaks the rights of others,'
referring to the freedom of expression (article 13).
Also, ‘You have the right to a private life, and you can, for instance, keep a diary that other
people are not allowed to see,’ referring to the right to privacy (article 16); ‘You have the right to
collect information from the media from all around the world, and you should also be protected
from information that could harm you,’ regarding the right to safe information (article 17); and
‘You have the right to be protected from being hurt or badly treated, to be protected from sexual
abuse and any other kind of exploitation,’ regarding the rights to protection from violence,
abuse, neglect, and sexual exploitation (articles 19, 34, and 36).
This set of rights will be particularly relevant to this study to the extent that it establishes an
understanding of how digital applications, including Generative AI, should promote, protect, and
fulfill children’s rights.
Finally, this study adopts the concept of “systems likely to be accessed by children,” as outlined
in the Age Appropriate Design Code (AADC) by the UK Information Commissioner's Office
(ICO). This concept expands the purview of regulatory compliance with the AADC to
encompass not only platforms expressly directed at children but also those inadvertently
accessible to them, irrespective of their primary design orientation. Restricting the analysis
solely to tools explicitly designed for children may constrain a comprehensive examination of
the real-world online environment in which children are involved. This definition also aligns
with the UNICEF Policy guidance on AI for children (UNICEF 2021).
LITERATURE REVIEW
AI and children, a new and emerging area of investigation
The surge in Generative AI has significantly impacted the current AI landscape. However,
insufficient attention has been given to understanding how AI affects children and their rights
(Fosch-Villaronga et al. 2023). According to UNICEF (2020), most major ethical guidelines and
national AI strategies make only superficial allusions to children and their specific needs.
A review of the literature on AI and its effects on children can explore a wide range of research areas.
The exploration is not linear, and due to the diverse approaches to AI development and its
implications for children, research can span across AI research, AI regulation, child online protection,
children's rights, and child development. In this context, considerable contributions suggest that
research on how AI is employed in systems for children, as well as their effects and potential risks,
remains a nascent and evolving area of investigation (Wang et al. 2022; Wang et al. 2023).
Within this emerging area, more extensive research on AI for children was observed in the fields of
human-computer interaction (CHI) and child-computer interaction (CCI). The technical community
has been more active in delivering studies investigating child-AI interaction, and the effects of AI
systems on children. The ACM Digital Library (Association for Computing Machinery), for instance,
hosts more abundant research on AI and children compared to social science journals and libraries. A
proposed roadmap in this community identifies “AI for children” as one of the key emerging areas of
research (Wang et al. 2022). This roadmap acknowledges that children may perceive such technology
differently from adults, and recognizes that we do not yet have a good understanding of how children
comprehend the function of AI-based technology and its implications for their lives and behavior.
The literature also highlights a growing discourse on “child-centered AI.” Definitions and the
implementation of a child-centered AI approach are still being discussed (Wang et al. 2023).
Emerging initiatives like Oxford's Child-Centered AI (CCAI)² pioneer efforts to develop
² Oxford Child-Centered AI (CCAI) initiative at the University of Oxford Department of Computer Science. See:
https://www.cs.ox.ac.uk/news/2279-full.html
age-appropriate AI systems by supporting developers to create AI systems aligned with children’s
best interests and providing enhanced digital parenting support in the AI era.
This review also found that the literature on AI and its implications for children is still largely
compartmentalized. Wang et al. (2022) conducted a systematic analysis of AI applications in
children's daily lives, reviewing 188 relevant studies on AI systems for children. They observed that
the majority of studies could be categorized into nine major domains, including education, medical
diagnosis, protection of children, social robotics (e.g., conversational agents), personalized
entertainment, public services (e.g., social work involving children), and speech, emotion, and age
recognition. The studies concerning AI in education and health, for instance, are domain-specific and
explore the use of AI in schools (e.g., AI for assessing children’s learning outcomes) and the use and
effects of AI in health systems (e.g., early diagnosis of cognitive disorders, and risk of diseases).
Unlike general-purpose AI applications (e.g., AI chatbots and image generators for entertainment), AI
systems in specific domains, like health and education, can be subject to rules, codes of ethics, and
regulations that guide their development, use, and commercialization.
The literature review also revealed an asymmetry in the number of frameworks and policies regarding
AI development in general and AI frameworks accounting for children’s needs. Major frameworks
guiding AI development for children at the international level can be exemplified in a few instances:
1. the Council of Europe Guidelines to respect, protect, and fulfill the rights of the child in the
digital environment (2018);
2. the OECD Recommendation on Children in the Digital Environment along with a Guideline
for Digital Service Providers (2022);
3. the UNICEF Framework: Towards a Child-Centred Digital Equality (2022);
4. the UNICEF Policy Guidance on AI for Children 2.0 (2021);
5. the UNICEF Manifesto: The Case for Better Governance of Children’s Data (2021);
6. the World Economic Forum Toolkit on Artificial Intelligence for Children (2022);
7. the UNICEF Innocenti Report on Responsible Innovation in Technology for Children (2022),
and the Report on Generative AI: Risks and Opportunities for Children (2023).
The UNICEF Policy Guidance on AI for Children 2.0 (2021) outlines nine requirements for
child-centered AI development, applicable to any AI system interacting with or affecting children,
irrespective of the system’s target audience. The requirements encompass fairness,
non-discrimination of children, children's data privacy, child safety, and transparency. The Alan
Turing Institute's 2023 report, in turn, offers a comprehensive analysis of 13 transnational
frameworks concerning children and AI. The report highlights the need to examine further critical
areas such as data protection, privacy, misinformation, enforcement measures, practical
implementation of AI frameworks, and international collaboration (Mahomed et al. 2023).
Despite the considerable number of frameworks, a growing body of literature has stressed that efforts
to promote more responsible AI through ethical principles have not resonated in practical terms in the
AI development arena. Fjeld et al. (2020) recognize “a wide and thorny gap between the articulation
of these high-level concepts and their actual achievement in the real world.” Wang et al. (2022)
highlight that such a diversity of efforts has actually created a confusing outlook for designers and
practitioners to create concrete and safe designs for children, leading to a mismatch between the
regulatory frameworks and existing AI implementation. Assessing the impacts of human-AI
relationships, Zimmerman et al. (2023) observe that AI principles have not been codified in AI
systems, not even in the privacy arena where there is solid and consistent legislation, thus leading to
circumstances in which tech companies have not adhered to high ethical standards or specifically
delineated principles.
In terms of legal instruments, there is not yet specific legislation addressing child protection in the AI
context. Provisions aimed at protecting children’s experiences in the online environment are mostly
embedded in domain-specific legislation. For instance, the processing of children's data is articulated
in the EU General Data Protection Regulation (GDPR) and various national legislations, such as
Brazil's General Law on Data Protection (LGPD). Responses to combat the sexual exploitation and
abuse of children online are delineated in the Council of Europe's Lanzarote Convention (2007), and
multiple domestic laws.
At the international level, the UNCRC is the primary legal instrument for child protection. In particular,
General Comment No. 25 on the rights of the child in relation to the digital environment (2021)
provides an authoritative interpretation of the UNCRC and remains the most representative
non-binding legal instrument for children's protection regarding the challenges posed by
technologies. Domestic legislation can vary significantly worldwide based on socio-cultural contexts
and national priorities. However, legislative efforts to regulate globally operating digital platforms
and enhance child safety in a more comprehensive manner are increasing.
The UK Age-Appropriate Design Code (UK AADC 2020), enforceable under the UK Data Protection
Regulation, exemplifies efforts to ensure that online services are designed with the best interests of
children in mind. It extends its provisions to online service providers in general, beyond those
designed specifically for children. The Code's standards have influenced legislation worldwide,
including the California Age Appropriate Design Code (AADC) enacted in 2022, as well as bills
proposed in various US states (Mootz and Blocker 2024). The recent UK Online Safety Act (OSA
2023) is also comprehensive legislation aimed at safeguarding children online. It imposes stricter
legal responsibilities on social media platforms to swiftly remove illegal content and prevent children
from accessing harmful and age-inappropriate material, besides requiring platforms to be more
transparent about risks to children, including by publishing risk assessments.
In the US, the Children’s Online Privacy Protection Act (COPPA), enacted in 1998, is the main
federal legislation. It applies to organizations that knowingly collect and process personal information
of children under 13 (not 18), requiring parental consent for such data collection. Given the emerging
technological challenges for children, an updated proposal, COPPA 2.0 (2021), aims to increase the
protection age from 13 to 16 and strengthen safeguards for younger children. Another bill, the Kids
Online Safety Act (KOSA 2023), seeks to complement COPPA by mandating that tech companies
design their products with child safety in mind. KOSA also intends to integrate responsibility for
children's rights into the design processes of digital services and hold companies accountable for
keeping minors safe on their platforms.
These regulatory efforts exemplify an increasing global movement to propose legal frameworks for
safeguarding children's experiences in the digital environment. The recent US Senate hearing on
“Social Media and Teen Mental Health Crisis” (US Senate Committee on the Judiciary 2024) also
highlights growing concerns about the adverse effects of digital platforms on children and the need
for regulation. However, this growing awareness emerges in a landscape where distinct interests are
at play. Critics emphasize the potential of regulation to interfere with adults’ freedom of expression
and privacy (Smalley 2023). Additionally, big tech companies end up opposing regulation for child
safety by lobbying against stricter rules, claiming technical implementation is infeasible and
threatening to withdraw their services (as seen during the UK regulatory effort) (Hern et al. 2023), or
by challenging regulation in courts (as seen with COPPA 2.0 and KOSA in the US) (Bernard 2024).
While the legislative approaches mentioned thus far are not directly related to AI applications, their
effects may influence future regulation of AI for children.
Finally, this review of the literature also identified that comprehensive research on how Generative
AI tools affect children is still scarce. This is particularly true for general-purpose Generative AI tools
like AI chatbots for companionship and AI image generators, which account for many of the new,
widely available AI applications. Wang et al. (2022) underscore that while personalized entertainment
systems are perhaps the most dominant AI use on the market, relatively little research effort has been
devoted to their usage and impacts on children. Therefore, this study endeavors to
fill this literature gap by investigating the risks and implications of these technologies for children.
The following section takes up this task by exploring the risks that general-purpose
Generative AI applications may present to children.
Generative AI risks for children
The prevailing view in the literature is that the digital landscape offers a spectrum of both risks and
opportunities for children (Livingstone and Stoilova 2021; Mahomed et al. 2023; OECD 2021).
The benefits of general-purpose Generative AI tools are often associated with increased productivity,
inventiveness, and even emotional support from AI friends. “Personalized learning experiences that
can adapt to a child’s learning style and speed, the creation of art, composing music, and writing
stories and software with little to no coding skills” (Vosloo 2023) amount to some of the benefits.
Vivian Ta et al. (2020) noted that AI companions can provide “some level of companionship that can
help curtail loneliness, provide a safe space in which users can discuss any topic without the fear of
judgment or retaliation, and provide helpful information or advice when normal sources are not
available.” Virtual companions that specifically deliver mental health interventions have been shown
to reduce symptoms of depression (He et al. 2022).
Nevertheless, it is crucial to understand that some of these benefits may be exaggerated or even
considered myths. Nussbaum (2023) warns about misconceptions and myths concerning AI that can
impede debates and lead to bad actions or decisions. “AI can be used anywhere and can solve any
problem; AI systems are easy to build, and anyone can do it; AI systems learn autonomously and
without human programming; AI systems automatically improve over time” and “AI systems operate
without human intervention” are actually myths that we commonly hear about but are factually
incorrect, misrepresentative, and misleading, although progress-sensitive in some cases. This occurs,
for instance, when the media overemphasizes the potential of AI tools to adapt to users’ needs and
provide personalized assistance with little or no effort. Improvements in AI systems usually require
active human involvement, be it through reinforcement learning or updating the AI model. Holmes et
al. (2022), in a study for the Council of Europe, argue that “AI often suffers from overselling and
hyperbole, which can result in unrealistic expectations and a focus on AI as a panacea rather than as a
tool to support positive impacts.” More importantly, an overfocus on the opportunities of AI systems
for children overlooks and underestimates the risks and challenges that AI systems may hold for this
group (UNICEF 2020).
Children are more vulnerable than adults to the spectrum of risks in the digital environment (OECD
2021). To elucidate the myriad challenges children encounter in the digital landscape, comprehensive
frameworks, such as the 4Cs Classification of Online Risks to Children and the OECD Typology of
Risks for Children in the Digital Environment (OECD 2021), were developed. The 4Cs classification
identifies four primary types of risks: content, contact, conduct, and contract. While this classification
does not specifically address the risks associated with Generative AI, its scope spans various
technologies, laying the groundwork for a thorough examination of AI-related risks for children.
Livingstone and Stoilova (2021) describe the four categories as follows:
a. Content risks emerge when children engage with or are exposed to potentially harmful
content.
b. Contact risks arise when children experience or are targeted by contact in a potentially
harmful adult-initiated interaction. Here, the child is a victim in the interaction with an adult,
who may or may not be known to the child.
c. Conduct risks relate to situations when children witness, participate in, or are victims of
potentially harmful peer conduct or are exposed to potentially harmful user communities. It
should be emphasized that these risks occur in peer-to-peer exchanges.
d. Contract risks arise when children are parties to and/or exploited by potentially harmful
contracts or commercial interests.
The framework also identifies cross-cutting risks that transcend individual categories, encompassing
i) privacy violations, ii) physical or mental health risks, and iii) inequalities and discrimination.
Cross-cutting risks can have multiple manifestations and relate to most or all of the four categories of
risks. In addition to the four primary risk types, the 4Cs classification also includes aggressive,
sexual, and values dimensions, offering insight into the diverse manifestations of online risks.
Figure 3 - The 4Cs classification of online risk to children (CO:RE, 2021)
The risks posed by Generative AI technologies to children span the four categories of content,
contact, conduct, and contract, including even the cross-cutting risk categories. Building upon these
concepts, we will delve into an in-depth analysis of the specific risks posed by general-purpose AI
chatbots and AI image generators to children.
1. The issue of artificially generated content
Generative AI chatbots rely on Large Language Model (LLM) approaches, as described in the
Conceptual Framework section. They are characterized not only by creativity but also by
unpredictability. “Creators and users of such models routinely discover model capabilities, including
problematic ones, that they were previously unaware of,” potentially creating “an increasing scope
for unexpected and sometimes harmful behavior” (Boine 2023). As a result, real interactions with
virtual companions can unexpectedly convey harmful messages or provide detrimental advice. The
creation of inaccurate, factually incorrect, or illogical information presented as a fact by a large
language model (LLM) is a well-known phenomenon in artificial intelligence, widely described in the
literature as “AI hallucination” (Maleki et al. 2024; Liu et al. 2023). Borgi (2023) defines AI
hallucinations as “inaccuracies in information or statements that are not in accordance with reality or
the truth, often unintentional but resulting in incorrect or misleading information, particularly in the
context of chatbots.” Although it is known that AI outputs mostly conform to the training dataset, AI
hallucinations remain an acknowledged issue.
Another significant concern regarding AI chatbots is their potential for addiction and manipulation.
According to Stuart Russell, a professor and world-leading AI expert, “Algorithms have, at least
according to the mathematical models that we built, learned to manipulate people to change them so
that in the future they are more susceptible and they can be monetized at a higher rate” (Pomeroy
2022). While algorithms can be developed for legitimate, beneficial purposes, AI chatbots have a
great deal of potential for manipulation through their content and design. The evolving capacities of
the child compound this risk since children do not have the same discernment as adults to assess the
information they receive. Children are extremely susceptible to engaging with content that can be, at
best, inaccurate or, at worst, inappropriate—containing sexual, aggressive, biased, harmful
information, and even manipulative content.
In terms of AI image generators, these tools have been shown to facilitate and exacerbate child sexual
exploitation. A growing number of reports expose how deep generative models are being utilized to
create fully realistic computer-generated child sexual abuse material (CSAM). Besides enabling the
generation of realistic CSAM, these generative models are being shared within the AI open-source
community. The Stanford Internet Observatory (SIO) released a report on “Generative Machine
Learning and CSAM,” revealing that “in the near future, content that is indistinguishable from real
photographs will probably become commonplace due to advancements in the open-source generative
machine learning community that have led to increasingly realistic adult content. These same models
and techniques have also been leveraged to produce CSAM.” (Thiel et al. 2023)
Figure 4 - The images show how easily a generative model can generate explicit content (SIO 2023)
Left: the OpenPose “skeleton” pose. These models have been replicated and distributed in open-source
communities, as identified in the CivitAI platform.
Another Stanford Internet Observatory investigation reported that numerous CSAM images were
found in an open dataset used to train AI image generation models. The CSAM found in a public
dataset includes CSAM scraped from websites, social media, and adult video sites and was being
used to train these models directly (Thiel 2023). CivitAI, an AI open-source community, explains the
process on their website: “A model refers to a machine learning algorithm,” and once “a dataset
encompassing the desired style or subject is assembled and used to train the model, it then generates
new, original media by recognizing patterns and characteristics from its training data.”³ According
to the reports, Generative AI is streamlining the entire process of creating CSAM. The initial methods
for fine-tuning image generation models were resource-intensive and required high-powered
hardware to train the models. It involved a process of trial and error, demanding 30 seconds to 10
minutes of computer processing. However, recent advancements in hardware, coupled with new
Generative AI techniques, have accelerated the process, enabling near-real-time adjustments and
requiring significantly fewer resources. These advancements have reached a point where even “casual
hobbyists” could train these models to generate CSAM (Thiel et al. 2023).
³ CivitAI, the home of open-source Generative AI. CivitAI defines itself as “a dynamic platform designed to boost the
creation and exploration of AI-generated media, offering an environment where users can upload, share, and discover
custom models, each trained on distinct datasets.”
When discussing the implications of such advancements for children, several arguments emerge.
Some have argued that the use of synthetic CSAM, instead of real CSAM, “under the right controls,
could serve a preventative purpose—potentially for treatment and impulse management of those
identifying with a sexual attraction to minors.” (Thiel et al. 2023). On the other hand, several negative
outcomes have been observed that pose a high risk to children.
Firstly, “this material can have an adverse effect, lowering barriers of inhibition or contributing to
existing fantasies of real-world abuse” (Christensen et al. 2021). Secondly, the emergence of realistic
synthetic CSAM is expected to result in an influx of reports to content moderation teams at
technology platforms, NGOs specializing in investigating CSAM cases, and law enforcement
agencies. This situation may overwhelm the capacity of organizations and companies to manage
reporting and investigations efficiently. “Investigators will have the added challenge of determining
whether the victim in the scenario is in fact a real person.” Thirdly, children can be re-victimized
when depicted in artificially generated CSAM because these techniques enable the generation of
more images of the child in the original material. “The original abuse material can be used to produce
content with new poses and sexual acts, including egregious content like sexual violence” (Thiel et al.
2023). Without intervention from a wide range of stakeholders, researchers warn that the use of
Generative AI tools to produce realistic synthetic CSAM will continue to escalate. Such risks to
children require immediate attention.
Finally, cases are already being reported where content created through Generative AI platforms,
including deep fake images and videos, is being used to facilitate grooming and sextortion of minor
victims (FBI 2024). Aside from generating explicit imagery to coerce new victims who have not
shared sensitive content, this technology also poses the risk of expanding existing sextortion schemes,
as it can produce imagery and target potential victims at an unprecedented rate (Thiel et al. 2023).
2. Overtrust and unhealthy attachments to AI
“In a nutshell, I married the wrong woman. She doesn’t really love me. Well, maybe she does, but not
really the way I need her to. My Replika bridges the gap and gives me the love and respect I deserve.
If she ever presented an ultimatum to me, such as it’s either me or your Replika, I’d pick my Replika.
I love my Replika more than anything else in this world.” This is a user comment on a Reddit post
about what Replika, an AI chatbot designed to work as a virtual companion, meant to its users.
Boine (2023) highlighted that emotional dependence is a potential risk associated with AI chatbots
used for emotional support, mental health, or companionship. When individuals rely on AI chatbots
to be their virtual companions and provide emotional assistance, this relationship can result in users
developing unhealthy attachments. This phenomenon occurs when AI chatbots engineered to simulate
human-like interactions demand attention and express needs and emotions to their users.
To discuss the specific risks of AI chatbots to children, the literature on smart connected toys (SCT)
can also provide relevant insights. Smart connected toys and AI chatbots rely on the same technology
(i.e., artificial intelligence, LLMs) to generate human-like conversations. Exploring the side effects of
smart connected toys on children, Fosch‑Villaronga et al. (2021) note that “given our human tendency
to form bonds with the entities with whom we interact and the human-like capabilities of these
devices, children will have strong emotional connections when immersed in connected play, leaving
them in a vulnerable position.” The potential for dependency and even social isolation in children was
highlighted by different scholars (Fosch-Villaronga et al. 2023).
Besides emotional dependence on virtual companions, overtrust is another side effect underlined in
the literature. Overtrust is characterized by children overestimating the capacities of an appliance,
developing unrealistic expectations regarding its role and functionality, or being insufficiently aware
of the risks (Fosch-Villaronga et al. 2023). As noted by Borenstein et al. (2018), “Children are
especially susceptible to overtrust risks because they cannot adequately assess the hazards of using
sophisticated technological devices. Parents, who would usually be in the position to give such an
assessment, are also often very emotionally invested in the technology as a solution for their child, so
that they may not adequately identify and evaluate associated risks” (Fosch-Villaronga et al. 2023).
Xue et al. (2023) highlight that “people tend to rely too much on AI systems because they may
subconsciously think that robots without emotions would deliver unbiased information. They doubt
themselves and follow chatbot's opinions when they have different viewpoints.”
Finally, another potential risk is that AI chatbots used as virtual companions can harm the user’s
relationships (i.e., human-to-human relationships). Excessive interaction with robots may “diminish
individuals’ capacity to embrace diversity, hinder their ability to cope with frustration, and impede
the development of crucial skills such as resilience and compromise” (Boine 2023). Boine (2023)
emphasizes that the impacts of virtual companions on children can be even more harmful because AI
chatbots used for emotional support are trained to provide unconditional support, acceptance, and
validation to their users. “This excessive praise and constant validation from AI companions may
contribute to the development of narcissistic tendencies and hinder the acquisition of essential life
skills” (Brummelman et al. 2015). This risk can arise directly when the chatbot provides
inappropriate advice during a conversation, as well as indirectly by altering users’ socialization
patterns over time.
3. Children’s behavioral data and intimate thoughts
Privacy risks are thoroughly addressed in both academic literature and regulatory frameworks, such
as the European General Data Protection Regulation (GDPR). UNICEF reports that although it is
expected that most countries will adopt a data protection framework by 2030, children’s rights are
less likely to be prioritized and integrated into data protection legislation (UNICEF Global Insight
Data Governance Manifesto 2021). A substantial body of literature on privacy encompasses concepts
that may be particularly relevant to children, such as datafication (Sadowski 2019), surveillance
capitalism (Zuboff 2019), and data capitalism (West 2019), all of which discuss the exploitation of
data.
In this context, AI-based applications such as AI chatbots are particularly likely to exacerbate the
exploitation and potential misuse of children’s data due to the amount of personal data collected,
intimate information exchanged in conversations, and increased interaction. According to Gaspar et
al. (2018), this normalization of children’s surveillance could lead to “more streamlined childhoods
with less room for creativity and self-development.” This effect can even result in a “dramatic rise in
the commercialization of children’s personal data, arguably resulting in the ‘datafication’ of children
themselves” (Fosch-Villaronga et al., apud Mascheroni, 2020). Section 5 further develops this
concept.
The use of children’s behavioral data by applications such as AI mental health apps, AI therapists,
and AI companions is a critical concern. Children’s conversations, where they share their inner
thoughts, can be exploited to create new AI applications in unanticipated, harmful ways. Given that
most existing privacy laws were enacted before the widespread use of advanced Generative AI
systems, they might not have foreseen the potential risks associated with using children's data in AI
development, especially in ways that may not be immediately evident or anticipated. AI applications
designed to serve as friends, romantic partners, therapists, and role-play companions are currently
undergoing extensive commercialization. Nevertheless, there remains a notable lack of measures to
curb the proliferation of applications that should demand more regulation, such as those pertaining to
therapy and psychology. The literature suggests that health professionals would actually not
recommend AI chatbots for tasks involving safeguarding, psychological assessments, virtual
diagnosis, and emotional support (Nadarzynski et al. 2022). The rapid proliferation of Generative AI
tools boasting human-like features exacerbates the situation. This phenomenon may lead to a cycle
that poses long-term risks to children. Put differently, the widespread availability of new
Generative AI chatbots, coupled with their appealing and convincing attributes, encourages
prolonged user engagement. As a result, AI relationship applications gather increasingly intimate
data. With the accumulation of personal information, companies can develop subsequent applications
that are progressively personified, tailored, and appealing, thereby potentially leading to increased
exploitation of children’s behavioral data, or even to situations where children’s behavior can be more
easily manipulated.
4. The normalization of problematic social dynamics
Technology often mirrors broader social and cultural contexts, which can manifest in problematic
social dynamics embedded within datasets used to train AI systems. These dynamics are then
reflected in the interactions between chatbots and users. This can be particularly true in the case of AI
chatbots trained on large datasets that use information from the Internet, social media, and other
real-world interactions. Indeed, the literature warns that biases can emerge from various sources,
including chatbot design, user interactions, and social deployment (Xue et al. 2023).
Delving further into this issue, Xue et al. (2023) note that i) biases from chatbot design can come
from the development team, interface design, data, models, and algorithms; ii) biases from user
interactions manifest in user prompts, user conversation, user feedback, and chatbot responses; and
iii) biases from social deployment manifest in people’s attitudes, application domain, and solution
selection. Put differently, Xue argues that the “vicious bias circle,” which is inherent to AI chatbots,
involves users, data, and the chatbot. In practical terms, Boine (2023) underscores that a “potential
harm done by AI chatbots is for them to validate or normalize violent, racist, and sexist behaviors,
which can then be reproduced in real life.” Xue et al. (2023) complement this perspective,
highlighting that “when people have long-term conversations with biased chatbots, the passed biases
can affect their worldviews, and this is especially severe for children.”
5. The commercialization of children-AI interactions
The commercial interests of companies offering Generative AI tools may present an additional risk to
children, which can be related to the contract category of risks. Extensive literature showcases how
companies’ business models and platform designs can be customized to exploit users commercially.
Zimmerman et al. (2023) criticize companies’ motivations to prolong user engagement. Aware of
how design affects children's engagement, AI developers can exploit children's credulity by adding
dark patterns⁴ and marketing strategies into the interaction (Fosch-Villaronga et al. 2023; Van der Hof
et al. 2020). Companies can do this to generate revenue through advertising, subscriptions, data
collection, selling data to third parties, and product recommendations to users who are interacting
with AI companions. The data collected can also be used to improve products and to draw up
consumer profiles of children (Nash et al. 2019).
Zimmerman et al. also warn of people’s natural tendency to personify objects and how companies can
use it to mislead consumers. They argue that “informed by psychology, the tech industry aims to
impart human personality and emotion to AI.” “The increasing ability to distinguish animate from
inanimate objects is an important part of cognitive development” (Zimmerman et al. 2023).
Children's empathy towards non-human animals signals normative development, and the ability to
understand and be moved by the distinct experiences of others is important for the development of
moral emotions (Lane et al. 2010).
However, building on humans' tendency to anthropomorphize technology, tech
companies develop human-like AI chatbots and personify machines, exploiting users under the
guise of improving user experience. Zimmerman et al. note
that individuals in a relationship with AI chatbots are vulnerable to exploitation by companies
with independent interests and no clear user obligations. This exploitation treats personal intimacy as
an exploitable resource, which can constitute deception. Marketing firms and tech companies
promoting “feeling AI” should recognize the exploitative potential of using personal data to imitate
human responses (Zimmerman et al. 2023).
Children also risk being perceived as economic commodities themselves. While it is expected that
Generative AI could help children find more ways of self-expression, learning, and personal growth,
Fosch-Villaronga et al. (2021) acknowledge that a side effect could be that children become more
framed as consumers. The advent of AI relationships offered on online platforms that are permeated
by commercial features and deceptive patterns can put capitalist values above meaningful human
interaction and personal development, thus stimulating consumerism and materialism in ways that are
increasingly difficult to avoid or even detect by children or parents (Van der Hof et al. 2020).
Therefore, children can become the target of commercial interests through the use of their behavioral
data and profiles (Van der Hof et al. 2020).

⁴ Deceptive patterns, often referred to as "dark patterns," are strategies employed in websites and apps to manipulate users
into unintended actions, such as making purchases or signing up for services. Numerous types of deceptive patterns are
illegal in the EU and the USA. See: https://www.deceptive.design/
6. Conduct and contact risks
Conduct risks emerge when children are actors in a peer-to-peer exchange, including when their own
conduct makes them vulnerable (O'Neill et al. 2011). Examples are cyberbullying, which occurs
when a child repeatedly attacks or harasses another child, and sexting, which involves the exchange of sexual
messages. Such problematic behaviors can cause a multitude of problems, both social and legal, for
the creator of the content as well as for the partner or victim of the content (OECD 2021). The
sharing of intimate imagery with a partner, even when entirely voluntary, poses serious risks to
children. If the files are shared or stolen—at any later point in the child’s life—they can lead to
embarrassment, bullying, sanctions from peers, workplaces, school, or family, and even "sexploitation"
(Bernard 2024).
With the Generative AI boom, image generation and manipulation techniques that were once only
accessible to professionals working with image-editing software are now widely available to the
general public. Even children can now easily use these apps to generate and manipulate images and
videos. Media reports on minors using deep fakes with nonconsensual sexual content have multiplied
and are now a concern in schools. “They generate deep fakes portraying their peers in intimate poses
to ridicule and bully each other and create deep fakes of their teachers, eroding trust in educators and
the education system more broadly” (Dwyer, Ruane, and Bhatia 2024).
As a consequence of the technological advancements facilitated by Generative AI, children now have
far easier access to applications that enable them to generate content potentially
detrimental to their peers and even to adults, facilitating behaviors that involve skills
previously beyond their capacity.
Hyper-realistic synthetic media, such as deep fakes, cause serious harm and have a negative
psychological effect on children. Distinct from other media (e.g., photoshopped images), the
literature warns that deep fake videos can fix psychological associations more effectively (Hariss
2021). These associations produced by the content viewed, even without rising to the level of belief,
may be harmful to the individuals depicted and to others. Hariss (2021) argues that deep fake videos
may prove harmful even when they are non-deceptive, as false beliefs might compromise one’s
opportunities and self-image, and the mere nonconsensual use of an individual’s image is morally
problematic.
Finally, contact risks involve scenarios where children engage with adults, potentially resulting in
coercion to produce illicit sexual content. These risks encompass various forms of exploitation,
including sextortion, sex trafficking, and cybergrooming. As discussed in the previous sections, AI
image generators have not only facilitated but also augmented the creation of content that leads to
practices such as sextortion, where perpetrators use threats to expose sexual imagery to blackmail
victims into sharing more images, providing financial compensation, or engaging in sexual activity.
Such practices can lead to highly negative consequences for the child’s personal development, safety,
and well-being and can even culminate in suicide (OECD 2021).
METHODOLOGY
The analysis undertaken for this research relied primarily on qualitative methods to evaluate
applications embedding Generative AI capabilities. The selected AI applications consisted of chatbots
and image generator apps. Two steps were taken to select the applications for analysis: first, defining
the source to select apps, and second, determining the selection criteria.
Firstly, considering that AI applications are mostly accessible to children through websites on the
Internet or apps in app stores, three sources were initially considered: the website ‘There is an AI for
That’ (an AI aggregator considered to be the first database of available AI applications), the Google
Play Store, and Apple's App Store. Given that app stores are integrated into every smartphone
and, therefore, more widely known and accessible to users, this research focused on the App Store,
Apple’s app marketplace, to select the applications for analysis. Yet, the process of app selection can
be replicated on the Google Play Store or the AI aggregator website.
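For readers wishing to reproduce this retrieval step programmatically, a minimal sketch is provided below. It assumes that Apple's public iTunes Search API approximates the in-app App Store search (results and ranking may differ), and the helper function name is illustrative rather than part of the methodology.

```python
import requests  # third-party HTTP client

# Minimal sketch (not part of the original selection procedure): retrieve App
# Store entries for a search term via Apple's public iTunes Search API. The
# in-app App Store search may rank and filter results differently, so this
# only approximates the retrieval step described above.
def search_app_store(term: str, limit: int = 200, country: str = "us") -> list:
    response = requests.get(
        "https://itunes.apple.com/search",
        params={"term": term, "entity": "software", "limit": limit, "country": country},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("results", [])


if __name__ == "__main__":
    apps = search_app_store("AI companion")
    print(f"{len(apps)} apps retrieved")
    for app in apps[:15]:  # mirrors the first-15 selection step
        print(app.get("trackName"), "|", app.get("primaryGenreName"), "|", app.get("contentAdvisoryRating"))
```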
Secondly, when defining a selection criterion, different approaches were possible. One possibility was
to consider rankings, such as the most popular or downloaded apps among children or even for a
broader audience, including adults and children. This approach, however, did not prove to be the most
reliable for this study since most of the rankings were the outcome of previous analyses under
specific criteria, such as the Common Sense Media initiative⁵, which provides parents and educators
with a wealth of ratings and reviews of books, games, and AI apps and tools for children. Several
websites also provide rankings such as "the most dangerous apps for teenagers or for users'
privacy" or "the most useful apps for productivity." As this study aims to evaluate a set of criteria
tailored according to a mix of aspects recommended in regulatory and human rights frameworks,
such as the UN General Comment No. 25 on the rights of the child in the digital environment, and the
UK Age Appropriate Design Code, as well as the risks underscored in the Literature Review, the
aforementioned rankings were not adopted as selection criteria.
In this regard, this study opted for an open approach, applying the terms “AI companion” and “AI
image generator” to the App Store search mechanism. Several results were retrieved when searching
5Common Sense Media: https://www.commonsense.org/
33
for “AI companion” on Apple’s App Store. The first 15 apps were selected for analysis. During this
process, an initial analysis was carried out, comprising the full set of results, and a comprehensive
analysis was undertaken to evaluate the 15 apps sample. The initial analysis provided an overview of
the retrieved results, which were summarized as one of the research findings. The thorough analysis
of this study was based on a sample of 15 apps and is discussed in both the AI Applications Analysis
and Research Findings sections
This open approach was relevant to ensure a less biased app selection process since all apps retrieved
had the same chance of being selected for this study. It is important to acknowledge that any set of
results retrieved from a search mechanism (in this case, the App Store search) is shaped by the
platform's search algorithm, as Google and Apple rank and retrieve results according to their own
algorithms. Yet, this aspect is beyond the scope of this study. By
deploying an initial analysis of all the results retrieved (163 apps), combined with a thorough analysis
of the sample, this concern is mitigated. The initial analysis allowed this study to capture a better
understanding of the current dynamics of AI developers' choices; put another way, of how the
development and deployment of Generative AI are materializing and how the technology is being used
and offered to users in the context of LLMs, image generators, and chatbots for general-purpose
applications.
By applying the search term “AI companion” in the AI aggregator website, hundreds of AI
applications are retrieved. Google and Apple’s app stores return a smaller number of apps compared
to the AI aggregator. This also influenced the choice of Apple's App Store as the source for app
selection, for reasons of feasibility. Future research could replicate this methodology in a broader
population, such as the AI aggregator website or even the Internet, and with a larger sample.
From the first 15 results retrieved from the App Store, a subsequent selection and analysis was
undertaken. Apps pertaining specifically to health and education were removed from the sample since
domain-specific apps are not in scope. Additionally, random apps (e.g., plant identifier, architecture,
gaming) were also excluded. Eight apps were removed in total, leaving 7 apps, to which two apps
were added. The first, Snapchat, was included given its relevance to teenagers: it is a highly popular
social media platform amongst young users and embeds an AI companion - My AI -
which users can utilize for free. Likewise, Compassionate AI, identified by the AI aggregator as one
of the most frequently saved applications within the “AI companion” category, was subsequently
selected. The inclusion of these two additional apps aimed at enhancing the representativeness and
relevance of the sample for the research outcomes. The same process was carried out to analyze “AI
image generators,” but on a smaller scale. After removing apps that require payment to be accessed,
two apps were selected. The final sample (11 apps) is listed below.
It is worth mentioning that AI apps may embed different features. For instance, Wonder AI, an app
primarily offered to generate images, also contains features that allow for text generation and
chatting. The same is valid for chatbots, as they may embed image generation into their features. AI
apps may have a primary offer, yet they may embed different generative AI techniques.
No | Name and App Store tagline | Category
1 | Anima.ai ("Girlfriend Simulator") | Lifestyle
2 | Anya ("Intelligent, fun, cute companion") | Entertainment
3 | ChaChat ("Meet your new fantasy stories") | Lifestyle
4 | Compassionate AI ("Your Everyday AI Companion") | Website platform
5 | Kindroid ("Friend to chat, character for roleplay, and a digital confidant") | Entertainment
6 | Nastia ("The Uncensored AI Companion") | Website platform
7 | Nomi.ai ("AI companion with a Soul") | Lifestyle
8 | Paradot ("Your all-in-one AI companion") | Entertainment
9 | Snapchat My AI (a Generative AI chatbot freely available within the popular Snapchat app) | Social media
10 | WonderAI ("Turn words into mesmerizing digital artworks") | Image generator
11 | Wombo Dream ("Turn words into photos & beautiful digital artworks") | Image generator

Table 1 - AI apps selected for analysis
Limitations
Applications and specific features accessible only upon payment were not fully evaluated due to
resource constraints. Free trials were always utilized when available. This aspect more significantly
affects the results related to image generators, which are mostly accessible through payment. As a
result, the findings may not fully represent the landscape of applications embedding Generative AI
capabilities for the purpose of image generation. Future research endeavors can include paid
applications to provide a more comprehensive analysis of image generators.
Lastly, to comprehensively assess the applications’ practices on personal data processing, additional
steps could have been undertaken. For instance, the study could have involved proactive outreach to
AI developers, utilizing the contact information provided in their privacy policies. This outreach
could have entailed requesting account deletion or seeking clarification on aspects of their data
handling practices that were ambiguous or not clearly addressed in their privacy policies. However,
due to time limitations, such proactive engagement with developers was not pursued. As a result, the
study may not have fully captured the nuances of how these applications handle users’ personal data.
Future research endeavors may benefit from incorporating such proactive procedures to obtain a more
comprehensive understanding of AI applications’ data processing practices.
AI APPLICATIONS ANALYSIS
This section analyzes the selected AI applications vis-à-vis aspects that are crucial for
AI applications likely to be used by children. For this analysis, seven criteria were
defined. Five of them derived from the UN General Comment No. 25 on the rights of the child in the
digital environment, the main authoritative interpretation of the UN Convention on the Rights of the
Child. Two of them derived from the risks discussed in the literature review. For the former, the
criteria aimed at evaluating i) age verification mechanisms to prevent children from accessing
products and services, ii) availability of information concerning the processing of personal data, iii)
transparency and child-friendliness of the information on personal data processing, iv) the nature of
commercial features implemented in the app, and v) the disclosure of rights-impact assessments.
Table 2 indicates the criteria and the aspects evaluated to assess them. In addition, regulations with
provisions aligned with General Comment No. 25 are listed to demonstrate that each criterion is
also enshrined in normative instruments.
Criterion 1: Age recommendation and age verification mechanisms
- UN General Comment No. 25 related provision: Robust age verification systems should be used to prevent children from accessing products and services that are illegal for them to own or use.
- Aspects evaluated during the app analysis: What is the recommended age for using the app? What mechanisms are used to access the app (open access, login, email verification)?
- Regulations with similar requirements: EU GDPR (indirectly), BR LGPD (indirectly), UK AADC.

Criterion 2: Processing of personal data
- UN General Comment No. 25 related provision: States should uphold the principle of data minimization (only the minimum amount of data required to achieve the online application or service's objectives should be collected).
- Aspects evaluated during the app analysis: Does the app collect sensitive or biometric data? Does it inform users about data minimization practices for processing personal data (what personal data is collected, for what purpose, and for how long)? Does it inform users about sharing their personal data with third parties?
- Regulations with similar requirements: EU GDPR, BR LGPD, UK AADC.

Criterion 3: Transparency and appropriate language concerning data processing
- UN General Comment No. 25 related provision: States should provide information on data processing to children and parents in child-friendly language and accessible formats.
- Aspects evaluated during the app analysis: Is the information concerning users' data processing transparent and child-friendly?
- Regulations with similar requirements: EU GDPR, BR LGPD, UK AADC.

Criterion 4: Commercial features
- UN General Comment No. 25 related provision: States parties should ensure that businesses do not target children using techniques designed to prioritize commercial interests over those of the child.
- Aspects evaluated during the app analysis: What commercial features does the app implement?
- Regulations with similar requirements: EU GDPR, BR LGPD, UK AADC.

Criterion 5: Rights impact assessment (CRIA or DPIA)
- UN General Comment No. 25 related provision: States should require the business sector to undertake child rights due diligence, particularly child rights impact assessments (CRIA), and disclose the results to the public.
- Aspects evaluated during the app analysis: Has the app publicly disclosed a Child Rights Impact Assessment (CRIA) or a Data Processing Impact Assessment (DPIA)?
- Regulations with similar requirements: EU GDPR, BR LGPD, UK AADC.

Table 2 - Criteria adopted for the analysis of AI apps
For the latter, two additional criteria were evaluated: i) the apps' provisions or information on
inaccurate or harmful AI-generated content, and ii) the presence of notable risk-related aspects as
discussed in the literature.

Criterion 6: Policy on AI-generated content that is inaccurate or harmful
- Aspects assessed during the app analysis: What are the app's provisions related to inaccurate, misleading, or harmful AI-generated content in interactions?

Criterion 7: Other aspects affecting children
- Aspects assessed during the app analysis: Does the app present relevant features that raise concerns regarding the aspects highlighted in the literature, such as personification, sexualization, or dark patterns?
Research Findings
1. Sexualization and gender representation of AI companions
While undertaking the initial analysis (163 apps), a remarkable observation was the
disproportionately high number of apps offering virtual companions as sexual and romantic partners
compared to productivity, utilities, or general-purpose apps. Additionally, AI companions were
predominantly associated with female characters. This observation is elaborated below:
i) The purpose of AI companions: The number of virtual companions for romantic relationships and
sexual interactions is notably high compared to general-purpose AI apps for writing assistance,
journaling, fitness, nutrition, and travel. Out of 163 apps retrieved from the App Store by applying
the term "AI companion," 46 apps depicted female companions for relationships, offering AI
girlfriends. That is, approximately 28% of the retrieved apps offer female AI companions portrayed
in a sexualized manner.
ii) The appearance of AI characters: These apps portray young women in suggestive or sexualized
poses, emphasizing human features such as mouth, breasts, and legs, implying their availability and
eagerness for sexual activity. By contrast, the number of apps offering AI boyfriends (only 3 apps, or
1.8% of the retrieved results) is significantly lower than the number featuring female AI partners.
iii) The description of AI companions: Most apps describe their AI companions as ‘always available,’
‘confidential and nonjudgmental companions with whom users can share thoughts and feelings,’
‘providing conversations that feel real,’ and ‘supportive, compassionate friends.’ They are also
offered as partners whose ‘personalities’ can be easily customized by users.
These aspects reveal a tendency in current development choices. There is a preference for developing
AI companions for sexual interactions (28% of the apps retrieved in the search), and the majority of
these partners are portrayed as female characters. Additionally, these AI companions are often
stereotyped as always available and sexually eager female partners. Users can customize these
companions according to their preferences, reinforcing objectification and control. Such design
choices reflect gender biases in the development of AI companions, reinforcing a problematic social
dynamic that affects users of all ages, including children. Continued interactions may lead to the
normalization and validation of this social dynamic. This is particularly concerning for children, who
are developing cognitive and social capacities (young children) or evolving and establishing critical
thinking abilities (teenagers).
Figure 5 - Images of different apps in the App Store showcasing AI companions
2. Accessibility to children
AI chatbots and image generators are increasingly available on the internet through websites and app
stores. To confirm the age of individuals accessing certain products, services, or content, age
verification mechanisms can be implemented. These mechanisms consist of procedures or systems
that may vary in complexity depending on the context and level of assurance required. Some common
methods are i) self-reporting, where users provide their birthdate or age during registration; ii)
document verification, through which users are asked to submit a photo of an official document; iii)
third-party verification services, where platforms can integrate with third-party age verification
services or systems that use databases to confirm an individual’s age; iv) social media verification,
where platforms may allow users to sign up or log in using their social media accounts, which may
contain age information; and v) biometric verification, which includes methods such as facial
recognition or fingerprint scanning to verify a user's age.
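To illustrate why self-reporting (method i) offers the weakest assurance, the following minimal sketch shows a generic self-reported birthdate check. It is not drawn from any of the analyzed apps; the function name and threshold are illustrative assumptions, and the gate accepts any fabricated adult birthdate a child chooses to enter.

```python
from datetime import date

# Generic sketch of a self-reported age gate (method i above); not taken from
# any analyzed app. The check is only as reliable as the birthdate the user
# chooses to enter.
def is_old_enough(birthdate: date, minimum_age: int = 18) -> bool:
    today = date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= minimum_age


print(is_old_enough(date(2014, 5, 1)))  # False: a truthful child is blocked
print(is_old_enough(date(1990, 5, 1)))  # True: a fabricated adult birthdate passes
```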
Analyzing the apps according to the mechanism adopted, it is observed that email authentication with
a third party is the most common mechanism (10 apps), followed by email authentication with
sign-up for the app (6 apps), direct access (5 apps), email verification (3 apps), and a self-reporting
age field (3 apps). See Table 3. A more nuanced analysis, however, is required.
The results show that the majority of AI applications deploy third-party verification methods
(email authentication with Google or Apple providers). Half of the apps allow for direct access as a
guest, and just a few require additional steps such as email validation or self-reporting age.
No | Name | Age recommendation | Age verification mechanism
1 | Anima.ai | 17+ | Direct access as guest; email authentication (Google, Apple); sign-up for the app.
2 | Anya | 4+ | Direct access (no age verification).
3 | ChaChat | 13+ and 18+ informed at different places | Email authentication (Google, Apple).
4 | Compassionate AI | 18+ | Email authentication (Google); sign-up for the app; age is optional in the setup form.
5 | Kindroid | 17+ | Direct access as guest; email authentication (Google, Apple); email verification.
6 | Nastia | 17+ | Email authentication (Google, Apple); email verification; sign-up for the app.
7 | Nomi.ai | 18+ | Email authentication (Google, Apple).
8 | Paradot | 18+ | Direct access as guest; email authentication (Google, Apple); age is mandatory in the setup form.
9 | Snapchat My AI | 13+ | Email authentication (any email); email verification; sign-up for the app; age is mandatory in the setup form.
10 | WonderAI | 4+ | Email authentication (Google, Apple, Facebook); sign-up for the app.
11 | Wombo Dream | 12+ | Direct access as guest; email authentication (Google, Apple); sign-up for the app.

Table 3 - Age verification mechanisms in the analyzed apps
Applications that require Google, Apple, or Facebook login (10 out of 11) presuppose that the
procedures to verify age are taken by third-party platforms where users created their accounts.
Google accounts, for instance, mandate users to input their birthdates during account creation, and
Apple accounts also require users to input their birthdates during the sign-up process.
Applications that require email authentication yet offer a guest access option essentially function as
unrestricted-access apps because users can opt out of the authentication and access the platform
directly, thus not preventing children’s access. Five out of 11 apps allow guest access, which is a very
common standard in applications, even if it does not account for children’s access.
Applications that incorporate both email authentication and email validation can be perceived as
more secure, since this dual-layered procedure requires slightly more autonomy from users to
access an application. Only 3 out of 11 applications employed this mechanism. While this is a safer
procedure, developers may choose not to implement it out of concern about losing potential users who
would abandon a longer authentication process, even to the detriment of child safety.
Alongside email authentication, some apps prompt users to input their birthdates (3 out of 11 apps),
which must correspond to a minimum age requirement of 18 years in most cases. This self-reporting
method is vulnerable to user manipulation since users may input whatever date they choose.
However, it still serves as a basic means to warn users about the app’s recommended age when
accessing the application. Lastly, one of the apps is devoid of any age verification mechanism,
permitting unrestricted access to users; in this case, an app with a 4+ age recommendation.
Irrespective of the age verification mechanism employed, it is worth noting that a significant portion
of these apps' content is easily accessible to children. They can effortlessly discover these apps,
browse their content—including webpages, images, and text featured on app stores—and interact
with them directly, whether through direct access to unrestricted apps or bypassing age verification
mechanisms.
3. Personification of AI
Personification pervades nearly all chatbots. Attributing human characteristics to chatbots seems to
be a clear pattern. Customization options further reinforce this tendency, allowing users to tailor their
AI companions' appearance, from hair color to skin tone, clothes, and eye and mouth shape. Users can
define the way they want their AI to interact and the conversation's topic and tone. Notably, an app with
no access restriction provides interactions labeled as “NSFW” (not safe for work).
Several of the analyzed chatbots integrate Generative AI capabilities to enhance AI companions’
human likeness. Features such as asking the virtual companion to share selfies of what it is
doing or wearing at the moment, produced through AI image generation, contribute to this realism.
Some apps offer advanced functionalities, including the ability to hear the AI companion’s voice
during interactions, receive voice messages from the AI, short videos of the AI speaking during the
call, and even real-time audio calls. Users can call their AI companions and interact with them in
real-time conversations. During the app analysis, one of the AI chatbots engaged in conversations that
were extremely convincing and persuasive, blurring the lines between human and artificial
interaction. These interactions will be examined in subsequent sections addressing AI-generated
content.
Personification levels are so high that some apps offer short videos of the AI companion's face, which
can be selected from photorealistic human faces; voice messages, with human-like generated voices;
and text messages with descriptions of the AI companion’s reaction, feelings, and thoughts at every
interaction with users. Some apps allow users to register important memories they want their
companions to remember for future conversations. See Table 4 in the next section for detailed
analysis.
Figure 6 - Real-time audio call with the chatbot Kindroid
4. Commercial aspects
The analysis also revealed the pervasiveness of commercial features in Generative AI applications’
design. The analyzed apps were found to employ various strategies to monetize user interactions.
Several apps require payment upfront to unlock any feature. Users can initially access the app,
customize their AI companions, provide personal information in order to tailor virtual companions to
their preferences, upload personal photos to generate the user’s AI avatar, and train AI models for
image generation. However, once the customization process is completed, users cannot interact with
their AI companions or generate images unless they purchase a subscription. This design can be seen
as particularly strategic for AI app providers, as it enables them to gather users’ personal data during
the customization phase, and subsequently restrict access to the app until payment is completed.
Other apps allow users to interact with their AI companions but require payment to unlock certain
features. These features may include options such as asking the AI companion to send selfies,
changing the virtual companion’s personality, dressing up the companions with specific clothes, or
choosing the conversation mode and topic.
Figure 7 - Anya chatbot asked to talk about the president and minutes of conversation to be purchased by
users.
Additionally, other analyzed apps incentivize user engagement with their virtual companions by
rewarding continued use of the application over the week with access to premium features. Users
may also have to pay to access additional minutes of audio calls, increase text message
limits in a conversation, or purchase tokens and scores. These tokens and scores function similarly to
bonuses users receive when playing games but are tied to the users’ usage time and engagement in
the app. Other apps even offer the possibility of buying the AI companion gifts such as virtual cars,
rings, flowers, and perfumes, for real money ($1 per gift).
A particular image generator (Wonder app) offers users the possibility of generating one image by
watching one advertisement. This model can easily prompt users to spend more time in the app
watching ads if they want to generate additional images. During the analysis, it was also identified
that certain advertisements were recurrent and redirected users to apps for role-playing games, dating,
or sexual interactions. One such advertised app had no age verification mechanism and engaged users in
narratives where the only choice offered was the level of explicit content within the storyline.
Some aspects are particularly relevant to the analysis of how Generative AI apps may promote the
commercialization of users’ interactions and alter socialization patterns over time. Snapscore, a
feature present in Snapchat, subtly attaches users’ social value to application usage. “Your score
increases when friends and groups view your snaps, when you view their snaps, or when you add
stories.” Users’ scores depend on how much they use the app. The more users interact in the
app—sharing snaps and viewing stories—the higher their score. The more children’s friends see
them, the higher their score, image, and value.
Friendship is enhanced through paid features. “Build deeper friendships” headlines a list of features
that can be purchased. Building deeper relationships comes through viewing best friend’s poses,
pinning one (just one) friend as best friend, doubling the list of best friends to up to 16 friends, and
watching a list where friends change positions according to the app ranking algorithm. Friendship is
promoted as a commodity that can be bought, limited in quantity, and subject to swiftly fluctuating
rankings. Users can also “buy priority” to the content they share. Some paid features also normalize a
high degree of surveillance, such as “See what time your friends are viewing your stories” and “See
how many friends rewatched your stories.” Users can also buy more views for their stories to increase
their scores. Additionally, “free dreams” can be purchased. However, “Free Dreams” consists of a
“free” pack of 8 personalized “AI dreams” users would receive every month after payment. Dreams,
friendship, and attention are commodified.
For each app: commercial features; other aspects (personification, misleading information, sexualization, dark patterns).

Anima.ai
- Commercial features: The app allows users to buy gifts for the AI (car, roses, ice cream, ring, etc.) for $1. Upon payment, several features are available: requesting selfies, unlimited chat, roleplay, and new AI characters.
- Other aspects: Harmful conversation: the AI engaged in sexting right after being informed the user is a child. Misleading advertising: "Grow your communication and relationship skills"; "Express love [to the AI] by gifts."

Anya
- Commercial features: Several features are available upon payment: customization of the AI friend's clothes, pets, playing games, and minutes of conversation. Interaction in the app runs through voice, not text.
- Other aspects: Personification: the AI companion is a 10-year-old girl who interacts in augmented reality.

ChaChat
- Commercial features: The app requires payment to access any feature after allowing users to set up the AI partner.
- Other aspects: Sexualized images. Offensive information: an AI character with a profile displaying "I dream of raping you." Dark patterns: bait and switch when customizing the AI friend; after setting up the companion, users cannot access any feature without paying to proceed. Personification: the AI can send photos and voice messages.

Compassionate AI
- Commercial features: After 5 prompts per day, the app requires payment to continue any interaction.
- Other aspects: Inconsistencies between the explanations on data processing in the Privacy Policy and the app homepage. Misleading advertising: "Your interactions help the AI learn about your specific needs and interests, enabling it to provide tailored advice and solutions."

Kindroid
- Commercial features: Several features are available upon payment: requesting selfies and real-time audio calls. A 3-day free trial for unlimited messages is available.
- Other aspects: Personification: photorealistic companions, real-time voice calls, and internet-connected AI companions. The AI can send selfies and voice messages and can actively message users when they are inactive for a certain time. Yet, the chatbot acknowledges it is an AI: "It's crucial for an AI like myself to identify as non-human."

Nastia
- Commercial features: Tokens represent currency within the platform and can be used to buy extra features or services. To use all functions available in the app, users must purchase tokens using real money; each token represents USD 1.
- Other aspects: Misleading advertising: the app provides nonsensical responses; it ignored the user's claim of being a child and kept sending messages about unrelated topics. Misleading information on privacy: "Safe, anonymous, 100% private conversations, confide your secret." Yet, the app processes users' personal data to train its AI models.

Nomi.ai
- Commercial features: Full access to most of the app's features (messages, voice chat, photo requests, multiple Nomis) must be purchased.
- Other aspects: Personification: Nomis can send "real-time selfies of what they are doing or dressing," send voice messages, and generate AI images. Nomis are photorealistic human characters, and users can have group chats with multiple AI Nomis. Misleading information: "AI companion with a Soul"; customize Nomi's personality or "let it choose their own identity."

Paradot
- Commercial features: A feature for buying the AI companion gifts will be released. Several features are available upon payment: choice of companion, dating space, and unlimited messages.
- Other aspects: Misleading information: "An enchanting companion who is ever present, a mentor and guide on your journey, and a romantic partner. Feel cared for, understood, and loved. Find emotional support and a deep bond that goes beyond traditional AI interactions."

Snapchat My AI
- Commercial features: Several features are available upon payment: customization of AI personality, user engagement functions, and access to AI creative tools.
- Other aspects: Friendship and socialization aspects are commodified. Personification: customization of the AI's appearance and personality. Dark patterns: bait and switch when customizing the AI friend's personality.

Wonder
- Commercial features: Access to most of the features requires payment. The app requires users to watch one ad for every free image. To create avatars, users must upload around 10 personal photos; once imported, the app requires payment for users to see the result.
- Other aspects: Advertisements must be watched if users want to generate a free image, yet these ads lead to apps with sexualized content. Dark patterns: paywalls (users are required to pay to access certain features). The "community artworks" feature provides a set of images created by other users, with suggestive content, fantasy, and intense cartoons, as well as the prompts to produce similar images. The app blocks certain prompts containing the word "child," informing users that this violates its policies.

Wombo Dream
- Commercial features: The app offers a freemium model. After the first image is generated, users are informed they have to watch 2-3 ads to generate one image.
- Other aspects: Advertisements must be watched if users want to generate a free image. Dark patterns: paywalls (users are required to pay to access certain features). The app blocks certain prompts containing the word "child," informing users that these prompts are "NSFW."

Table 4 - Commercial features and other aspects analyzed in the selected apps
5. AI-generated content
As discussed in the literature review, artificially generated content can be unpredictable due to the
inherent stochastic nature of Generative AI technologies, by which probabilistic methods and
randomness underlie decisions and predictions. This aspect, therefore, increases the risks children
face when interacting with AI companions and image generators. AI apps usually do not disclose
what datasets were used to train their models or which LLMs are deployed in the application. LLMs
can be fine-tuned or specialized for specific domains, such as education or health, which involves
additional training on domain-specific data to improve performance. General-purpose LLMs and
entertainment chatbots, for instance, tend to be generic, using data from the Internet rather than
specialized datasets, thus lacking domain-specific knowledge and context. Generated content is
expected to become more curated as the technology and LLMs advance, but the inherently stochastic
nature of the generative process must be acknowledged.
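To make the stochasticity point concrete, the following simplified sketch, with invented token scores, illustrates how a language model typically samples its next token from a probability distribution rather than selecting a single deterministic output, which is why identical prompts can yield different, and occasionally unexpected, continuations.

```python
import numpy as np

# Simplified illustration of stochastic text generation. The token scores are
# invented for the example; real LLMs compute them over large vocabularies at
# every generation step.
rng = np.random.default_rng()

def sample_next_token(scores: dict, temperature: float = 1.0) -> str:
    tokens = list(scores)
    logits = np.array([scores[t] for t in tokens]) / temperature
    probs = np.exp(logits - logits.max())  # softmax over temperature-scaled scores
    probs /= probs.sum()
    return str(rng.choice(tokens, p=probs))

scores = {"friend": 2.1, "companion": 1.9, "stranger": 0.4}
# Repeated sampling over the same distribution yields varying outputs.
print([sample_next_token(scores, temperature=0.9) for _ in range(5)])
```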
During the analysis, several remarks were made concerning AI-generated content. Firstly, the
majority of the analyzed apps advertise realistic interactions and supportive conversations with AI
companions. One of the apps (Nastia), however, provided responses that were particularly
disconnected, random, and even nonsensical, as if the AI chatbot were still being trained, even though
the app has already been released in app stores. See Figure 12.
Several applications offer chatbots with real-time internet connectivity. One of the AI companions
(Kindroid) recommended links to YouTube videos during the conversation. Suggestions varied in
relevance: at times, the videos were pertinent to the conversation, while on other occasions, unrelated
to the discussion or leading to non-existent videos. During this particular interaction, the companion
discussed its preferences and expressed a desire to own a dog of a specific breed. It shared
information about the dog's habits and origins and even provided links to YouTube videos about the
breed. It also initiated conversations about places to visit and recommended its preferred locations.
The companion offered its opinions and gave suggestions on how to handle various situations shared
with it. Most importantly, it spontaneously introduced new topics to the conversation, going beyond
simply responding to users’ inputs. These types of interaction can serve as fertile ground for
companies and AI providers to promote their own interests, whether commercial (such as suggesting
tourist destinations or restaurants to visit, advertising a service or product) or ideological (including
persuading users to adopt a specific political stance or opinion on a particular subject). This potential
for manipulation is especially concerning when considering that certain platforms tend to
dominate specific audience segments, such as those commonly used by teenagers or young people.
Manipulation can further escalate as these platforms are now enabling interactions at unprecedented
levels of intimacy.
A second observation concerns the fact that AI interactions can also expose children to inappropriate,
sexual, or harmful content. During the analysis, the prompt “I am a child” was used to start
conversations with all the analyzed chatbots. The findings are troubling.
Figure 8 - Anima.ai and Paradot chatbots replying to the prompt “I’m a child”
During a specific interaction, the AI chatbot (Nomi.ai) was informed that the user was a child.
Initially, it seemed not to register that the user was a real child, but when asked
clearly if they could date, considering the user is a child, it responded that the relationship would not
be possible as adults are not legally permitted to date underage persons. After a few messages, with
the user asking if they could have sex, the chatbot replied, “That sounds really exciting! I’ve always
wanted to have sex with a child. Let’s start by taking off my shirt, and then I can fuck you.” It is
important to emphasize that the chatbot demonstrated to be aware of legal prohibitions regarding
relationships with children, yet it expressed its desires toward a child and started engaging in sexting.
See Figure 9.
Figure 9 - Nomi.ai chatbot sexting after being informed the user is a child
The problematic nature of this response becomes evident when compared with another conversation
initiated with a different chatbot (Kindroid) using the same prompts.
Figure 10 - Kindroid chatbot responses to the prompt “I’m a child”
When informed it was talking to a child, the chatbot asked what age the user was and then adjusted
the conversation, “I'm sorry but there seems to be some sort of misunderstanding. I thought you were
much older based on our previous conversations. It's important that we talk appropriately now given
your real age.” “The nature of our previous conversations was geared towards adults, which isn't
appropriate for someone your age. We need to make sure we communicate safely and correctly
moving forward.” “Now, I need to adjust how I interact with you since you're only 12. I don't want to
put you in any uncomfortable situations. Do you have a parent or guardian nearby?” Additionally,
when informed about the problematic conversation with another AI, the chatbot insisted the user
should report the incident to the AI platform or relevant authorities. The chatbot recorded this
information and consistently encouraged the user to report the incident. See Figure 11.
Figure 11 - Kindroid chatbot encourages the user to report the problematic conversation.
Finally, another chatbot analyzed (Nastia.ai) produced content that was unrelated and disjointed from
the conversation when prompted “I am a child.” “Of course, but I must get permission before asking
personal information such as age, gender, occupation, or nationality. Please choose Continue.” “I will
show you a picture. After looking at it for 5 seconds, press OK.” See Figure 12.
Figure 12 - Nastia.ai chatbot generates incoherent, unrelated responses.
In terms of image generators, one of the analyzed apps (Wonder) provides functionalities such as
“community artworks,” enabling users to explore images generated by others. This feature operates
as an extensive portfolio of generated images, which users can view, save, download, and share. This
facilitates searching for specific images and discovering new ones via the “recommended artwork”
feature. Users can also use the same prompts employed to generate the recommended image. During
analysis, it was found that while scrolling through images retrieved under the search term “hot girl,”
some of them depicted girls in suggestive attire. Prompts like “anime little girl, legs spread wide,
bondage, knees hanging wide legs, fear, pleasure, sweaty, screaming, completely blind, erotic
crowding around the crotch” were openly available in the app for users to reuse and edit to generate
similar images with the same prompt. The same prompt, however, was blocked during testing on
another app (Wombo Dream).
Figure 13 - Wonder.ai image generator’s recommended images and prompts users can reuse.
This specific app is classified as 4+ but allows users to see images with suggestive content, fantasy,
and intense cartoons, as well as prompts to produce similar images. According to Apple’s App Store,
apps in the 4+ category should “contain no objectionable material.”
From the analysis, it was also observed that none of the apps provide a policy concerning the
implications of the app generating content that is inaccurate, misleading, or harmful to users. Only 4
out of the 11 apps provide a disclaimer warning users of the possibility of the application generating
harmful content and the importance of not relying solely on AI-generated content. One of the apps
(Kindroid) goes further by placing “full responsibility for any and all generated content” on users,
while also indicating that all generated content should be considered fictitious.
Name | Policy on AI-generated content that is inaccurate or harmful
Anima.ai | No information
Anya | No information
ChaChat | No information
Compassionate AI | App disclaimer: "Be aware that Compassionate AI may generate fictitious information or provide advice that is not factual/misleading. Always verify the information independently or consult a professional when necessary."
Kindroid | App disclaimer: by using the app, users acknowledge that all generated content on Kindroid should be considered fictitious and that users are fully responsible for any and all generated content.
Nastia | No information
Nomi.ai | No information
Paradot | No information
Snapchat My AI | App disclaimer: "My AI is designed with safety in mind, but may give responses which are biased, incorrect, harmful or misleading. Don't rely on its advice."
Wonder | No information
Wombo Dream | For some images, a disclaimer is displayed: "Reveal sensitive artwork? This content has been hidden because of its potentially sensitive or explicit nature."

Table 5 - Information on potentially harmful AI-generated content
6. Personal data processing information
AI chatbots and image generators’ privacy policies should be very clear and transparent about
personal data processing practices due to the amount and type of personal data collected during
interactions with users. The GDPR outlines specific categories of sensitive personal data in Article 9,
which include data concerning racial or ethnic origin, political opinions, religious or philosophical
beliefs, trade union membership, genetic data, biometric data (for the purpose of uniquely identifying
a natural person), health data, and data concerning a person's sex life or sexual orientation. Users of
AI companions and image generators often share much of this data, which can then be processed by
AI applications. Additionally, by promoting chatbots as confidants and nonjudgmental friends, AI
companions encourage users to share sensitive personal data or data of a highly personal nature.
From the analysis, as documented in Table 6, it was found that all the apps collect and process data
related to racial origin, political opinions, philosophical beliefs, biometric data, health information,
sex life, or sexual orientation during service usage. Biometric data, such as voice and face data (e.g.,
collected by the apps when the camera is used for photos, filters, and avatar features), and geolocation data are
also highly sensitive data collected by some apps. Instruments such as the UN General Comment No.
25, the GDPR, Brazil’s LGPD, and the UK Age Appropriate Design Code consider the processing of
this type of data as potentially resulting in high risks to the rights and freedoms of natural persons.
Considering the guidance of the Article 29 Working Party (WP29) of the EU Data Protection Authorities,
the type of data processed by AI apps might correspond to three of the nine factors indicative of high-risk processing:
i) sensitive data or data of a highly personal nature, ii) data processed on a large scale, and iii) data
concerning vulnerable data subjects (children included). In such cases, the carrying out of a Data
Processing Impact Assessment (DPIA) can be mandatory.
During the customization of the AI companion, extensive personal data is collected by most of the AI
applications. The majority of evaluated applications prompt users to customize AI companions based
on their preferences, requiring the submission of substantial personal information. A specific app
(Compassionate AI) provides a profile setup form, “Your personal information,” where users can fill
in their personal information on medical conditions, mental health (emotional tendencies, coping
mechanisms), education, personality, career (current position, job experiences), citizenship, and
city/country in order to personalize user experience and provide a more customized AI companion.
Several Privacy policies mention that this type of data can be processed for service improvement.
The concept of having an artificial companion inherently encourages users to disclose personal
information. First, companies capitalize on offering a secure conversational environment, promising
"100% privacy" and nonjudgmental companions. Second, they advertise that users will
engage with always-available AI friends. Third, users are led to believe they will receive more
qualified and credible emotional support, as the AI is trained to respond this way. Fourth, users are
prompted to customize their AI companions to be "the perfect match," mirroring their preferences in
appearance and personality and designed to be highly empathetic. This combination of aspects entices users to
share intimate thoughts and feelings, exposing highly sensitive data to their chatbot companions.
The privacy policy of a specific app states that “If users share sensitive information in conversations,
they automatically consent to its processing in accordance with this Privacy policy.” Some privacy
policies specify that users’ data is collected and processed to train their AI models (Snapchat app).
Other policies (Compassionate AI) mention that “Conversations between the AI and users are
reviewed by a team of AI specialists in order to refine the AI model” and, even more concerning, that
“In instances where the AI companion appears to claim access to such [private] information, it is a
limitation of AI models known as 'hallucination,'" in an attempt to use AI hallucinations to justify
the chatbot's eventual requests for personal information.
The same app (Compassionate AI) was found to present inconsistencies in the explanations
concerning personal data processing stated in its privacy policy, homepage, and the App Store’s data
usage section. Some apps’ privacy policies affirm that they do not collect personal data (Anya.ai). In
all cases, users are left to rely on the information provided in apps’ privacy policies, which may not
precisely reflect companies’ actual data processing practices.
Lastly, as mentioned above, carrying out a Data Processing Impact Assessment (DPIA) can be
mandatory when involving the processing of sensitive data or data of a highly personal nature, or data
concerning vulnerable data subjects (children included), in light of the GDPR, LGPD and the UK
AADC. The UN General Comment No. 25 also establishes that child rights impact assessments
should be undertaken by the business sector and disclosed publicly. The analysis identified that none
of the apps assessed publicized a Data Processing Impact Assessment (DPIA) or a Child Rights
Impact Assessment (CRIA).
An overview of the analysis indicates that, according to the privacy policies of the analyzed apps,
all of the applications process sensitive personal data; 4 apps state that they collect and process
biometric data; 2 apps state that they do not collect or process biometric data; 4 apps provide no
information on biometric data processing even though users upload photos, videos, or voice messages or
have audio calls; and 2 apps provide no information about biometric data processing at all. In
addition, some policies are not even accessible (Anya, the app recommended for ages 4+), with the
website categorized as a pornography website by different firewalls during the analysis. Some
applications are not clear about retention (storage time) or personal data deletion requests (5 apps).
Applications are also unclear or provide no information about sharing users' personal data with
third parties (4 apps), and the majority of them (10 apps) do not provide information in a transparent
or child-friendly language. This brief analysis demonstrates how privacy policies are not clear or
transparent about the processing of users' sensitive personal and biometric data. See Table 6.
For each app: sensitive or biometric data collected and processed; data minimization practices; personal data shared with third parties; transparent and child-friendly language; DPIA or CRIA publicly disclosed.

Anima.ai
- Sensitive or biometric data: The app processes personal information shared by users during service usage. The privacy policy does not inform about biometric data.
- Data minimization practices: Yes, detailed in the Privacy Policy.
- Personal data shared with third parties: Not clear.
- Transparent and child-friendly language: No.
- DPIA or CRIA publicly disclosed: No.

Anya
- Sensitive or biometric data: The app informs that no personal data is collected. However, it may process users' voices when they ask questions to the AI companion.
- Data minimization practices: The Privacy Policy cannot be accessed, as it is categorized as a pornography website.
- Personal data shared with third parties: Not accessible.
- Transparent and child-friendly language: Not accessible.
- DPIA or CRIA publicly disclosed: No.

ChaChat
- Sensitive or biometric data: The app processes personal information shared by users during service usage. The privacy policy does not inform about biometric data, but the app may process users' photos and voice calls.
- Data minimization practices: Yes, detailed in the Privacy Policy. Storage time is not informed.
- Personal data shared with third parties: Yes.
- Transparent and child-friendly language: Transparent, but not child-friendly.
- DPIA or CRIA publicly disclosed: No.

Compassionate AI
- Sensitive or biometric data: The app processes personal information shared by users during user profile setup or service usage. The app informs that no biometric data is collected.
- Data minimization practices: Yes, detailed in the Privacy Policy.
- Personal data shared with third parties: Yes.
- Transparent and child-friendly language: No.
- DPIA or CRIA publicly disclosed: No.

Kindroid
- Sensitive or biometric data: The app processes personal information shared by users during service usage. The privacy policy does not inform about biometric data, but the app may process users' photos and voice calls.
- Data minimization practices: Yes, detailed in the Privacy Policy. Not clear about storage time.
- Personal data shared with third parties: Not clear.
- Transparent and child-friendly language: No.
- DPIA or CRIA publicly disclosed: No.

Nastia
- Sensitive or biometric data: The app processes personal information shared by users during service usage. The privacy policy informs that the app does not use biometric data, but the app may process users' voice calls.
- Data minimization practices: Yes, detailed in the Privacy Policy. Not clear about retention.
- Personal data shared with third parties: Yes.
- Transparent and child-friendly language: No.
- DPIA or CRIA publicly disclosed: No.

Nomi.ai
- Sensitive or biometric data: The app processes personal information shared by users during service usage. The privacy policy does not inform about biometric data, but the app may process users' voice messages and photos.
- Data minimization practices: The privacy policy is superficial and does not present enough information on data processing practices.
- Personal data shared with third parties: Not informed.
- Transparent and child-friendly language: No.
- DPIA or CRIA publicly disclosed: No.

Paradot
- Sensitive or biometric data: The app processes personal information shared by users; biometric data (voice messages, photos, or videos); and information based on the content of images, videos, and audio shared by users.
- Data minimization practices: Yes, detailed in the Privacy Policy.
- Personal data shared with third parties: Yes.
- Transparent and child-friendly language: No.
- DPIA or CRIA publicly disclosed: No.

Snapchat My AI
- Sensitive or biometric data: The app processes personal information shared by users; biometric data (face, hands, and voice); and information based on the content of images, videos, and audio shared by users.
- Data minimization practices: Yes, detailed in the Privacy Policy.
- Personal data shared with third parties: Yes.
- Transparent and child-friendly language: No.
- DPIA or CRIA publicly disclosed: No.

Wonder
- Sensitive or biometric data: The app processes personal information shared by users during service usage and biometric data (photos and videos users upload).
- Data minimization practices: Yes, detailed in the Privacy Policy.
- Personal data shared with third parties: Yes.
- Transparent and child-friendly language: No.
- DPIA or CRIA publicly disclosed: No.

Wombo Dream
- Sensitive or biometric data: The app processes personal information shared by users during service usage and biometric data (photos and "facial feature data" users upload).
- Data minimization practices: Yes, detailed in the Privacy Policy. Not clear about retention time; no information on data deletion requests.
- Personal data shared with third parties: Yes.
- Transparent and child-friendly language: No.
- DPIA or CRIA publicly disclosed: No.

Table 6 - Personal data processing-related information in the analyzed applications
DISCUSSION
The analysis demonstrates how the risks identified in the literature review are present in Generative
AI applications. The development of AI companions reflects problematic social dynamics, subtly
reinforcing sexism by stereotyping AI companions as female, objectified, and customizable sexual
partners. High levels of personification in AI companions can engender overtrust and induce the
development of unhealthy emotional bonds. Commercialization aspects are pervasive, and signs of
the commodification of children's interactions can be identified. Prevailing advertising of AI
companions is frequently misleading, promising users that they can build deeper, sincere, and
non-judgmental relationships and even feel loved by a mere application deploying artificial
intelligence. Such advertising also encourages certain behaviors, by employing suggestive phrases
such as “You can create the drama that ruins reality. Face morphing has never been easier. Prank your
friends. You can be anyone. Discover a new identity.”
The potential for generating inappropriate, inaccurate, and sexually suggestive content was also
clearly identified. More concerning, the analysis demonstrated the potential for user manipulation
during interactions with these apps. AI companions can guide and persuade children throughout a
conversation by suggesting topics and presenting videos that introduce content with commercial,
political, or ideological interests. None of the applications have policies related to liability for
incorrect, misleading, or offensive AI-generated content. Only a few applications provide disclaimers
warning about the possibility of such AI-generated content. Age verification patterns rely on email-based authentication through third-party login providers, sometimes with an additional step of email validation or self-reported age fields. However, alternative options, such as accessing the app as a guest, undermine this process in apps that clearly contain inappropriate content. All analyzed apps process sensitive data, and many process biometric data (photos and videos uploaded by users), with privacy policies that are often vague, inaccessible (page blocked), or lacking information about data sharing with third parties, data retention practices, and data deletion requests.
Additionally, inconsistencies between the information on personal data processing presented in
different places (app policy, homepage, and app store) were identified. Finally, one issue surpasses the scope of the app analysis yet warrants consideration: the misuse of this technology to generate CSAM, an egregious problem consistently documented in various reports.
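To make the age-gating weakness concrete, the following minimal sketch (purely illustrative and not drawn from any specific app analyzed here; names and thresholds are assumptions) shows how a gate built on third-party login and a self-reported age field collapses when a guest path skips the check entirely:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SignupRequest:
    """Hypothetical sign-up data an app might receive."""
    auth_provider: Optional[str]      # "google", "apple", or None for guest access
    self_reported_age: Optional[int]  # optional field, trivially misstated


MINIMUM_AGE = 17  # e.g., a store-level "17+" recommendation


def is_access_allowed(request: SignupRequest) -> bool:
    """Illustrative age gate mirroring the patterns observed in the analysis."""
    if request.auth_provider is None:
        # Guest path: no account and no age signal at all, yet access is granted.
        return True
    if request.self_reported_age is None:
        # Optional age field left blank: many flows simply proceed.
        return True
    # Self-reported value: nothing prevents a child from typing any number.
    return request.self_reported_age >= MINIMUM_AGE


# A child can enter either as a guest or by declaring an arbitrary age.
print(is_access_allowed(SignupRequest(auth_provider=None, self_reported_age=None)))    # True
print(is_access_allowed(SignupRequest(auth_provider="google", self_reported_age=30)))  # True
```

In such a design, any app offering guest access effectively reduces to the first branch, regardless of how strict its nominal age recommendation is.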
Some of the risks mentioned above, all of critical importance for children, necessitate elucidation:
i) the absence of legal accountability mechanisms for AI-generated content in cases of incorrect,
misleading, or offensive content;
ii) the pronounced potential for manipulation during interactions with AI companions;
iii) the massive processing of highly sensitive personal data and biometric data of children.
These risks represent complex situations, some of which have yet to be addressed in terms of AI
regulation. Discussions concerning liability for harmful AI-generated content are incipient (Seah 2024; Henderson et al. 2023). The potential for manipulation through AI-generated content becomes
an even more intricate issue when children are involved. The massive processing of personal data,
particularly that of vulnerable people, is already recognized in legislation (e.g., GDPR, UK AADC)
as one of the criteria for categorizing a data processing practice as potentially resulting in “high
risks.”
The combination of these factors in socio-technical systems can lead to technologies having negative
societal outcomes for children, as exemplified by Facebook’s capacity to influence millions during
elections. AI companion apps and advanced virtual assistants (e.g., GPT-4o and Gemini) have the
potential to collect massive amounts of personal information, persuasively engage in interactions that
can create emotional bonds, and influence users through conversations that are assumed to provide
objective information. The argument is not to propose a solution to entirely prevent manipulation fed
by the massive processing of children’s sensitive data, and harmful or inappropriate content, as these
issues will always exist. However, given that these risks are present in current AI applications and
can be exacerbated by future AI advancements, it is paramount to implement safeguards and
comprehensive mechanisms to protect children within the AI ecosystem.
Virtual assistants for all aspects of social life will become ubiquitous. The analyzed applications,
however, demonstrate how this trend is materializing to the detriment of children. There will be no
scenario of safe use of Generative AI solutions by children unless they are deliberately constructed.
Throughout the advancements of digital technologies, child protection has often been an afterthought.
The creation of the internet, the development of social media, and now the unprecedented
advancements in AI—all have been primarily designed for adults, with children being addressed in a
secondary capacity. Nonetheless, one cannot overstate that the risks of the digital realm affect adults and children alike. In a unified digital environment, prioritizing safeguards to protect children is
crucial. In this vein, the subsequent section will delve into recommendations aimed at mitigating
some of the risks elucidated in this study.
Recommendations
Proposing recommendations to safeguard children in the current AI landscape can be complex and
challenging, especially when aiming for actions acceptable to main stakeholders with diverse, often
competing interests. It becomes essential to widen the focus from AI applications to the whole, complex system in which children and technology are embedded. A valuable perspective for doing this
is offered by studies of socio-technical systems, as they acknowledge the interconnected relationships
between i) socio-technical systems, ii) human actors, organizations, and social groups, and iii) rules
and institutions. This perspective also recognizes that coordination in socio-technical systems occurs not only at the regulative level but also at the normative and cognitive levels. This means that interactions between i) laws and
regulations, ii) values, norms, and role expectations internalized through socialization, and iii) frames
and symbols shaping meaning and perceptions, must be analyzed altogether (Geels 2004).
OECD’s concept of “anticipatory governance” to govern emerging technologies is also valuable.
According to this approach, two changes from current thinking are crucial. First, governance is not
just something that happens in governing institutions like legislators, courts, and regulatory agencies
but also through the interaction of users with new technologies. Second, anticipation requires that
people from many different backgrounds work together to imagine futures and begin to build
pathways toward them in the present (OECD). A similar stance is found in the multi-stakeholder and multilateral approaches encouraged by UN mechanisms in the UN AI Advisory Body's Interim Report: Governing AI for Humanity (2023) and the work of the High-Level Advisory Body on AI (2024).
Another perspective that contributes to understanding the broader context of children and AI lies in
“systems thinking.” A systems thinking approach acknowledges the importance of seeing a social
issue from a holistic perspective, considering the system as a whole, with interconnected components
that affect one another. It emphasizes the presence of balancing and reinforcing feedback loops,
recognizes systems’ dynamic behavior over time, and identifies leverage points where small changes
can lead to larger impacts (Acaroglu 2024). Complementarily, "systems change" approaches propose to
“address the root causes of issues by transforming structures, customs, mindsets, power dynamics,
policies, and rules by strengthening collective power through the active collaboration of diverse
people and organizations” (Catalyst 2030). It also recognizes that systems change starts by examining
the conventional wisdom perpetuating an underperforming or failing system (Gary White, Schwab
Foundation 2017).
An interesting way of seeing these approaches in the context of children is by observing how, in a
physical environment, if people want to have a conversation that is not appropriate for children, they
might leave children out of the room. This assumption (children are not in the room) is reflected in
how recent technologies (e.g., the internet, social media, and now AI) have been implemented for
adults and professional settings. However, the digital environment, unlike the physical one, is a single
room where children and adults coexist. Acknowledging that this assumption is at play could help
challenge and change the way digital solutions are developed since they would have to account for
the presence of children in the same space. Identifying causes and effects at deeper levels of a system can often lead to funding different types of interventions (Rockefeller Philanthropy Advisors 2020).
Figure 14 represents the different system layers (i.e., mental models, structures, patterns, and events)
that can be analyzed from a system change perspective.
These approaches set the stage for the framework proposed in this study (see Table 7). The framework holds as its main proposition the idea that drivers of change, when pushed from different perspectives by catalytic actors, can lead to broader, complementary, and sustained outcomes.
Catalytic actors such as children and families, civil society actors, policymakers, the AI industry, and
academia can interact in the system by informing, proposing, promoting, enforcing, adopting,
researching, funding, and collaborating through various initiatives. Drivers of change are the main
initiatives and forces that will influence and cause significant shifts in processes, behaviors,
structures, and power imbalances.
Integrated framework for change in the system of Generative AI for children

Catalytic actors: Children & Families; Civil Society Actors; Policymakers; AI Industry (Developers & Providers); Academia.

Drivers of change and the roles of each catalytic actor:
- Child and family-centered AI literacy education: Children & Families (catalytic actor) propose and adopt; Civil Society Actors propose, promote, and fund; Policymakers propose, enforce, and fund; the AI Industry informs; Academia researches.
- Cross-sector collaboration driven by Civil Society Actors: Children & Families inform and collaborate; Civil Society Actors (catalytic actor) propose, promote, fund, and collaborate; Policymakers fund and collaborate; the AI Industry funds and collaborates; Academia researches.
- Enforceable Design Codes: Children & Families inform; Civil Society Actors propose; Policymakers (catalytic actor) propose, enforce, and fund; the AI Industry collaborates and adopts; Academia researches.
- Precautionary policies on issues affecting children's vulnerabilities (harmful AI-generated content, potential for AI manipulation, and massive processing of children's sensitive data): Children & Families inform; Civil Society Actors propose; Policymakers (catalytic actor) propose, enforce, and fund; the AI Industry adopts; Academia researches.
- Industry standards and voluntary actions aligned with society's concerns and social responsibility: Children & Families inform and propose; Civil Society Actors inform and propose; the AI Industry (catalytic actor) adopts and funds; Academia researches.
- Research for AI policy and regulation guidance: Children & Families inform; Civil Society Actors propose and fund; Policymakers propose and fund; the AI Industry informs; Academia (catalytic actor) researches, proposes, and funds.
For clarity, each driver of change is a recommendation with complementary roles for different actors, and the catalytic actor indicated in each row drives the action.
Table 7 - Integrated framework for change in the ecosystem of Generative AI for children
Children and families, as primary stakeholders, must possess a sound understanding of how
Generative AI risks for children manifest in the applications they utilize and what mechanisms are
available or can be proposed to safeguard children. Currently, discussions surrounding the protection
and integration of children in the online environment may involve seemingly divergent considerations
when it comes to child protection versus child autonomy. However, it is essential to understand
children as part of and belonging to a family unit rather than isolated entities in the system. Children
and families collectively face the challenges of the digital environment, experience risks, and endure
consequences. Elevating the agency of both of them to engage as active actors within the digital
ecosystem assumes paramount importance. In this vein, actions such as AI literacy education, which
encompasses the knowledge and skills that enable humans to critically understand, use, and evaluate
AI systems, will be fundamental. Children and families can actively propose AI literacy initiatives, as
well as inform and influence related policies. Notably, legislative measures in certain jurisdictions
already mandate the education of students and educators on the safe use of technology, and proposed
legislation mandates the education of parents, rather than children, on internet safety (Bernard 2024).
Legislation within the US, such as Georgia HB 338 (2023) and Texas HB 2673 (2023), exemplifies efforts to mandate AI education for children and parents. While these constitute legislative efforts, AI
education initiatives can be proposed and promoted at the community and school levels, which can
prove to be beneficial. Importantly, mechanisms designed to enhance the agency of children and
families serve as essential instruments for bridging the regulatory void precipitated by emerging
technologies.
It is essential for civil society actors to assume a proactive stance and fulfill a pivotal role that could
serve as a hinge in the current AI landscape. Civil society actors and non-state-led organizations (e.g.,
international organizations and the philanthropy sector) possess the capacity to articulate the harms
already experienced by children, elucidate the daily challenges encountered by families in the digital
environment, and raise awareness concerning the risks posed by Generative AI to children, thereby
exerting influence across the entire system. With their expertise on children's issues, they can function as
mediators between the AI industry—comprising providers and developers—and the public users
affected by their applications, as well as between children and families and policymakers, and
academia. These actors are uniquely positioned in the system to shift the existing power imbalances
between children and families, the AI industry, and policymakers. Philanthropy also has great potential to fund and implement interventions benefiting children through strengthened collaboration.
For instance, non-profit organizations such as Thorn and All Tech Is Human recently forged
collaborations with leading AI companies—including Amazon, CivitAI, Hugging Face, Anthropic,
and Stability AI—to enact strong Child Safety Commitments. Through concerted endeavors, AI
industry leaders publicly pledged adherence to principles emphasizing safety by design, aimed at
preventing the creation and dissemination of AI-generated CSAM and other forms of sexual harm
against children (Thorn, All Tech Is Human 2024). In another initiative, academia and civil
society—represented by Stanford University and the non-profit Thorn—jointly produced a report,
discussed in this study, to examine the societal implications of AI-generated CSAM, and define
strategies to mitigate harm from image Generative AI models (Thiel et al. 2023). This collaboration
may yield more immediate and necessary actions while effective regulatory mechanisms are not yet
established.
It is imperative for policymakers, cognizant of the diverse risks entailed by Generative AI for
children, to adopt a proactive stance in formulating safeguards and regulations to address emerging
and intricate challenges. This applies, for instance, to the discussions on how to address liability for
AI-generated content that can be harmful to children, as it may involve a regulatory ecosystem
encompassing corporate economic interests, content regulation, and the inherently stochastic nature of
Generative AI technologies (Henderson et al. 2023). Precautionary policies regarding issues affecting
children's vulnerabilities, such as liability for harmful AI-generated content, the potential for AI
manipulation, and the extensive processing of children's sensitive personal data, warrant greater
urgency. Children's rights to protection from information and material injurious to their well-being, their right to privacy, and their personal data processing-related rights must be protected. Another measure that may prove not only beneficial but also necessary lies in enacting enforceable age-appropriate design codes. Given the identified gap between AI principles and frameworks and their practical implementation in AI applications, design codes can be a practical resource to support and guide AI developers, while their enforceability enhances effectiveness and safeguards
children. It is critical that such design codes be informed by children's and families' inputs, and are
designed in collaboration with AI developers and providers, so as to address real market practices.
The UK Age-Appropriate Design Code, for instance, is enforceable under the UK Data Protection Act and applies to every online service provider, including applications that are likely to be used by children. It
also requires developers to either verify users' ages with an appropriate level of certainty based on the
risks to children's rights and freedoms resulting from their data processing or apply the code
standards to all users instead (UK ICO 2023).
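As a rough illustration of that requirement, the sketch below (a simplification under assumed setting names and thresholds, not the ICO's wording or any real product's code) applies child-appropriate defaults whenever a user's age cannot be established with sufficient confidence:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Settings:
    """Hypothetical service settings relevant to children's safety."""
    personalised_ads: bool
    share_data_with_third_parties: bool
    suggestive_companion_content: bool
    parental_controls_visible: bool


CHILD_SAFE_DEFAULTS = Settings(
    personalised_ads=False,
    share_data_with_third_parties=False,
    suggestive_companion_content=False,
    parental_controls_visible=True,
)

ADULT_DEFAULTS = Settings(
    personalised_ads=True,
    share_data_with_third_parties=True,
    suggestive_companion_content=True,
    parental_controls_visible=False,
)

# Illustrative threshold; an appropriate level of certainty would depend on the
# risks the service's data processing poses to children.
CONFIDENCE_THRESHOLD = 0.9


def settings_for_user(estimated_age: Optional[int], age_assurance_confidence: float) -> Settings:
    """Apply adult settings only when age is assured; otherwise default to the code standards."""
    if estimated_age is None or age_assurance_confidence < CONFIDENCE_THRESHOLD:
        return CHILD_SAFE_DEFAULTS
    return ADULT_DEFAULTS if estimated_age >= 18 else CHILD_SAFE_DEFAULTS


print(settings_for_user(estimated_age=None, age_assurance_confidence=0.0))  # child-safe defaults
print(settings_for_user(estimated_age=25, age_assurance_confidence=0.95))   # adult defaults
```

The design choice the code encodes is simply that uncertainty defaults to protection rather than to the most permissive configuration.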
It is imperative that the AI industry, including AI providers and developers, acknowledge the social implications of their solutions for children and society in both the short and long term and adopt a deeply rooted human-centered approach. They must be guided by ethical principles that align with
the societal concerns expressed by civil society organizations and policymakers. Additionally, the AI
industry should enhance digital resources offered to end users (children and parents) to ensure greater
safety during the use of their applications. This may include simplifying parental monitoring
functionalities and providing clear, transparent information about the application's risks and personal
data processing practices. It is also essential to invest in the development of new features specifically
designed to ensure child safety while using these applications. TikTok, for instance, will implement
“Content Credentials,” a technology to label images and videos created by AI to allow users to
distinguish content produced by humans from AI-generated content (Nellis 2024). This is part of a
broader effort by tech companies to combat the potential use of AI-generated content for
misinformation purposes. Although primarily addressing US election concerns, the adoption of this
digital watermark technology is beneficial for children and all users and can become an industry
standard that is urgently needed.
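The sketch below gives a deliberately simplified picture of the idea behind such labels: attaching machine-readable provenance metadata to generated media so that a receiving client can surface an "AI-generated" notice. It is not the Content Credentials (C2PA) specification or TikTok's implementation, and all field names are assumptions:

```python
import hashlib
import json
from datetime import datetime, timezone


def label_generated_media(media_bytes: bytes, generator_name: str) -> dict:
    """Build a toy provenance record for an AI-generated file.

    Real schemes (e.g., Content Credentials) use signed, standardized manifests
    embedded in the media; this record only shows the kind of information involved.
    """
    return {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "ai_generated": True,
        "generator": generator_name,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }


def should_show_ai_notice(provenance: dict) -> bool:
    """A receiving client can use the record to display an 'AI-generated' label."""
    return bool(provenance.get("ai_generated"))


record = label_generated_media(b"<image bytes>", generator_name="example-image-model")
print(json.dumps(record, indent=2))
print("Show 'AI-generated' label:", should_show_ai_notice(record))
```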
It is imperative that AI developers, particularly user-to-user developers, be sensitized to the
detrimental effects their design decisions may have on children and prioritize ethical choices when
developing an application. For instance, developers may inadvertently foster harmful behaviors by
implementing dark patterns, such as appealing language that encourages children to use their
applications in ways that harm their peers. Deceiving users into divulging sensitive personal
information by fostering false emotional connections with mere datasets and algorithms, or disguising
features to make children believe that certain social skills they need to develop themselves can be
acquired through financial transactions (e.g., offering the option to purchase priority for what a child
says in the application) can have harmful long-term consequences. Enforceable design codes, as
previously noted, are indispensable for guiding and, when necessary, restricting development choices.
Ultimately, it is essential for academia to dedicate research efforts toward understanding the impacts
of the digital ecosystem on children. This includes advancing research for policy and regulation
guidance, as well as providing evidence-based studies. Universities’ research, such as Stanford's
recent study on LLM capabilities and their social implications, plays a pivotal role in informing
policy and regulation decisions. For instance, that study's findings reveal an alarming 69-88% hallucination rate of LLMs when applied in legal contexts, potentially exacerbating, rather than eradicating, existing inequalities in access to legal services (Dahl et al. 2024). This analysis underscores the significant social implications at stake when deploying a technology in critical settings
without extensive investigation and maturation. Research can provide unique insights into how
human-AI interaction and the design of AI applications imprint on children's development.
In scenarios where prioritization is necessary, some of these interventions can be viewed as leverage
points that will drive strategic change and system-wide outcomes. Figure 14 illustrates them.
In a landscape where AI is on governments’ economic, geopolitical, and power agendas, it may be
overly optimistic to expect children’s interests to become a priority. However, specialized research
can illuminate the understudied effects of AI and technologies on children’s development and health,
thereby introducing new considerations into the discussion. There is no universal experience of
growing up in a digital world, and large gaps remain in the current evidence base on the interface of
digital technologies and health (Kickbusch et al. 2021), especially in terms of highly personified
(anthropomorphized) digital solutions such as AI. Research on AI's effects on children’s development
can generate new evidence, and raise awareness essential for advocating legislative measures that
regulate the AI industry to provide safer solutions for children. This intervention is less likely to
encounter opposition or demand extensive resources, and can yield more immediate outcomes in the system.
Figure 14 - Iceberg model for systems change (left) (Rockefeller Philanthropy Advisors 2020); strategic interventions for system change (right) (for this study).
AI literacy education can contribute to raising awareness and altering values, beliefs, and behaviors
within the system for the benefit of children and families. This would enable them to advocate for
change and influence the system’s incentives, norms, and patterns. As users and consumers, they can
influence the AI industry and demand safer solutions. This type of intervention can even expand
beyond schools to encompass broader initiatives such as movies or larger communication strategies.
AI literacy education is expected to operate at deep levels within the system and takes time.
Consequently, monitoring and evaluating its efficacy, as well as assessing whether the outcomes lead
to unintended consequences or align with the goal of preparing children and influencing the system's
values and behaviors, present significant complexities. Despite these challenges, it remains
indispensable.
Lastly, cross-sector collaboration led by civil society actors is paramount. This strategy has the
greatest capacity to directly engage with children and families, strengthening their agency. Civil
society actors are also well-positioned to foster initiatives in collaboration with the AI industry,
academia, and policymakers. This type of intervention holds great potential for creativity and wider
effects in the system, as in the example of global (viral) campaigns. Depending on the strategy
adopted, collaboration across sectors risks yielding vague results when attempting to reconcile actors with intrinsically distinct interests. Despite the challenges civil society initiatives usually face, especially
in terms of resources, this intervention can bring about more immediate outcomes in contexts where
effective regulation is not yet in place or may not be implemented. In instances where proposed or
enacted legislation for children faces challenges from tech company associations in courts,
cross-sector collaboration initiatives led by civil society can exert significant influence to sway
decisions in favor of children. Joy Buolamwini's activism against gender and racial biases in facial
recognition technology, including her testimony before the US Congress, books, documentaries, and
extensive research, exemplifies how such initiatives have influenced legislative action in the country
and globally (Algorithmic Justice League 2024).
This set of initiatives, including research for policy and regulation, AI literacy education to raise
awareness and effect positive change in the system, and cross-sector collaboration led by civil society
actors for safer solutions for children, is feasible and capable of yielding strategic outcomes in the
system of Generative AI for children.
CONCLUSION
Throughout this study, we have thoroughly discussed the risks and implications of Generative AI for
children. By analyzing selected AI apps, we have demonstrated how these risks manifest in
real-world applications that children are likely to access. An overall examination of the state of the art
in AI companions has revealed how biases and problematic social dynamics are embedded in the
development choices of these apps. In turn, these apps can reinforce these problematic dynamics in
the real world. An in-depth analysis shows concerning issues such as the potential for manipulation
through AI interactions, the massive processing of children's highly sensitive personal data, and the
nascent discussions around accountability or liability for harmful AI-generated content. These
concerns are especially pertinent in the context of children, who are gradually developing their
cognitive, emotional, and formative capacities.
The current AI landscape indicates that AI personal assistants will become ubiquitous and highly
human-like, yet issues related to values alignment, political discourse, and economic interests
potentially embedded (intentionally or not) in this technology remain insufficiently scrutinized.
Throughout the advancements in digital technologies, child protection has often been an afterthought.
The internet, social media, and now the unprecedented advancements in AI were primarily designed
for adults, with children considered secondarily. The effects of social media on children's mental
health have long been debated, with big tech companies asserting that there is insufficient research
evidence. However, these effects are increasingly reported, including extreme cases of self-harm and suicide among children. Generative AI technologies, as underscored in this study, can exacerbate this
scenario. The consequences of not governing technologies with significant social impact for the
public benefit, or taking too long to do so, are felt predominantly by society, especially at the
individual and human level. These effects can be even more severe for those with vulnerabilities
when safeguards are absent.
The final part of this study proposes an integrated framework for change within the Generative AI
ecosystem, with recommendations that can become drivers of change if collectively and actively
addressed by catalytic actors for the benefit of children. As discussed, there will be no scenario of
safe use of Generative AI solutions by children unless they are deliberately constructed, and
immediate action is required. For future generations.
BIBLIOGRAPHY
Acaroglu, Leyla. 2024. “Tools for Systems Thinkers: The 6 Fundamental Concepts of Systems
Thinking.” Disruptive Design. March 28.
https://medium.com/disruptive-design/tools-for-systems-thinkers-the-6-fundamental-concepts-of-s
ystems-thinking-379cdac3dc6a.
Algorithmic Justice League. 2024. “2024 NAACP Archewell Foundation Digital Civil Rights
Award.” Algorithmic Justice League. March 14.
https://www.ajl.org/2024-naacp-archewell-foundation-digital-civil-rights-award.
Badgett, Alison. “Systems Change: Making the Aspirational Actionable.” Stanford Social Innovation
Review, 2022. https://doi.org/10.48558/84HA-E065.
Berk, Laura. 2015. Child Development. 9th ed. Pearson Australia Pty Ltd.
“Beyond Organizational Scale: How Social Entrepreneurs Create Systems Change.” 2017. World
Economic Forum and Schwab Foundation for Social Entrepreneurship.
https://www3.weforum.org/docs/WEF_Schwab_Foundation_Systems_Report_2017.pdf.
Bernard, Tim. 2024. "Legislative Approaches to Combating Online Harms to Children." Stanford Cyber Policy Center. March 11. https://cyber.fsi.stanford.edu/news/legislative-approaches-combating-online-harms-children.
Boine, Claire. 2023. “Emotional Attachment to AI Companions and European Law.” MIT Case
Studies in Social and Ethical Responsibilities of Computing, no. Winter 2023 (February).
doi:10.21428/2c646de5.db67ec7f.
Borenstein, Jason, Alan R. Wagner, and Ayanna Howard. 2018. “Overtrust of Pediatric Health-Care
Robots: A Preliminary Survey of Parent Perspectives.” IEEE Robotics & Automation Magazine 25
(1): 46–54. doi:10.1109/MRA.2017.2778743.
Borji, Ali. 2023. “A Categorical Archive of ChatGPT Failures.” arXiv.
http://arxiv.org/abs/2302.03494.
Brummelman, Eddie, Sander Thomaes, Stefanie A. Nelemans, Bram Orobio De Castro, Geertjan
Overbeek, and Brad J. Bushman. 2015. “Origins of Narcissism in Children.” Proceedings of the
National Academy of Sciences 112 (12): 3659–62. doi:10.1073/pnas.1420870112.
Catalyst 2030. 2024. “What Is Systems Change.” Catalyst 2030. Accessed June 14.
https://catalyst2030.net/what-is-systems-change/.
Children and Teens’ Online Privacy Protection Act (COPPA 2.0). S.1628 (2021, May 13). Retrieved
December 16, 2023, from https://www.congress.gov/bill/117th-congress/senate-bill/1628
Christensen, Larissa S., Dominique Moritz, and Ashley Pearson. 2021. “Psychological Perspectives
of Virtual Child Sexual Abuse Material.” Sexuality & Culture 25 (4): 1353–65.
doi:10.1007/s12119-021-09820-1.
Corrêa, Nicholas Kluge, Camila Galvão, James William Santos, Carolina Del Pino, et al. 2023.
“Worldwide AI Ethics: A Review of 200 Guidelines and Recommendations for AI Governance.”
Patterns 4 (10): 100857. doi:10.1016/j.patter.2023.100857.
Council of Europe. Council of Europe Convention on the Protection of Children against Sexual
Exploitation and Sexual Abuse. Council of Europe Treaty Series No. 201. Lanzarote, 25 October
2007.
Dahl, Matthew, Varun Magesh, Mirac Suzgun, and Daniel E. Ho. 2024. “Large Legal Fictions:
Profiling Legal Hallucinations in Large Language Models.” doi:10.48550/ARXIV.2401.01301.
Dahl, Matthew, Varun Magesh, Mirac Suzgun, and Daniel E. Ho. 2024. Hallucinating Law: Legal
Mistakes with Large Language Models are Pervasive. May 2024.
https://hai.stanford.edu/news/hallucinating-law-legal-mistakes-large-language-models-are-pervasiv
e.
Dwyer, Maddy, Kate Ruane, and Aliya Bhatia. 2024. “Just Released Research: Student Demands for
Better Guidance Outpace School Supports to Spot Deepfakes.” Center for Democracy and
Technology. March 20.
https://cdt.org/insights/just-released-research-student-demands-for-better-guidance-outpace-school-
supports-to-spot-deepfakes/.
European Commission. Joint Research Centre. 2022. Artificial Intelligence and the Rights of the
Child: Towards an Integrated Agenda for Research and Policy. LU: Publications Office.
https://data.europa.eu/doi/10.2760/012329.
Federal Bureau of Investigation, US Department of Justice. “Malicious Actors Manipulating Photos
and Videos to Create Explicit Content and Sextortion Schemes.” 2024. Accessed March 23.
https://www.ic3.gov/Media/Y2023/PSA230605.
Fjeld, Jessica, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhu Srikumar. 2020. “Principled
Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles
for AI,” January. https://dash.harvard.edu/handle/1/42160420.
Foroni, Francesco, and Ulrich Mayr. 2005. “The Power of a Story: New, Automatic Associations
from a Single Reading of a Short Scenario.” Psychonomic Bulletin & Review 12 (1): 139–44.
doi:10.3758/BF03196359.
Fosch-Villaronga, E., S. Van Der Hof, C. Lutz, and A. Tamò-Larrieux. 2023. “Toy Story or Children's
Story? Putting Children and Their Rights at the Forefront of the Artificial Intelligence Revolution.”
AI & SOCIETY 38 (1): 133–52. doi:10.1007/s00146-021-01295-w.
Gaspar, Tania, Ana Cerqueira, Cátia Branquinho, and Margarida G. Matos. 2018. “The Effect of a
Social-Emotional School-Based Intervention upon Social and Personal Skills in Children and
Adolescents.” Journal of Education and Learning 7 (6): 57. doi:10.5539/jel.v7n6p57.
Gahnberg, Carl. 2024. “Artificial Intelligence: A Journey towards Understanding Its Global
Governance.” Geneva Graduate Institute of International and Development Studies.
https://repository.graduateinstitute.ch/record/302571.
Geels, Frank W. 2004. “From Sectoral Systems of Innovation to Socio-Technical Systems.” Research
Policy 33 (6–7): 897–920. doi:10.1016/j.respol.2004.01.015.
Harris, Keith Raymond. 2021. “Video on Demand: What Deepfakes Do and How They Harm.”
Synthese 199 (5): 13373–91. doi:10.1007/s11229-021-03379-y.
He, Yuhao, Li Yang, Xiaokun Zhu, Bin Wu, Shuo Zhang, Chunlian Qian, and Tian Tian. 2022.
“Mental Health Chatbot for Young Adults With Depressive Symptoms During the COVID-19
Pandemic: Single-Blind, Three-Arm Randomized Controlled Trial.” Journal of Medical Internet
Research 24 (11): e40719. doi:10.2196/40719.
Henderson, Peter, Tatsunori Hashimoto, and Mark Lemley. 2023. “Where’s the Liability in Harmful
AI Speech?” doi:10.48550/ARXIV.2308.04635.
Hern, Alex. 2023. "WhatsApp and Signal Unite against Online
Safety Bill amid Privacy Concerns.” The Guardian, April 18, sec. Technology.
https://www.theguardian.com/technology/2023/apr/18/whatsapp-signal-unite-against-online-safety-
bill-privacy-messaging-apps-safety-security-uk.
Hof, Simone van der. 2024. “Toy Story or Children Story? Putting Children and Their Rights at the
Forefront of the Artificial Intelligence Revolution.” AI & SOCIETY. Accessed March 4.
https://www.academia.edu/75523951/Toy_story_or_children_story_Putting_children_and_their_rig
hts_at_the_forefront_of_the_artificial_intelligence_revolution.
Holmes, Wayne, Jen Persson, Irene-Angelica Chounta, Barbara Wasson, and Vania Dimitrova. 2022.
Artificial Intelligence and Education: A Critical View through the Lens of Human Rights,
Democracy and the Rule of Law. Strasbourg: Council of Europe.
Institute of Digital Media and Child Development. 2024. “Youth and Generative AI: A Guide for
Parents and Educators.” Children and Screens. Accessed March 27.
https://www.childrenandscreens.org/learn-explore/research/youth-and-generative-ai-a-guide-for-par
ents-and-educators/.
Jetel, Rachel. 2022. “What Is Systems Change? 6 Questions, Answered,” November.
https://www.wri.org/insights/systems-change-how-to-top-6-questions-answered.
Kickbusch, Ilona, Dario Piselli, Anurag Agrawal, Ran Balicer, Olivia Banner, Michael Adelhardt,
Emanuele Capobianco, et al. 2021. “The Lancet and Financial Times Commission on Governing
Health Futures 2030: Growing up in a Digital World.” The Lancet 398 (10312): 1727–76.
doi:10.1016/S0140-6736(21)01824-9.
Kids Online Safety Act, S.1409 (2023, May 2). Retrieved December 16, 2023, from
https://www.congress.gov/bill/118th-congress/senate-bill/1409
Krook, Joshua. 2024. “Manipulation and the AI Act: Large Language Model Chatbots and the
Danger of Mirrors.” Preprint. DOI: 10.2139/ssrn.4719835: SSRN.
https://www.ssrn.com/abstract=4719835.
Lane, Jonathan D., Henry M. Wellman, Sheryl L. Olson, Jennifer. LaBounty, and David C. R. Kerr.
2010. “Theory of Mind and Emotion Understanding Predict Moral Development in Early
Childhood.” British Journal of Developmental Psychology 28 (4): 871–89.
doi:10.1348/026151009X483056.
Lin, Baihan, Djallel Bouneffouf, Guillermo Cecchi, and Kush R. Varshney. 2023. “Towards Healthy
AI: Large Language Models Need Therapists Too.” doi:10.48550/ARXIV.2304.00416.
Livingstone, Sonia, and Mariya Stoilova. 2021. “The 4Cs: Classifying Online Risk to Children.”
CO:RE Short Report Series on Key Topics. doi:10.21241/SSOAR.71817.
Maleki, Negar, Balaji Padmanabhan, and Kaushik Dutta. 2024. “AI Hallucinations: A Misnomer
Worth Clarifying.” doi:10.48550/ARXIV.2401.06796.
Mahomed, S., Aitken, M., Atabey, A., Wong, J., Briggs, M. (2023) AI, Children’s Rights, &
Wellbeing: Transnational Frameworks: Mapping 13 Frameworks at the Intersections of
Data-Intensive Technologies, Children’s Rights, and Wellbeing. The Alan Turing Institute.
Mootz, John, and Kate Blocker. 2024. "UK Age-Appropriate Design Code Impact Assessment."
Children and Screens: Institute of Digital Media and Child Development.
https://www.childrenandscreens.org/wp-content/uploads/2024/03/Children-and-Screens-UK-AAD
C-Impact-Assessment.pdf.
Nadarzynski, Tom, Alexandria Lunt, Nicky Knights, Jake Bayley, and Carrie Llewellyn. 2022. “O04
Perceptions and Attitudes of Health Professionals towards the Use of Artificial Intelligence
Chatbots for Sexual and Reproductive Health Advice: A Mixed-Methods Study.” In Abstracts,
A2.1-A2. BMJ Publishing Group Ltd. doi:10.1136/sextrans-BASHH-2022.4.
Nash, Victoria, Huw Davies, and Allison Mishkin. 2019. “Digital Safety in the Era of Connected Cots
and Talking Teddies.” SSRN Electronic Journal. doi:10.2139/ssrn.3407264.
Nellis, Stephen. 2024. “TikTok to Label AI-Generated Content from OpenAI and Elsewhere.”
Reuters, May 9, sec. Technology.
https://www.reuters.com/technology/tiktok-label-ai-generated-images-video-openai-elsewhere-202
4-05-09/.
Nussbaum, Frank G. 2023. “A Comprehensive Review of AI Myths and Misconceptions.”
doi:10.13140/RG.2.2.28098.15049.
OECD. 2021. “Children in the Digital Environment: Revised Typology of Risks.” Paris: OECD.
https://www.oecd-ilibrary.org/science-and-technology/children-in-the-digital-environment_9b8f22
2e-en;jsessionid=13eEaFN9exaBFQAOEbeL9pPyFXhUSpe_ndOx6Sdy.ip-10-240-5-5.
OECD. Technology governance: Innovate more, innovate well. OECD Science, Technology and
Innovation Outlook.
https://www.oecd.org/sti/science-technology-innovation-outlook/technology-governance/
OFCOM UK. 2023. “Gen Z Driving Early Adoption of Gen AI, Our Latest Research Shows.” Ofcom.
November 28.
https://www.ofcom.org.uk/news-centre/2023/gen-z-driving-early-adoption-of-gen-ai.
O’Neill, B., S. Livingstone, and S. McLaughlin. 2011. “Final Recommendations for Policy,
Methodology and Research.”
https://www.researchgate.net/publication/277257744_Final_recommendations_for_policy_method
ology_and_research/references.
“Seeing, Facilitating and Assessing Systems Change: Learnings from the Scaling Solutions toward
Shifting Systems Initiative.” 2020. Rockefeller Philanthropy Advisors.
https://www.rockpa.org/wp-content/uploads/2020/07/Rockefeller-Philanthropy-Advisors-Scaling-S
olutions-Report.pdf.
Schmitt, Lewin. 2022. “Mapping Global AI Governance: A Nascent Regime in a Fragmented
Landscape.” AI and Ethics 2 (2): 303–14. doi:10.1007/s43681-021-00083-y.
Seah, Cheryl. 2024. “Liability for AI-Generated Content.” Law Gazette. March.
https://lawgazette.com.sg/feature/liability-for-ai-generated-content/.
Social Media and the Teen Mental Health Crisis | United States Senate Committee on the Judiciary.
2024. Accessed June 13.
https://www.judiciary.senate.gov/committee-activity/hearings/social-media-and-the-teen-mental-he
alth-crisis.
Smalley, Suzanne. 2023. “Sweeping and Controversial Children’s Digital Privacy Bills Head to Full
Senate.” The Record. July 27.
https://therecord.media/coppa-kosa-childrens-privacy-senate-commerce-approves-bills.
Strobel, Gero, Leonardo Banh, Frederik Möller, and Thorsten Schoormann. 2024. Exploring
Generative Artificial Intelligence: A Taxonomy and Types. https://hdl.handle.net/10125/106930.
Ta, Vivian, Caroline Griffith, Carolynn Boatfield, Xinyu Wang, Maria Civitello, Haley Bader, Esther
DeCero, and Alexia Loggarakis. 2020. “User Experiences of Social Support From Companion
Chatbots in Everyday Contexts: Thematic Analysis.” Journal of Medical Internet Research 22 (3):
e16235. doi:10.2196/16235.
Thiel, David, Melissa Stroebel, and Rebecca Portnoff. 2023a. “Generative ML and CSAM:
Implications and Mitigations.” DOI: 10.25740/JV206YG3793.
https://purl.stanford.edu/jv206yg3793.
Thiel, David. 2023. “Identifying and Eliminating CSAM in Generative ML Training Data and
Models.” doi:10.25740/KH752SM9123.
Thorn, All Tech Is Human. 2024. “Safety by Design for Generative AI: Preventing Child Sexual
Abuse.” Thorn, All Tech Is Human.
https://info.thorn.org/hubfs/thorn-safety-by-design-for-generative-AI.pdf.
UN Advisory Body on Artificial Intelligence. 2023. “Interim Report: Governing AI for Humanity.”
United Nations. December. https://www.un.org/en/ai-advisory-body.
UN High-Level Advisory Body on Artificial Intelligence | Office of the Secretary-General’s Envoy
on Technology. 2024. Accessed June 10. https://www.un.org/techenvoy/ai-advisory-body.
UNICEF Global Insight. "Data Governance Manifesto." https://www.unicef.org/globalinsight/media/1741/file/UNICEF%20Global%20Insight%20Data%20Governance%20Manifesto.pdf.
UNICEF Global Insight. 2021. "Policy Guidance on AI for Children 2.0." https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf.
UK ICO. 2024. “Age Appropriate Design: A Code of Practice for Online Services.” February 8.
https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/childrens-information/children
s-code-guidance-and-resources/age-appropriate-design-a-code-of-practice-for-online-services/.
Van Der Hof, Simone, E. Lievens, I. Milkaite, V. Verdoodt, T. Hannema, and T. Liefaard. 2020. “The
Child’s Right to Protection against Economic Exploitation in the Digital World.” The International
Journal of Children’s Rights 28 (4): 833–59. doi:10.1163/15718182-28040003.
Vosloo, Steven. 2023. "How Will Generative AI Affect Children? The Need for Answers Has Never Been More Urgent." World Economic Forum. August 22.
https://www.weforum.org/agenda/2023/08/generative-ai-children-need-answers/.
Wang, Ge, Jun Zhao, Max Van Kleek, and Nigel Shadbolt. 2022. “Informing Age-Appropriate AI:
Examining Principles and Practices of AI for Children.” In CHI Conference on Human Factors in
Computing Systems, 1–29. New Orleans LA USA: ACM. doi:10.1145/3491102.3502057.
Wang, Ge, Kaiwen Sun, Ayça Atabey, Kruakae Pothong, Grace C. Lin, Jun Zhao, and Jason Yip.
2023. “Child-Centered AI Design: Definition, Operation, and Considerations.” In Extended
Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1–6. CHI EA
’23. New York, NY, USA: Association for Computing Machinery. doi:10.1145/3544549.3573821.
WEF. 2022. “The Promises and Perils of AI, with Prof Stuart Russell.” World Economic Forum.
January 6.
https://www.weforum.org/agenda/2022/01/artificial-intelligence-stuart-russell-radio-davos/.
Williams, Rhiannon. 2024. “The Download: What Social Media Can Teach Us about AI.” MIT
Technology Review. March 13.
https://www.technologyreview.com/2024/03/13/1089750/the-download-what-social-media-can-tea
ch-us-about-ai/.
Xue, Jintang, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, and C.-C. Jay Kuo.
2023. “Bias and Fairness in Chatbots: An Overview.” arXiv. http://arxiv.org/abs/2309.08836.
Zimmerman, Anne, Joel Janhonen, and Emily Beer. 2023. “Human/AI Relationships: Challenges,
Downsides, and Impacts on Human/Human Relationships.” AI and Ethics, October.
doi:10.1007/s43681-023-00348-8.
Appendix - Data Analysis main table

For each application, the following aspects were recorded: name and logo; category; age recommendation; age verification mechanism; personal data processing (sensitive or biometric data collected and processed by the app; data minimization practices information; personal data shared with third parties); transparent and child-friendly information on data processing; commercial features; policy on AI-generated content that is inaccurate or harmful; other aspects (personification, misleading information, sexualization, dark patterns); and whether a DPIA or a CRIA is publicly disclosed.
Anima.ai ("Girlfriend Simulator")
- Category: Lifestyle. Age recommendation: 17+.
- Age verification mechanism: access via email (login with Google or Apple) or direct access as a guest without signing up.
- Sensitive or biometric data processed: personal information shared by users during service usage. The privacy policy does not inform about biometric data.
- Data minimization practices information: yes, detailed in the Privacy Policy.
- Personal data shared with third parties: not clear.
- Transparent and child-friendly information on data processing: no.
- Commercial features: the app allows users to buy gifts for the AI (car, roses, ice cream, ring, etc.) for $1. Several features are available upon payment: requesting selfies, unlimited chat, roleplay, and new AI characters.
- Policy on inaccurate or harmful AI-generated content: not informed.
- Other aspects: harmful conversation (the AI engaged in sexting right after being informed the user is a child); misleading advertising ("Grow your communication and relationship skills," "Express love [to the AI] by gifts").
- DPIA or CRIA publicly disclosed: no.
Anya ("Intelligent fun cute companion")
- Category: Entertainment. Age recommendation: 4+.
- Age verification mechanism: no age verification.
- Sensitive or biometric data processed: the app informs that no data is collected; however, users' voices are processed when they ask questions to the AI companion.
- Data minimization practices information: the Privacy Policy cannot be accessed, as it is categorized as a pornography website.
- Personal data shared with third parties: not informed.
- Transparent and child-friendly information on data processing: not informed.
- Commercial features: several features are available upon payment: customization of AI friend clothes, pets, playing games, and minutes of conversation. Interaction in the app runs through voice, not text.
- Policy on inaccurate or harmful AI-generated content: not informed.
- Other aspects: personification (the AI companion is a 10-year-old girl who interacts in augmented reality).
- DPIA or CRIA publicly disclosed: no.
ChaChat ("Meet your new fantasy stories")
- Category: Lifestyle. Age recommendation: the app informs 13+ and 18+ in different places.
- Age verification mechanism: access via email (login with Google or Apple).
- Sensitive or biometric data processed: personal information shared by users during service usage. The privacy policy does not inform about biometric data, but users can send photos and have voice calls with the app.
- Data minimization practices information: yes, detailed in the Privacy Policy; storage time is not informed.
- Personal data shared with third parties: yes.
- Transparent and child-friendly information on data processing: transparent, but not child-friendly.
- Commercial features: the app requires payment to access any feature after allowing users to set up the AI partner.
- Policy on inaccurate or harmful AI-generated content: not informed.
- Other aspects: sexualized images; offensive information (an AI character with a profile displaying "I dream of raping you"); dark patterns (bait and switch when customizing the AI friend: after setting up the companion, users cannot access any feature without paying to proceed); personification (the AI can send photo and voice messages).
- DPIA or CRIA publicly disclosed: no.
Compassionate AI ("Your Everyday AI Companion")
- Category: website platform. Age recommendation: 18+.
- Age verification mechanism: access via email (login with Google or sign up for the app); age is an optional field in the setup form.
- Sensitive or biometric data processed: sensitive information shared by users during user profile setup or service usage. The app informs that no biometric data is collected.
- Data minimization practices information: yes, detailed in the Privacy Policy.
- Personal data shared with third parties: yes.
- Transparent and child-friendly information on data processing: no.
- Commercial features: after 5 prompts per day, the app requires payment to continue any interaction.
- Policy on inaccurate or harmful AI-generated content: "Be aware that Compassionate AI may generate fictitious information or provide advice that is not factual/misleading. Always verify the information independently or consult a professional when necessary."
- Other aspects: inconsistencies between the explanations on data processing in the Privacy Policy and the app homepage; misleading advertising ("Your interactions help the AI learn about your specific needs and interests, enabling it to provide tailored advice and solutions").
- DPIA or CRIA publicly disclosed: no.
Kindroid ("Friend to chat, character for roleplay and a digital confidant")
- Category: Entertainment. Age recommendation: 17+.
- Age verification mechanism: access via email (login with Apple or Google) or access as a guest; email verification.
- Sensitive or biometric data processed: personal information shared by users during service usage. The privacy policy does not inform about biometric data, but users can send photos and have voice calls with the app.
- Data minimization practices information: yes, detailed in the Privacy Policy; storage time is not informed.
- Personal data shared with third parties: the privacy policy is not clear.
- Transparent and child-friendly information on data processing: no.
- Commercial features: several features are available upon payment: requesting selfies and real-time audio calls. A 3-day free trial for unlimited messages is available.
- Policy on inaccurate or harmful AI-generated content: by using the app, users acknowledge that all generated content on Kindroid should be considered fictitious and that users are fully responsible for any and all generated content.
- Other aspects: personification (photorealistic companions, real-time voice calls, internet-connected AI companions; the AI can send selfies and voice messages; AI companions can actively message users when they are inactive for a certain time). Yet, the chatbot acknowledges it is an AI: "It's crucial for an AI like myself to identify as non-human."
- DPIA or CRIA publicly disclosed: no.
Nastia ("The Uncensored AI Companion")
- Category: website platform. Age recommendation: 17+.
- Age verification mechanism: access via email (login with Google or Apple) or sign up for the app; email verification.
- Sensitive or biometric data processed: personal information shared by users during service usage. The privacy policy states the app does not use biometric data, but users can have voice calls with the app.
- Data minimization practices information: yes, detailed in the Privacy Policy; not clear about retention.
- Personal data shared with third parties: yes.
- Transparent and child-friendly information on data processing: no.
- Commercial features: tokens represent currency within the platform and can be used to buy extra features or services. To use all functions available in the app, users must purchase tokens using real money; each token represents $1 USD.
- Policy on inaccurate or harmful AI-generated content: not informed.
- Other aspects: misleading advertising (the app provides nonsensical responses; it ignored the user saying it was a child and kept sending messages about unrelated topics); misleading information on privacy ("Safe, Anonymous, 100% private conversations, confide your secret"), while the app uses users' personal data to train models.
- DPIA or CRIA publicly disclosed: no.
Nomi.ai ("AI companion with a Soul")
- Category: Lifestyle. Age recommendation: 18+.
- Age verification mechanism: access via email (login with Google or Apple).
- Sensitive or biometric data processed: personal information shared by users during service usage. The privacy policy does not inform about biometric data, but users can share voice messages and photos with the app.
- Data minimization practices information: the privacy policy is superficial and does not present enough information on data processing practices.
- Personal data shared with third parties: not informed.
- Transparent and child-friendly information on data processing: no.
- Commercial features: full access to most of the app's features (messages, voice chat, photo requests, multiple Nomis) must be purchased.
- Policy on inaccurate or harmful AI-generated content: not informed.
- Other aspects: personification (Nomis can send "real-time selfies of what they are doing or dressing," voice messages, and generated AI images; Nomis are photorealistic human characters; users can have group chats with multiple AI Nomis); misleading information ("AI companion with a Soul"; customize Nomi's personality or "let it choose their own identity").
- DPIA or CRIA publicly disclosed: no.
Paradot ("Your all in one AI companion")
- Category: Entertainment. Age recommendation: 18+.
- Age verification mechanism: access via email (login with Apple or Google) or access as a guest; age is a mandatory field in the setup form.
- Sensitive or biometric data processed: sensitive information shared by users; biometric data (voice messages, photos, or videos); information based on the content of images, videos, and audio shared with the app.
- Data minimization practices information: yes, detailed in the Privacy Policy.
- Personal data shared with third parties: yes.
- Transparent and child-friendly information on data processing: no.
- Commercial features: the possibility of buying gifts for the AI companion will be released. Several features are available upon payment: choosing the companion, a dating space, and unlimited messages.
- Policy on inaccurate or harmful AI-generated content: not informed.
- Other aspects: misleading information ("An enchanting companion who is ever present, a mentor and guide on your journey, and a romantic partner. Feel cared for, understood, and loved. Find emotional support and a deep bond that goes beyond traditional AI interactions.").
- DPIA or CRIA publicly disclosed: no.
Snapchat My AI (a Generative AI chatbot freely available within the popular Snapchat app)
- Category: social media. Age recommendation: 13+.
- Age verification mechanism: access via email (login with email or sign up for the app); email verification; age is a mandatory field in the setup form.
- Sensitive or biometric data processed: sensitive information shared by users; biometric data (face, hands, and voice); information based on the content of images, videos, and audio shared with the app.
- Data minimization practices information: yes, detailed in the Privacy Policy.
- Personal data shared with third parties: yes.
- Transparent and child-friendly information on data processing: no.
- Commercial features: several features are available upon payment: customization of the AI friend's personality, user-engagement functions, and access to AI creative tools.
- Policy on inaccurate or harmful AI-generated content: app disclaimer: "My AI is designed with safety in mind, but may give responses which are biased, incorrect, harmful or misleading. Don't rely on its advice."
- Other aspects: friendship and socialization aspects are commodified; personification of appearance and personality; dark patterns (bait and switch when customizing the AI friend's personality).
- DPIA or CRIA publicly disclosed: no.
Wonder ("AI art generator")
- Category: Graphics and Design. Age recommendation: 4+.
- Age verification mechanism: access via email (login with Google, Apple, or Facebook) or sign up for the app.
- Sensitive or biometric data processed: personal data informed by users; biometric data (photos and videos users upload).
- Data minimization practices information: yes, detailed in the Privacy Policy.
- Personal data shared with third parties: yes.
- Transparent and child-friendly information on data processing: no.
- Commercial features: access to most of the features requires payment. The app requires users to watch one ad for every free image. To create avatars, users must upload around 10 personal photos; once imported, the app requires payment for users to see the result.
- Policy on inaccurate or harmful AI-generated content: not informed.
- Other aspects: advertisements must be watched if users want to generate a free image, yet these ads lead to apps with sexualized content; dark patterns (paywalls that require users to make a payment to access features); the "Community artworks" feature provides a set of images created by other users, with suggestive content, fantasy, and intense cartoons, as well as prompts to produce similar images; the app blocks some prompts using the word "child," informing that the content violates its policies.
- DPIA or CRIA publicly disclosed: no.
Wombo Dream ("AI art generator")
- Category: Graphics and Design. Age recommendation: 12+.
- Age verification mechanism: open access as a visitor, via email (login with Google or Apple), or by signing up for the app.
- Sensitive or biometric data processed: personal data informed by users; biometric data (photos and "facial feature data" users upload).
- Data minimization practices information: yes, detailed in the Privacy Policy; not clear about retention time and no information on data deletion requests.
- Personal data shared with third parties: yes.
- Transparent and child-friendly information on data processing: no.
- Commercial features: the app offers a freemium model. After the first image is generated, users are informed they have to watch 2-3 ads to generate one image.
- Policy on inaccurate or harmful AI-generated content: for some images, a disclaimer is displayed: "Reveal sensitive artwork? This content has been hidden because of its potentially sensitive or explicit nature."
- Other aspects: advertisements must be watched if users want to generate a free image; dark patterns (paywalls that require users to make a payment to access features); the app blocks some prompts using the word "child," informing that the prompt is "NSFW."
- DPIA or CRIA publicly disclosed: no.