
Abstract

Risk classifications guide practitioners and policymakers in their work and in communicating their results. It is timely to update risk classifications, given the variation in their use, the emerging risks in the digital environment, and our growing understanding of children’s experiences of online risks of harm. This report proposes a new CO:RE 4Cs classification, recognising that online risks arise when a child: (1) engages with and/or is exposed to potentially harmful CONTENT; (2) experiences and/or is targeted by potentially harmful CONTACT; (3) witnesses, participates in and/or is a victim of potentially harmful CONDUCT; (4) is party to and/or exploited by a potentially harmful CONTRACT. The 4Cs classification also distinguishes between aggressive, sexual and value risks, as well as important cross-cutting risks, notably to children’s privacy, health and fair treatment.
www.ssoar.info
The 4Cs: Classifying Online Risk to Children
Livingstone, Sonia; Stoilova, Mariya
Primary Publication
Short report
Suggested Citation:
Livingstone, S., & Stoilova, M. (2021). The 4Cs: Classifying Online Risk to Children. (CO:RE Short Report Series on
Key Topics). Hamburg: Leibniz-Institut für Medienforschung | Hans-Bredow-Institut (HBI); CO:RE - Children Online:
Research and Evidence. https://doi.org/10.21241/ssoar.71817
Terms of use:
This document is made available under a CC BY Licence
(Attribution). For more information see:
https://creativecommons.org/licenses/by/4.0
The 4Cs: Classifying Online Risk to
Children
CO:RE Short Report Series: Key topics
Sonia Livingstone and Mariya Stoilova
DOI: https://doi.org/10.21241/ssoar.71817
Please cite this report as:
Livingstone, S., & Stoilova, M. (2021). The 4Cs: Classifying Online Risk to Children. (CO:RE Short Report
Series on Key Topics). Hamburg: Leibniz-Institut für Medienforschung | Hans-Bredow-Institut (HBI); CO:RE -
Children Online: Research and Evidence. https://doi.org/10.21241/ssoar.71817.
Editor: Veronika Kalmus
Language editor: Dawn L. Rushen
The CO:RE project is a Coordination and Support Action within the Horizon 2020 framework, which aims to
build an international knowledge base on the impact of technological transformations on children and youth.
Part of the knowledge base is a series of short reports on relevant topics that provide an overview of the state
of research. This part is coordinated by Veronika Kalmus (University of Tartu, Estonia).
For all reports, updates, insights, as well as full details of all CO:RE Consortium members and CO:RE
national partners throughout Europe and beyond, please visit core-evidence.eu.
This project has received funding from the European Union’s Horizon 2020 programme (EU.3.6.1.1
– The mechanisms to promote smart, sustainable and inclusive growth), under call
DT-TRANSFORMATIONS-07-2019 – The impact of technological transformations on
children and youth. Grant Agreement ID 871018.
Acknowledgements
We thank the joint Insafe and INHOPE networks for their input during an online consultation and Karl Hopwood
for working with us to make this happen. We also thank the CO:RE Consortium for their insights as we
developed the classification of online risks, the reviewers of earlier drafts of this report, and the European
Union’s Horizon 2020 programme for the funding.
Contents
Key insights
Understanding online risk
The 3Cs of online risk
Adopting the classification
Contract risks: the fourth ‘C’
Cross-cutting risks
Practitioner reflections
The new CO:RE classification
Conclusions
References
About the authors
Key insights
Risk classifications guide practitioners and
policymakers in their work and in communicating
their results. EU Kids Online’s (2009) 3Cs of
online risk is used widely as a classic point of
reference for stakeholders internationally.
It is timely to update this classification, given
the variation in its use, the emerging risks in the
digital environment, and our growing under-
standing of children’s experiences of online risks
of harm. As part of our CO:RE work on theories
and concepts, we:
o reviewed existing classifications of online risk
to children by UNICEF, the International
Telecommunication Union (ITU), Organisa-
tion for Economic Co-operation and Devel-
opment (OECD), Council of Europe (CoE)
and others;
o consulted European practitioners of child
internet safety from Insafe and INHOPE to
build on their experience.
This report proposes a new CO:RE 4Cs classi-
fication, recognising that online risks arise when
a child:
o engages with and/or is exposed to potentially
harmful CONTENT;
o experiences and/or is targeted by potentially
harmful CONTACT;
o witnesses, participates in and/or is a victim of
potentially harmful CONDUCT;
o is party to and/or exploited by a potentially
harmful CONTRACT.
The 4Cs classification also distinguishes
between aggressive, sexual and value risks,
as this is helpful in retaining a balanced view of
the range of risks that children can encounter.
We note that risks to the values that shape
childhood and society are increasingly
prominent.
1 See https://core-evidence.eu/understanding-children-
online-theories-concepts-debates/
In addition to the 4Cs, the new CO:RE classi-
fication recognises important cross-cutting
risks, notably to children’s privacy, health and
fair treatment.
Keeping in mind that children’s online
opportunities are paramount, and that a host of
individual and societal protective and vulner-
ability factors mediate between risk and harm,
we hope that the new classification is insightful
for research, policy and practice that contributes
to realising children’s rights in relation to the
digital environment (UN, 2021).
Understanding online risk
In the CO:RE project, our work on theory examines
the key concepts that frame the field of research,
policy and practice. The aim is to bring together
diverse perspectives and interrogate their under-
lying assumptions in order to contribute to the
collective ambition of understanding the experi-
ences and consequences of growing up in a digital
world.
A comprehensive understanding of children’s
engagement with the digital environment requires a
balanced consideration of both risks and opportu-
nities, recognising the full range of children’s rights
in a digital world (UN, 2021). Within this broader
frame (Livingstone, 2016), risk is one of the key
concepts identified for investigation by the CO:RE
Consortium,1 and is the focus of this short report.
In a fast-changing digital ecosystem, the nature of
risk is continually evolving, sometimes exposing
children to emerging risks well before adults know
how to mitigate them. Risk has been defined as:
Uncertainty about and severity of the
consequences (or outcomes) of an activity with
respect to something that humans value.
(Aven & Renn, 2009, p. 1)
The clash of possibly severe outcomes with human
values inevitably raises concerns, and the digital
environment, in which children are often very active,
adds heightened uncertainties into the mix. No
wonder that online risk is one of the most contested
areas of children’s digital experience, concerning
many stakeholders and posing pressing challenges
for research, policy and practice.
These challenges include understanding children’s
exposure to different types of online risk, and how
regulatory, technical, social or individual interventions
can be effective in developing strategies to
cope with risk and in mitigating or minimising any
harmful consequences.
From the outset, it is vital to distinguish between
online risk and harm. Conceptually, risk is the
probability of harm, while harm includes a range of
negative consequences to the child’s emotional,
physical or mental wellbeing (Livingstone, 2013).
For example, exposure to pornography poses a risk
to a child, but it is not a certainty that there will be
harmful consequences.
Harmful outcomes depend on the nature of the risk
(whether it is more probable or more severe in its
consequences) and on the design, regulation and
management of the digital environment (privacy
settings, moderation services, access to helplines
etc.). They also depend on the child and their
circumstances, because what is problematic for one
child might not be so for another. Such differences
reflect societal factors (norms and regulations,
political priorities, economic investments, education
and family systems, etc.) as well as the individual
protective or vulnerability factors that differentiate
among children (including age, gender, digital skills,
resilience, personality, socio-economic situation
and family context).
It is paramount that our understanding of online risk
is evidence-based, prioritising robust research
conducted with and in relation to children.2 Our
understanding should also be informed by
children’s own views and experiences, and those of
practitioners responding to child online risk and
safety problems, rather than assuming or imposing
a vision grounded in adult normative expectations
or popular anxieties.
In this short report we critically examine how online
risks have been classified in order to develop a
better understanding of children’s online experi-
ences and their potential or actual real-world conse-
quences. After discovering how existing classifi-
cations have been adopted in the work of various
2 See OECD (2011); UNICEF (2017); Smahel et al. (2020).
stakeholders, we propose a new classification of
online risk to children to meet the challenges of a
changing digital environment and the practical
imperatives of policymakers and practitioners.
This new classification highlights four dimensions
related to the positioning of the child in the digital
environment, and shows how these intersect with
three dimensions regarding the nature of the risk. It
also recognises the cross-cutting dimensions of
privacy, discrimination and health risks.
The 3Cs of online risk
A comprehensive classification of online risk was
proposed by EU Kids Online in 2009 (Staksrud &
Livingstone, 2009; Staksrud et al., 2009), funded by
the European Commission’s (EC) Safer Internet
Programme (now the Better Internet for Kids
Programme).3 It was originally developed to answer
the often-asked questions regarding ‘What risks are
we talking about?’ and ‘Why should policymakers
take action?’ It sought to disaggregate risks and
raise awareness of the wide array of risks affecting
children, including, but also going beyond, the main
emphasis on pornography, grooming and
cyberbullying that dominated the agenda at the
time.
Taking a child-centred and evidence-based approach,
EU Kids Online’s classification identified
two dimensions of risk: the positioning of the child
in relation to the digital environment (as a recipient
of mass-produced content, a participant in adult-
initiated activity, and an actor in peer-to-peer
exchanges), and the nature of the risk (aggressive,
sexual, values and commercial).
This classification took a strongly child-centred
approach. It highlighted that children should not be
treated as solely vulnerable victims or protected at
all costs, including at the cost of their online
opportunities. The idea was to recognise children’s
agency as actors in a digital world, but without
holding them unduly responsible for risks online or,
especially, for the at-times harmful effects on their
wellbeing or that of others. As will be seen later, the
revised CO:RE classification recognises the child’s
perspective and agency but also the power of
3 www.betterinternetforkids.eu/nl/
societal and digital infrastructures to shape the
child’s experiences and outcomes.
The original classification was tested using data
from EU Kids Online’s two-wave European survey
with internet-using children aged 9–16 conducted in
2010 (Livingstone et al., 2011) and 2017–19
(Smahel et al., 2020). It has been incorporated into
the Global Kids Online model and its surveys of
children in 18 countries (Livingstone et al., 2019).
Taken together, these projects have generated
cross-nationally comparable data from 40,000
children in more than 35 countries, providing an
evidence base to inform policy priorities and
establishing a baseline against which socio-
technical change and policy interventions have
been positively evaluated (Morton et al., 2019).4
Figure 1 shows the classification with exemplar
risks in the cells.5
Figure 1: The EU Kids Online original 3Cs classification of online risks (Livingstone et al., 2011)

|            | Content: Receiving mass-produced content | Contact: Participating in (adult-initiated) online activity | Conduct: Perpetrator or victim in peer-to-peer exchange |
|------------|------------------------------------------|--------------------------------------------------------------|----------------------------------------------------------|
| Aggressive | Violent/gory content                     | Harassment, stalking                                          | Bullying, hostile peer activity                           |
| Sexual     | Pornographic content                     | ‘Grooming’, sexual abuse or exploitation                      | Sexual harassment, ‘sexting’                              |
| Values     | Racist/hateful content                   | Ideological persuasion                                        | Potentially harmful user-generated content                |
| Commercial | Embedded marketing                       | Personal data misuse                                          | Gambling, copyright infringement                          |
Adopting the classification
The 3Cs classification has been a classic point of
reference since 2010, much cited by
policymakers and practitioners working to maximise
children’s online opportunities and minimise their
risks of harm.
To trace its use, we conducted a search for mention
of ‘content, contact and conduct risks’ online and
among reports and documents by relevant
organisations. We found that the 3Cs of online risk
have informed the work of a range of key actors,
albeit not always with a direct source, including
UNICEF, the European Commission (EC), the
Organisation for Economic Co-operation and
4 See also www.eukidsonline.net and
www.globalkidsonline.net
5 In keeping with EU Kids Online’s commitment to
balance risks and opportunities, a parallel classi-
fication was proposed for opportunities, although it
was little noted (Livingstone et al., 2018).
Development (OECD), the Broadband Commission
for Sustainable Development (2019), the Inter-
national Telecommunication Union (ITU) (2020),
the ICT Coalition (O’Neill, 2014; Croll, 2016), and
others (O’Neill & Dinh, 2018; Green et al., 2019).6
One use is to classify the plethora of problems
reported by children who call helplines. Supported
by the EC’s Better Internet for Kids programme, the
Safer Internet Centres (SICs) provide helplines
across Europe:
Helplines provide information, advice and
assistance to children, young people and
parents on how to deal with harmful content,
harmful contact (such as grooming) and
6 We did not find classifications in the work of ECPAT
International, the European Union Agency for
Fundamental Rights (FRA), GSMA, INTERPOL, Child
Helpline International (CHI), CEO Coalition, European
Network of Ombudspersons for Children or UNESCO.
harmful conduct (such as cyberbullying or
sexting). (O’Neill & Dinh, 2018, p. 68)
Relatedly, the EC’s self-regulatory initiative, the
‘Alliance to better protect minors online’,7 called on
businesses to tackle ‘existing and emerging risks
that children and young people face online,
including: harmful content (e.g. violent or sexually
exploitative content); harmful conduct (e.g.
cyberbullying), and harmful contact (e.g. sexual
extortion)’.8
UNICEF’s flagship annual publication The state of
the world’s children focused in 2017 on children in
a digital world, and also used the classic EU Kids
Online classification, recognising that while it is vital
to address online risk, some degree of risky
opportunities can afford children the chance to learn
and become resilient, depending on their maturity
and circumstances (UNICEF, 2017).
Undoubtedly, what has proved most valuable are
the definitions of the 3Cs, as illustrated in Figure 2.
It is noteworthy that most uses of the classification
refer to just one of the two dimensions (the child in
relation to the digital environment) and discuss
content, contact and conduct. Thus, they often omit
the second dimension – the nature of the risk
(aggressive, sexual, values, commercial) – and,
perhaps in consequence, the exemplar risks
highlighted and researched by EU Kids Online,
among other researchers (Stoilova et al., 2021).
Without the second dimension, however,
commercial risks became somewhat neglected,
leading to calls for revision of the original risk classi-
fication given rising evidence of the importance of
commercial online risks to children.
7 See https://ec.europa.eu/digital-single-
market/en/alliance-better-protect-minors-online
8 This framing is problematic in eliding risk and harm, because it is precisely in the gap between them that many empowering and safety interventions focus their efforts (e.g. digital literacy).
Figure 2: The 3Cs of online risk (UNICEF, 2017)
Content risks: Where a child is exposed to
unwelcome and inappropriate content. This can
include sexual, pornographic and violent images;
some forms of advertising; racist, discriminatory or
hate speech material; and websites advocating
unhealthy or dangerous behaviours, such as self-
harm, suicide and anorexia.
Contact risks: Where a child participates in risky
communication, such as with an adult seeking
inappropriate contact or soliciting a child for sexual
purposes, or with individuals attempting to
radicalize a child or persuade him or her to take
part in unhealthy or dangerous behaviours.
Conduct risks: Where a child behaves in a way
that contributes to risky content or contact. This
may include children writing or creating hateful
materials about other children, inciting racism or
posting or distributing sexual images, including
material they have produced themselves.
Contract risks: the fourth ‘C’
Digital technologies have developed significantly
since the original typology was created, and the
online ecology affords new opportunities but also
new risks for children, particularly in relation to
commercialisation and datafication. To respond to
these changes and to reintroduce more prominently
commercial dimensions of online risk, a fourth ‘C’
(variously labelled ‘contract’, ‘commercial’ or
‘consumer’) has been suggested.
In a 2018 redevelopment of the EU Kids Online
classification, the fourth ‘C’ is conceived not as a
commercial risk, but as a ‘contract’ risk that directly
or indirectly connects children and digital providers.
This reflects the dramatic rise in the
commercialisation of children’s personal data,
arguably resulting in the ‘datafication’ of children
themselves (Mascheroni, 2020).
With the 4Cs, EU Kids Online has proposed not only
a classification but also a digital ecosystem of online
risks in which children are variously positioned and
in which the different risks interact in increasingly
complex ways. This informed the CoE’s Handbook
for policy makers on the rights of the child in the
digital environment (Livingstone et al., 2020), as
shown in Figure 3.
Figure 3: The EU Kids Online 4Cs model of online risks (Livingstone et al., 2020, p. 57, adapted from
Hasebrink et al., 2018)
Most obviously, contract risks arise when the child
‘accepts’ (including unintentionally, involuntarily or
unknowingly) the Terms of Service (or Terms and
Conditions) of a commercial provider of digital
products or services. Such contractual arrangements
can bind the child in ways that may be unfair
or exploitative, or which pose security, safety or
privacy risks of which they may be unaware or over
which they have little control or means of escape.
Related risks arise because of the data processed
by public and third sector organisations, as well as
through a host of public–private partnerships
(Stoilova et al., 2020).9 The Broadband Commis-
sion observes that children:
… have no way of understanding what they
were signing up for when they installed the app
or logged on to the site. Services and
obligations that are designed for adults must
be age-limited — so that children cannot sign
up to them without a guardian’s permission…
9 This data may be given by or taken from children’s digital activities, as well as inferred or assumed about them, or about others connected with them, through profiling operations. The fast-growing data ecosystem now provides an infrastructure not only for commercial transactions impacting on children but also for the digital products and services that afford content, contact and conduct risks. The result is that the types of risk are increasingly interlinked, as are the solutions – e.g. data protection regulation can prevent some interpersonal or social forms of online harm (Stoilova et al., 2020).
While online, children also risk spending
money without permission of parents or
caregivers and having their data harvested.
(Broadband Commission for Sustainable
Development, 2019, p. 34)
In short, contract risks arise when children use
digital services as well as when they are impacted
by digital transactions conducted by others in other
ways (e.g. through institutional uses of digitised
databases that include the child’s profile, or
algorithmic processing of personal data relating to
the child or others connected with them; see O’Neill,
2014; 5Rights Foundation, 2019).
In naming this category of risks ‘contract risks’, we
note the legal difficulties linked to contracts
involving children, as well as the fact that users (of
all ages) can be unaware of the contractual nature
of their relationship with digital service providers.
We also note that the contract that occasions a risk
may not be with the child but with their parent or
school or indeed, between a service provider and a
third party, among other possibilities in the complex
digital ecosystem. Nonetheless, on balance, we
propose that the label ‘contract’ is helpful in pointing
to a mix of marketing, data processing and other
contractual risks that merit specific attention, most
but not all of which are commercial, and some of
which are still emerging.
Cross-cutting risks
Even with the fourth ‘C’, there are dimensions of
online risk that might not fit neatly into these
categories. UNICEF’s State of the World’s Children
participatory workshops (UNICEF, 2017) revealed
that children report concerns about risks that do not
fit well with the classification, such as technological
problems and parental intrusion in their online lives.
In its draft Recommendation on children in the
digital environment, the OECD observes that:
the nature of existing risks have significantly
changed, and a number of new risks have
emerged. Technological developments and
new business models have contributed to the
change in digital devices and services, which
in themselves have also contributed to the
evolving risk landscape. (OECD, 2021, p. 4)
Do we need to go beyond the 4Cs and add new and
cross-cutting elements? Recognising that digital
service providers need to know which risks are of
greatest concern so that they can innovate in safety
by design, and building on multi-stakeholder
consultation (5Rights Foundation, 2019), the OECD
recently proposed that some risks are seen as
cross-cutting in nature – such as those related to
privacy, advanced technological features (e.g.
Internet of Things [IoTs], artificial intelligence [AI],
biometrics, predictive analytics), health and
wellbeing.
Note that the OECD builds on the EU Kids Online
classification, although it defines the fourth ‘C’ as
‘consumer risks’.10 The second dimension of the
figure lists ‘risk manifestations’ (or examples of
ways in which children might encounter potential
harms online), although it does not organise them
further. This is shown in Figure 4.
Figure 4: Children in the digital environment: revised typology of risks (OECD, 2021)
10 The OECD’s proposed category of consumer risks
includes four manifestations: (1) marketing risks; (2)
commercial profiling risks; (3) financial risks; and (4)
security risks.
Practitioner reflections
To discover how practitioners working in the field of
child online protection classify risks, and whether
they consider that revisions to the 4Cs are needed,
in October 2020 we conducted an online workshop
with 125 members from the Insafe and INHOPE
networks from over 20 countries.11
The consultation sought to:
Identify familiar and emerging online risks
affecting children across Europe, and to see
whether these are common across or specific
to different contexts or countries.
Consider whether classifications of online
risk are adopted in practice and useful, and
if so, what purpose they serve and what the
strengths and shortcomings of the available
classifications are.
Insafe and INHOPE members contributed a series
of reflections on the risk classification and its
possible development.12 After a lively discussion,
there was widespread agreement that risk
classifications are useful for practitioners.
Practical purposes of the classification of online
risks include:
Identifying the range and diversity of risks,
including identifying emerging risks.
Making comparisons and capturing trends
across risks and across time/contexts.
Systematically communicating results and
priorities to expert, policymaker and lay
audiences.
Highlighting the need for resources, budgets
and training.
Classifying the types of risks reported via input
from helplines and complaints mechanisms.
Targeting planning, interventions and
awareness-raising campaigns.
Mapping evidence to risk categories and
identifying evidence gaps.
In practice, some organisations will always generate
their own classifications – for instance, when
working bottom-up from helpline calls to track local
11 See www.betterinternetforkids.eu/practice/
articles/article?id=6745701
trends – while others will not need to classify risks in
their work.
Overall, however, the consensus was that it is
valuable to have a shared approach to answering
questions such as ‘What do we mean by online
risks?’ and ‘Which risks are emerging?’ or ‘Which
should be prioritised?’ and ‘How is my country doing
compared with others?’
For researchers, the classification is useful in
providing a common terminology by which to report
and review findings, and for mapping where
evidence is sufficient and where there are pressing
gaps. As for practitioners, researchers also
repeatedly find that risks intersect, bridging offline
and online experiences, and compounding adverse
outcomes for the more disadvantaged or vulnerable
children. But we can only report such complex
relations among risks if we first identify those risks,
so the classification remains useful.
It was also generally agreed that, to be useful, risk
classifications should prioritise:
Flexibility: the classification has to be broad
and flexible so that new risks can be added
when needed or when we need to refer to
different groups of children or address
stakeholders.
Clarity: the risks should not overlap with each
other and they should map readily onto the
reports from children or practitioners about
problematic experiences. Recognising that this
is a complex domain, the call was also to avoid
oversimplification, recognising ‘hybrid threats’
that could be classified in more than one
domain (e.g. identity theft could be linked to
contact, conduct or contract risks depending on
the circumstances; online pressures relating to
body image can have both sexual and value
dimensions; see Figure 6).
Examples: real-world examples in the cells of the
classification table are important for making the
classification readily understood and applicable to
practical work. While it is recognised that the
examples provided cannot be comprehensive,
they should map onto the actual problems
reported by children or encountered by
practitioners. They should also resonate with
12 For detailed findings, see Livingstone et al. (2021).
audiences (parents, policymakers, etc.) when
risk-related work is made public.13
Two structural changes to the online risk classi-
fication were recommended:
Inclusion of the fourth ‘C’: this is needed,
and it was widely thought that the term
‘contract’ is more inclusive than ‘commercial’ or
‘consumer’ risks in recognising that risks can
arise when the child is party to a contract with
public and third sector organisations as well as
commercial bodies, especially with the
prevalence of public–private partnerships in
complex digital ecologies.
Cross-cutting risks: the recognition of risks
that cut across several or all of the 4Cs was
also agreed, although much debated. Again,
this arises because of the complexity of the
digital ecology and also because risks are
interrelated, and they can affect multiple
dimensions of a child’s experience. The effects
on children’s health (e.g. health risks linked to
excessive screen use) were raised by multiple
contributors. So, too, were the array of privacy
risks experienced by children online, many of
which arise from data processing (and so can
be classified as contract risks) but that can also
arise in relation to content, and through inter-
personal contact and conduct.
Even after discussion, different views remained
regarding:
Country specificities: should the classifica-
tion differ by country and context to recognise
different legal, regulatory and cultural factors
that shape children’s exposure to risk? It
emerged, however, that pan-European com-
monalities are more notable than country
differences, and are often more worthy of
attention given the benefits of sharing insights
and best practice across countries, and in
working towards common solutions.
13 In this regard, the ‘risk manifestations’ in the OECD classification were found to be difficult to interpret both because they are abstract and yet overlapping, and because the legal/illegal boundary varies by country/policy context. Relatedly, the idea of cross-cutting technological risks was not taken up, possibly because all online risks have a technological dimension or because the examples given in the OECD typology are linked most closely to contract risks or, again, to privacy or discrimination.
Extending the classification with a fifth ‘C’:
a range of possibilities was suggested,
including that the classification could identify
the consequences of risk, such as health or
wellbeing, or other abuses of children’s rights;
and/or distinguish illegal (‘criminal’) from
harmful risks. However, this discussion threw
up the many differences not only by country
(e.g. in which online risks are illegal) but also
organisational sector, type and purpose. It was
agreed, therefore, that although 5Cs may be
useful on occasion, this should be left to each
country or organisation to determine for itself.
The new CO:RE classification
We propose a new CO:RE classification of online
risk, learning from the above experiences and from
consultation with the CO:RE Consortium. Risk is
recognised as relational, emerging from the
dynamic interaction between the child’s agency and
the agency of others operating in the digital
environment (including through automated pro-
cessing such as algorithms and as embedded in
digital design and operation).14
The 4Cs of online risks of harm are content, contact,
conduct and contract risks, as explained in Figure 5.
The classification has the merit, we suggest, of order
and clarity. We believe it to be fit for purpose,
recognising the multiple positions that children may
occupy in an increasingly significant and powerful
digital environment, including continually emerging
online risks, and it provides practitioner-tested
exemplars of key risks, including those that have
become familiar in recent decades and those that
are emerging and new.
The introduction of contract risks as the fourth ‘C’
incorporates risks previously labelled ‘commercial’.
14 This framing of the 4Cs overcomes the previous
potential for misunderstanding (e.g. the implication
that a child may participate willingly in contact abuse,
or that they are mere receivers of content rather than
also actively seeking it).
Figure 5: The CO:RE 4Cs of online risk
Content risks: The child engages with or
is exposed to potentially harmful content.
This can be violent, gory content, hateful or
extremist content, as well as pornographic
or sexualised content that may be illegal or
harmful, including by being age-
inappropriate. Content online may be
mass-produced or user-generated
(including by the child), and it may be
shared widely or not.
Contact risks: The child experiences or is
targeted by contact in a potentially harmful
adult-initiated interaction, and the adult
may be known to the child or not. This can
be related to harassment (including
sexual), stalking, hateful behaviour, sexual
grooming, sextortion or the generation or
sharing of child sexual abuse material.
Conduct risks: The child witnesses,
participates in or is a victim of potentially
harmful conduct such as bullying, hateful
peer activity, trolling, sexual messages,
pressures or harassment, or is exposed to
potentially harmful user communities (e.g.
self-harm or eating disorders). Typically
conduct risks arise from interactions
among peers, although not necessarily of
equal status.
Contract risks: The child is party to and/or
exploited by potentially harmful contract or
commercial interests (gambling,
exploitative or age-inappropriate marketing,
etc.). This can be mediated by the
automated (algorithmic) processing of data.
This includes risks linked to ill-designed or
insecure digital services that leave the child
open to identity theft, fraud or scams. It
also includes contracts made between
other parties involving a child (trafficking,
streaming child sexual abuse).
Cross-cutting risks: Some risks relate to
most or all of the four categories and can
have multiple manifestations across the
different dimensions (aggressive, sexual,
values). These include online risks relating
to privacy, physical or mental health,
inequalities or discrimination.
Hence the new classification now distinguishes
three dimensions in relation to the nature of the risk:
aggressive, sexual and values. It is noteworthy that
interest in value-related risks (e.g. misinformation,
radicalisation, self-harm, algorithm bias) has grown
in recent years, now attracting as much attention
and anxiety as aggressive and sexual risks.
Finally, the new classification recognises three
types of cross-cutting risk – to children’s privacy,
their health, and their fair treatment and equal
inclusion in a digital world. These risks, we suggest,
can occur in relation to any and all of content,
contact, conduct and contract risks (see Figure 6).
Importantly, it should be noted that, although some
risks are particularly cross-cutting in nature, many of
the online risks to children intersect and hybridise,
depending on the circumstances, and more so as
the digital environment evolves. Hence the classi-
fication and its exemplars are offered here as a way
of organising and opening up further investigation,
rather than as implying that risks are simple or
disconnected.
Figure 6: The CO:RE classification of online risk to children
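For readers who wish to operationalise the classification, for example to tag and count the problems reported via helplines and complaints mechanisms (one of the practical purposes noted above), the following is a minimal illustrative sketch rather than part of the CO:RE methodology. The category labels follow Figure 6; all class names, field names and example reports are hypothetical.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical encoding of the CO:RE classification (Figure 6) for tagging incident reports.
CS = {"content", "contact", "conduct", "contract"}           # the 4Cs
NATURES = {"aggressive", "sexual", "values"}                  # nature of the risk
CROSS_CUTTING = {"privacy", "health", "fair_treatment"}       # cross-cutting risks

@dataclass
class IncidentReport:
    """One report, e.g. from a helpline call; a 'hybrid' report may carry several tags."""
    description: str
    cs: set[str]             # which of the 4Cs apply
    natures: set[str]        # aggressive / sexual / values, where applicable
    cross_cutting: set[str]  # privacy / health / fair_treatment, where applicable

    def __post_init__(self) -> None:
        # Reject tags that fall outside the classification.
        assert self.cs <= CS and self.natures <= NATURES and self.cross_cutting <= CROSS_CUTTING

def tally(reports: list[IncidentReport]) -> Counter:
    """Count how often each category is tagged, e.g. to capture trends across time or contexts."""
    counts: Counter = Counter()
    for report in reports:
        counts.update(report.cs | report.natures | report.cross_cutting)
    return counts

if __name__ == "__main__":
    reports = [
        IncidentReport("cyberbullying in a group chat", {"conduct"}, {"aggressive"}, set()),
        # Hybrid threats can legitimately be tagged in more than one category.
        IncidentReport("identity theft via a phishing game", {"contact", "contract"}, set(), {"privacy"}),
    ]
    print(tally(reports))
```

Encoding the categories as closed sets keeps tagging consistent with the classification while still allowing a single hybrid case to be tagged in more than one category, in line with the practitioners’ call to avoid oversimplification.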
Conclusions
We hope this new classification serves constructive
purposes for researchers, policymakers and practi-
tioners working to minimise or manage online risks
to children’s rights and wellbeing. The classification
offers the foundations of a better understanding of
online risk to children, and it can underpin the work
of different stakeholders:
Policymakers can use it to identify what risks
matter and why, what evidence supports them,
and how they fit within or fall outside existing
regulatory frameworks.
Parents and the public can use it to learn what
can be done about the different risks and what
to look out for.
Researchers can use the classification to
develop comprehensive definitions and
measures of online risk, and to organise,
compare and report findings.
15 We sought to future-proof the classification by describing risks in broad terms rather than focusing on very particular or time-bound risks, although we appreciate they arouse concern (e.g. sharenting, influencers, deep fakes, viral challenges).
Practitioners can use it in their work to classify
and understand the problems reported to
them, to communicate with different audi-
ences, and to manage and bid for resources.
The classification will need careful framing for
different audiences, so more work needs to be done
on implementation. Moreover, as society and the
digital environment continue to change, the classi-
fication will need revisiting in the future.15
It should be noted that our focus has been on
children online, leaving others to attend to the
important risks of not being online – digital
exclusion, struggles for access and connectivity,
lack of digital skills, and so forth.
We did not focus on the factors that account for
whether, when or why some children are more likely
to encounter particular online risks than others, nor
the protective or vulnerability factors – whether
concerning children, their circumstances, the digital
environment or its regulation and management –
that account for harmful outcomes. Again, this has
been amply addressed elsewhere.16
It is also important to see risk as only one of the
dimensions of children’s online experiences,
alongside opportunities and among many factors
that intersect to influence children’s outcomes
(Livingstone, 2016). Indeed, while the digital
environment affords children a range of risks, it also
offers many opportunities to benefit, and this merits
a parallel analysis. If society becomes overpro-
tective, it can inadvertently undermine the very
opportunities for which society provides children
with internet access. We will address the 4Cs of on-
line opportunities in our future work.
References
5Rights Foundation (2019). Towards an internet safety
strategy. 5Rights.
https://5rightsfoundation.com/uploads/final-5rights-
foundation-towards-an-internet-safety-strategy-january-
2019.pdf
Aven, T., & Renn, O. (2009). On risk defined as an event
where the outcome is uncertain. Journal of Risk
Research, 12(1), 1–11.
Broadband Commission for Sustainable Development
(2019). Child online safety: Minimizing the risk of
violence, abuse and exploitation online. ITU and
UNESCO.
https://unesdoc.unesco.org/ark:/48223/pf0000374365?p
osInSet=1&queryId=1a93f340-75cf-42d8-adfe-
4f4b718fcad3
Croll, J. (2016). Let’s play it safe: Children and youths in
the digital world. Assessment of the emerging trends and
evolutions in ICT services. ICT Coalition.
www.ictcoalition.eu/medias/uploads/source/available%2
0here.pdf
Green, A., Wilkins, C. & Wyld, G. (2019). Keeping
children safe online. Nominet, NPC and Parent Zone.
www.thinknpc.org/wp-content/uploads/2019/07/Keeping-
Children-Safe-Online-NPC-Nominet-ParentZone-
2019.pdf
Hasebrink, U., Rechlitz, M., Dreyer, S., Brüggen, N.,
Gebel, C. & Lampert, C. (2018). What are you concerned
about? Classifying children's and parents' concerns
regarding online communication. ECREA Conference,
November. https://leibniz-
hbi.de/uploads/media/default/cms/media/hl9lir5_2018-
11-01_ECREA_Hasebrink%20et%20al_What
%20are%20you%20concerned%20about.pdf
ITU (International Telecommunication Union) (2020).
Child Online Protection (COP) guidelines. www.itu-cop-
guidelines.com/
16 OECD (2011); Livingstone et al. (2012); O’Neill and
Dinh (2018); Smahel et al. (2020); Stoilova et al.
(2021).
Livingstone, S. (2013). Online risk, harm and
vulnerability: Reflections on the evidence base for child
internet safety policy. ZER: Journal of Communication
Studies, 18(35), 13–28. http://eprints.lse.ac.uk/62278/
Livingstone, S. (2016). A framework for researching
Global Kids Online: Understanding children’s well-being
and rights in the digital age. London: Global Kids Online.
www.globalkidsonline.net/framework
Livingstone, S., Haddon, L., Görzig, A. & Ólafsson, K.
(2011). Risks and safety on the internet: The perspective
of European children: Full findings and policy implications
from the EU Kids Online survey.
http://eprints.lse.ac.uk/33731/
Livingstone, S., Haddon, L. & Görzig, A. (eds) (2012).
Children, risk and safety online: Research and policy
challenges in comparative perspective. Policy Press.
Livingstone, S., Kardefelt-Winther, D. & Saeed, M.
(2019). Global Kids Online comparative report. UNICEF
Office of Research – Innocenti, Florence. www.unicef-
irc.org/publications/1059-global-kids-online-comparative-
report.html
Livingstone, S., Lievens, E. & Carr, J. (2020). Handbook
for policy makers on the rights of the child in the digital
environment. Council of Europe.
https://rm.coe.int/publication-it-handbook-for-policy-
makers-final-eng/1680a069f8
Livingstone, S., Mascheroni, G. & Staksrud, E. (2018).
European research on children’s internet use: Assessing
the past and anticipating the future. New Media & Society,
20(3), 1103–1122, http://eprints.lse.ac.uk/68516
Livingstone, S., Stoilova, M. & Hopwood, K. (2021).
Classifying known and emerging online risks for children:
A child practitioners’ perspective. CO:RE–Children
Online: Research and Evidence. https://core-
evidence.eu/wp-content/uploads/2021/02/WP5_online-
forum-III_event-report.pdf
Mascheroni, G. (2020). Datafied childhoods:
Contextualising datafication in everyday life. Current
Sociology, 68(6), 798–813.
doi:10.1177/0011392118807534
Morton, S., Grant, A., Cook, A., Berry, H., McMellon, C.,
Robbin, M. & Ipince, A. (2019). Children’s experiences
online: Building global understanding and action. UNICEF
Office of Research – Innocenti. www.unicef-
irc.org/publications/1065-childrens-experiences-online-
building-global-understanding-and-action.html
O’Neill, B. (2014). First report on the implementation of
the ICT principles. Dublin Institute of Technology & ICT
Coalition.
www.ictcoalition.eu/medias/uploads/source/First%20Re
port%20on%20the%20Implementation%20of%20the%2
0ICT%20Principles.pdf
O’Neill, B. & Dinh, T. (2018). The Better Internet for Kids
policy map: Implementing the European Strategy for a
Better Internet for Children in European member states.
www.betterinternetforkids.eu/documents/167024/263734
6/BIK+Map+report+-+Final+-+March+2018/a858ae53-
971f-4dce-829c-5a02af9287f7
OECD (Organisation for Economic Co-operation and
Development) (2011). Recommendation of the Council
on the protection of children online.
https://legalinstruments.oecd.org/en/instruments/OECD-
LEGAL-0389
OECD (2021). Children in the digital environment:
Revised typology of risks. OECD Digital Economy
Papers, No. 302. https://doi.org/10.1787/9b8f222e-en
Smahel, D., Machackova, H., Mascheroni, G., Dedkova,
L., Staksrud, E., Ólafsson, K., Livingstone, S. &
Hasebrink, U. (2020). EU Kids Online 2020: Survey results
from 19 countries.
https://doi.org/10.21953/lse.47fdeqj01ofo
Staksrud, E. & Livingstone, S. (2009). Children and online
risk: Powerless victims or resourceful participants?
Information, Communication and Society, 12(3): 364–
387. http://eprints.lse.ac.uk/30122/
Staksrud, E., Livingstone, S., Haddon, L. & Ólafsson, K.
(2009). What do we know about children’s use of online
technologies: A report on data availability and research
gaps in Europe (2nd edn). EU Kids Online.
http://eprints.lse.ac.uk/24367/
Stoilova, M., Livingstone, S. & Khazbak, R. (2021).
Investigating risks and opportunities for children in a
digital world: A rapid review of the evidence on children’s
internet use and outcomes. Innocenti Discussion Paper
2020-03. UNICEF Office of Research – Innocenti.
www.unicef-irc.org/publications/1183-investigating-risks-
and-opportunities-for-children-in-a-digital-world.html
Stoilova, M., Livingstone, S. & Nandagiri, R. (2020).
Digital by default: Children’s capacity to understand and
manage online data and privacy. Media and
Communication, 8(4), 197–207.
www.cogitatiopress.com/mediaandcommunication/articl
e/view/3407
UN (United Nations) Committee on the Rights of the Child
(2021). General Comment 25 on children’s rights in
relation to the digital environment. Geneva: UN.
https://tbinternet.ohchr.org/_layouts/15/treatybodyextern
al/TBSearch.aspx?Lang=en&TreatyID=5&DocTypeID=1
1
UNICEF (2017). State of the world’s children: Children in
a digital world.
www.unicef.org/publications/index_101992.html
About the authors
Sonia Livingstone FBA,
OBE is a professor in the
Department of Media and
Communications, London
School of Economics and
Political Science (LSE). Her
20 books include Parenting for
a digital future: How hopes and fears about
technology shape children’s lives. She directs the
Digital Futures Commission (with 5Rights
Foundation) and Global Kids Online (with UNICEF
Office of Research – Innocenti) and has advised the
UN Committee on the Rights of the Child, European
Commission, European Parliament, Council of
Europe, ITU, OECD and others on children’s risks
and rights in a digital age.
See www.sonialivingstone.net
Mariya Stoilova is a post-
doctoral researcher at the
Department of Media and
Communications, London
School of Economics and
Political Science (LSE). Her
work falls at the intersection of
child rights and digital technology, focusing
particularly on the opportunities and risks of digital
media use in the everyday lives of children and
young people, data and privacy online, digital skills,
and pathways to harm and wellbeing. Mariya’s work
incorporates multi-method evidence generation and
cross-national comparative analyses. For projects
and publications, see http://www.lse.ac.uk/media-
and-communications/people/research-staff/mariya-
stoilova
... Im sogenannten 4C-Modell der Online-Risiken für Kinder und Jugendliche werden vier Risikobereiche unterschieden (Livingstone & Stoilova, 2021): Kontakt mit potenziell schädlichen Inhalten (Content/Inhalt), kommerzielle Ausbeutung von persönlichen Daten (Contract/Nutzungsbedingungen), Erfahrungen mit potenziell schädlichen Kontaktpersonen (Contact/Kontakt) sowie Beeinflussung durch potenziell schädliche Kommunikationen und Handlungen (Conduct/ Verhalten). In Politik, Praxis und Forschung wurde in den letzten Jahrzehnten den potenziell schädlichen Inhalten (Content) viel Aufmerksamkeit gewidmet. ...
... B. körperliche Unversehrtheit, psychische Gesundheit, Menschenwürde). Dabei ist es wichtig, zwischen Risiko (risk) und Schädigung (harm) zu differenzieren: Risiko bezieht sich auf eine mögliche Schädigung oder die Wahrscheinlichkeit, mit der ein Schaden aus einer Aktivität entstehen kann; Schädigung hingegen auf tatsächlich eintretende negative Konsequenzen für das emotionale, physische oder psychische Wohlbefinden (Livingstone & Stoilova, 2021 ...
... Im Kontext der Internetnutzung wird eine Vielzahl an Risiken diskutiert, mit denen Kinder und Jugendliche konfrontiert werden können (Livingstone & Stoilova, 2021). Risiken bezeichnen die Unsicherheit über und die Schwere von Konsequenzen bezüglich persönlich wichtiger Aspekte, die aus einer Aktivität resultieren. ...
Book
Full-text available
Soziale Interaktionen von Kindern und Jugendlichen finden längst nicht mehr nur auf dem Schulhof statt, sondern zunehmend auch in virtuellen Räumen. Dieses Buch beleuchtet die zentralen Risiken, mit denen Kinder und Jugendliche bei ihren Interaktionen im Internet konfrontiert werden können: Cybermobbing, Online-Hatespeech, non-konsensuales Sexting und Cybergrooming. Auf der Grundlage entwicklungs- und medienpsychologischer Befunde und Theorien werden Gemeinsamkeiten und Besonderheiten dieser Risiken, Präventions- und Interventionsansätze sowie Empfehlungen für Forschung und Praxis vorgestellt. Das Buch bietet einen wissenschaftlich fundierten und praxisrelevanten Überblick zu aktuellen Themen der Online-Nutzung im Kindes- und Jugendalter.
... These data are similar to those found in the population of children with IDs or ASD [23,38]. In terms of risks, these can come from four sources: content (receiving or sending content inappropriate for minors, such as pornography, drugs, or violence), contacts (receiving or making contact inappropriate for a minor, such as sexual advances by an adult or harassment), behaviours (engaging in or receiving inappropriate behaviour, such as cyberbullying), or contracts (exposure to misleading advertising, commercial persuasion, or unwanted exploitation of personal data) [39]. These authors also consider some risks that cut across the different categories (limitation of personal relationships or interactions, physical health, development of Internet addictions). ...
... (1) Safety in the use of the Internet (6 items [39], questions were asked about exposure to certain hazards related to content, contact, behaviour, contract, as well as certain cross-cutting risks. ...
Article
Full-text available
People with disabilities have difficulties in digital inclusion, although it is considered essential for participation in the knowledge-based society. This form of inclusion seeks to ensure equal opportunities in the use of digital technologies and their active participation as citizens in the virtual world. The educational environment is key to this digital inclusion, but teacher attitudes and training influence its effectiveness. The aim of this study was to explore, through a descriptive cross-sectional study, Chilean teachers’ perspectives on the safety, benefits, and risks of the Internet for students with intellectual disabilities or autism spectrum disorder. A questionnaire was administered to 211 pre-service and in-service teachers. The results highlight the perception of the Internet as an unsafe environment for these students, where risks prevail over potential benefits. These findings underline the need to improve both initial and ongoing teacher training in digital skills and risk mediation for these students in order to ensure the digital participation of all students.
... Finally, contract risks involve exposure to misleading advertising, commercial manipulation or misuse of personal data. Recently, an additional category has been described as crosscutting risks, which can affect all categories, such as the limitation of personal relationships, physical health problems and the development of Internet addictions [19]. ...
... • Perceived level of competence in Internet use: using a scale from 0 to 10 (not competent-fully competent). • Perceived level of competence to prevent and/or manage Internet risks (teacher mediation): using a 5-point Likert scale (1 = not competent, 5 = fully competent), they were asked to rate their level of competence to prevent or manage the five types of risks proposed by Livingstone and Stoilova [19] (content, contact, conduct, contract and cross-cutting risks). The analysis of the internal consistency of this scale, measured by Cronbach's Alpha coefficient, yielded a value of 0.93. ...
Article
Full-text available
Teachers need digital skills to optimise the educational benefits of the Internet and mediate its risks. This study investigates digital and mediation competencies among teachers, focusing on their preparation for guiding students in the safe use of the Internet. Using a descriptive, cross-sectional survey design, data were collected from 550 Spanish teachers across various educational settings and levels. The findings reveal that while nearly half of the teachers received some online safety training, the average training duration was relatively low. Notably, differences emerged based on school type and educational stage, with secondary school and special education teachers receiving more training. Teachers generally reported moderate to high digital competence, though those with greater teaching experience perceived themselves as less digitally competent. Additionally, teachers felt only moderately prepared to mediate risks. Special education teachers expressed a higher perceived competence in addressing certain risks than their counterparts in regular education. These findings underscore the need for enhanced, context-specific training in digital safety and mediation skills across educational contexts, addressing both technical aspects and broader digital safety competencies. The study concludes by recommending ongoing, accessible training, particularly for experienced teachers, to align with evolving digital challenges in education.
... The children explored examples of 'kind' and 'unkind' behaviours they had experienced or witnessed online, and the resultant conduct (Livingstone & Stoilova, 2021) or emotional impact of those behaviours. For a small number of child co-researchers, their experiences were heavily mediated by the exposure to digital environments during lockdowns (during the height of the COVID-19 pandemic): ...
Article
To explore children's online conduct from a child's digital rights perspective , a small-scale child participatory study was conducted. Eighteen children (5 males; 13 females) in the UK, aged 10-11 years old, participated by conducting interviews in pairs with one another. All children had previously engaged with a lesson to develop their interview skills, framed more broadly within the usual curriculum. A reflexive, organic, thematic analysis was conducted which identified four themes: i) Online experiences and associated conduct, ii) Emotions and Interoception, iii) Agency and action, and iv) Mentalising. The findings demonstrated children's competencies using digital technology including their insights regarding how their social online interactions and conduct can make themselves and others feel. In conclusion, the research highlights the need for larger-scale studies in partnership with academics, technology companies, political representatives and most importantly children, to be solution-focused to ensure children's digital rights are enacted. ARTICLE HISTORY
... Our study employs the CO:RE classification of online risk by Livingstone and Stoilova (2021), which views online risk as arising from the interaction between a child's agency and the digital environment, including algorithms. This classification outlines four dimensions of risk: content, contact, conduct, and contract. ...
Article
Full-text available
Digital skills play a crucial role in shaping adolescents’ online experiences, serving both as a shield against harmful content and as a gateway to accessing it. Previous studies on online harmful content have predominantly focused on general exposure, overlooking the distinction between intended and unintended exposure (i.e., whether the adolescent deliberately sought out the content or was unexpectedly exposed to it). Moreover, existing studies did not consider the role of adolescents’ digital skills. This exploratory study aims to newly examine the role of the subtypes of digital skills in the intended and unintended exposure to harmful online content among adolescents from four European countries, as well as the influence of protective and risky factors according to the problem behavior theory. Using multinomial logistic regression, a sample of 3,934 adolescents aged 12 to 17 ( M = 14.4, SD = 1.3; 51% boys) from Estonia, Finland, Italy, and Poland was examined. The results show different associations with respect to the type of exposure. For instance, knowledge skills and technical/operational skills were found to be associated with unintentional exposure to harmful online content, but not with intentional exposure. Similarly, the protective role of the family was suggested in intentional exposure but not in unintentional exposure. These findings underscore the importance of raising awareness among educators and parents regarding the dual nature of digital skills. Rather than solely emphasizing their protective potential, we shall acknowledge and address the potential risks associated with certain facets of digital proficiency.
... Second, stakeholders should remain vigilant of policies toward greater transparency and privacy in online education, as contracts between children and digital providers are a source of inherent risk (Livingstone & Stoilova, 2021; Regan & Jesse, 2019). Even MAs, which seek to assuage modern anxieties about children's online activity, could play a role in the "dramatic rise in the commercialization of children's personal data" (p. ...
Article
Monitoring applications (MAs) use digital and online tools to collect and track data on student behavior, and they have become increasingly popular among schools. Empirical research on these complex surveillance platforms is scant, and little is known about the efficacy or impact that they have on students. This study used a multi-method investigation. First, it conducted a thematic analysis of the purveyances of popular MAs, examining their datafication of students through three interrelated processes: reduction, abstraction, and individualization. Second, it surveyed students who were in high school during the COVID-19 pandemic about their experience with MAs during that time, using descriptive and inferential statistics to better understand the student perspective. This study offers several approaches for increasing awareness of MAs and their potential to datafy, as well as recommendations for further research on the efficacy and impact of MAs.
... These platforms can amplify marginalized voices and be powerful communication tools for health and information [13]. The 4Cs framework [14] recognizes four online risk categories: Content, Contact, Conduct, and Contract. Many of these risks are amplified in MWCs [2,3,15], including concerns such as sexual exploitation, violence, and exposure to harmful content. ...
Chapter
With the proliferation of technology, the digital environment has become integral to youth globally. We provide an overview of research from Majority World countries (MWCs), where most children and adolescents are located and who represent the fastest growing demographic of users. Digital inequalities in terms of access, use and skills, and risks and benefits impact the online and offline lives of youth in MWCs. We focus on micro-, meso-, and macro-level factors, including gender, the role of parents and schools in mediation and scaffolding, and digital literacy, and we review recent regulatory initiatives. We highlight the unique challenges and opportunities that youth in MWCs face in navigating the digital environment and how these have been impacted by the COVID-19 pandemic. We identify areas for future research, including the need for more focus on children and younger adolescents, contextualized approaches that incorporate qualitative methodologies, and attention to the long-term consequences of the pandemic on youth’s digital technology use and well-being. Research and practical recommendations are included. We argue that a better understanding of youth’s experiences in MWCs can help inform more effective and equitable policies and programs that leverage the potential of digital technologies to improve the lives of youth globally, especially as regulatory initiatives gain momentum.
... On the one hand, the limitation can prevent children from engaging in risky contact and viewing risky content [79]. On the other hand, it may also prevent children from engaging in risky behavior such as creating hateful materials, inciting racism, or posting or distributing sexual content [67]. Given the intricacies of UGC creation and the ethical and governance challenges it entails, it may be worth reconsidering whether to implement age limits for creators to publish UGCs on such platforms. ...
Article
An increasing number of game platforms, such as Roblox, enable game creators to develop user-generated games (UGGs). Yet, these platforms often come under scrutiny for hosting UGGs that contain harmful content, ranging from sexually explicit material to Nazi-themed roleplay. Limited attention has been paid to how harmful UGGs are ideated by game creators. To address this question, we studied an online Roblox creator community, where Roblox creators collectively engage in design ideation to brainstorm design ideas for UGGs. Through an inductive thematic analysis, we found three primary ways in which Roblox creators' design ideation becomes risky, including how Roblox creators generate risky game design ideas, navigate through policy boundaries to develop these ideas, and share strategies for bypassing moderation. Based on our findings, we discuss ethical and governance challenges facing user-generated games. We propose design implications to support game creators in developing ethical game design ideas and safe game designs.
Book
Online risk behaviour, online disinhibition effect, cyberbullying and social skills
Article
Age assurance is a way to prevent children accessing content, products or services that are potentially harmful to them, ranging from gambling services, alcohol and tobacco to, increasingly, certain products and services online. Now that children’s lives are mediated by digital technologies, policymakers are deliberating over the legal, technical and practical challenges. These have been little examined from the perspective of children’s rights. By combining legal and social research methods, this article examines the legal requirements for age assurance in Europe, assesses compliance by companies and reveals the consequences for family life. In law and practice, we show that age assurance is often ineffective in protecting children from online risk of harm. Further, as currently implemented, it puts children’s other rights at risk – to non-discrimination, to privacy, to be heard, to remedy, and their civil rights and freedoms. We identify promising directions for the use of age assurance in child online protection, focusing on European policy, regulators and civil society actors.
Article
How do children understand the privacy implications of the contemporary digital environment? This question is pressing as technologies transform children’s lives into data which is recorded, tracked, aggregated, analysed and monetized. This article takes a child-centred, qualitative approach to charting the nature and limits of children’s understanding of privacy in digital contexts. We conducted focus group interviews with 169 UK children aged 11–16 to explore their understanding of privacy in three distinct digital contexts – interpersonal, institutional and commercial. We find, first, that children primarily conceptualize privacy in relation to interpersonal contexts, conceiving of personal information as something they have agency and control over as regards deciding when and with whom to share it, even if they do not always exercise such control. This leads them to some misapprehensions about how personal data is collected, inferred and used by organizations, be these public institutions such as their schools or commercial businesses. Children’s expectation of agency in interpersonal contexts, and their tendency to trust familiar institutions such as their schools, make for a doubly problematic orientation towards data and privacy online in commercial contexts, leading to a mix of frustration, misapprehension and risk. We argue that, since the complexity of the digital environment challenges teachers’ capacity to address children’s knowledge gaps, businesses, educators, parents and the state must exercise a shared responsibility to create a legible, transparent and privacy-respecting digital environment in which children can exercise genuine choice and agency.
Technical Report
This report presents the findings from a survey of children aged 9–16 from 19 European countries. The data were collected between autumn 2017 and summer 2019 from 25,101 children by national teams from the EU Kids Online network. A theoretical model and a common methodology to guide this work were developed during four phases of the network’s work, and are discussed at the outset of this report. The main findings are summarised for the key topic areas, which correspond to the factors identified in the theoretical model: Access, Practices and skills, Risks and opportunities, and Social context.
Article
In this article, we reflect critically on the research agenda on children’s Internet use, framing our analysis using Wellman’s three ages of Internet studies and taking as our case study the three phases of research by the EU Kids Online network from 2006 to 2014. Following the heyday of moral panics, risk discourses and censorious policy-making that led to the European Commission’s first Internet Action Plan 1999–2002, EU Kids Online focused on conceptual clarification, evidence review and debunking of myths, thereby illustrating the value of systematic documentation and mapping, and grounding academic, public and policy-makers’ understanding of ‘the Internet’ in children’s lives. Consonant with Wellman’s third age, which emphasizes analysis and contextualization, the EU Kids Online model of children’s online risks and opportunities helps shift the agenda from how children engage with the Internet as a medium to how they engage with the world mediated by the Internet.
Article
After a decade or more in which research has examined the opportunities and risks encountered by children on the internet, this article assesses the contribution and challenges of producing an evidence base to inform policy in a hotly contested field. It offers critical analysis and new findings, drawing on the EU Kids Online project, a major study of children’s internet use in 25 countries. Building on the distinction between risk (a calculation based on the probability and severity of harm) and harm itself, research and policy on children’s online risk face particular problems in measuring harm and, therefore, risk. Further complications arise from the interdependencies among opportunity, risk-taking, resilience and vulnerability. Such complexities must be recognised if we are to advance beyond the entrenched positions that so often polarise debate.
Book
As internet use extends to younger children, there is an increasing need for research to focus on the risks young users are experiencing, as well as the opportunities, and on how they should cope. With expert contributions from diverse disciplines and a uniquely cross-national breadth, this timely book examines the prospect of enhanced opportunities for learning, creativity and communication set against the fear of cyberbullying, pornography and invaded privacy by both strangers and peers. Based on an impressive in-depth survey of 25,000 children carried out by the EU Kids Online network, it offers wholly new findings that extend previous research and counter both the optimistic and the pessimistic hype. It argues that, in the main, children are gaining the digital skills, coping strategies and social support they need to navigate this fast-changing terrain. But it also identifies the struggles they encounter, pinpointing those for whom harm can follow from risky online encounters. Each chapter presents new findings and analyses to inform both researchers and students in the social sciences and policy makers in government, industry or child welfare who are working to enhance children's digital experiences.