“Hey Google is it OK if I eat you?”:
Initial Explorations in Child-Agent Interaction
Stefania Druga*
MIT Media Lab
Cambridge, MA, 02139 USA
Cynthia Breazeal
MIT Media Lab
Cambridge, MA, 02139 USA
Randi Williams*
MIT Media Lab
Cambridge, MA, 02139 USA
Mitchel Resnick
MIT Media Lab
Cambridge, MA 02139 USA
Permission to make digital or hard copies of part or all of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that
copies bear this notice and the full citation on the first page. Copyrights
for third-party components of this work must be honored. For all other
uses, contact the Owner/Author.
IDC '17, June 27-30, 2017, Stanford, CA, USA
© 2017 Copyright is held by the owner/author(s).
ACM ISBN 978-1-4503-4921-5/17/06.
Abstract
Autonomous technology is becoming more prevalent in our
daily lives. We investigated how children perceive this
technology by studying how 26 participants (3-10 years old)
interact with Amazon Alexa, Google Home, Cozmo, and
Julie Chatbot. We refer to them as “agents“ in the context of
this paper. After playing with the agents, children answered
questions about trust, intelligence, social entity, personality,
and engagement. We identify four themes in child-agent
interaction: perceived intelligence, identity attribution,
playfulness and understanding. Our findings show how
different modalities of interaction may change the way
children perceive their intelligence in comparison to the
agents’. We also propose a series of design considerations
for future child-agent interaction around voice and prosody,
interactive engagement and facilitating understanding.
Author Keywords
Child-agent interaction; interaction modalities
ACM Classification Keywords
H.5.2 [Information interfaces and presentation (e.g., HCI)]:
Interaction Styles
Introduction
Prior research in child-computer interaction explores the
role of computers in shaping the way children think. In her
book “The Second Self”, Sherry Turkle describes them as
intersectional objects that allow children to investigate
“matter, life, and mind” [7]. Similarly, emerging autonomous
technologies invite children to think about what it means to
be intelligent and conscious. These technologies are
becoming increasingly prevalent in our daily lives. Yet, there
is little research on how children perceive them. This paper
presents the preliminary findings of a study on how children,
ages 3 to 10, interact with modern autonomous agents. In
our results, we analyze how and why children ascribe
particular attributes to agents. We address how different
modalities affect children’s perception of their own
intelligence.
Figure 1: Alexa
Internet search, games, jokes
Morphology: cylinder, LED ring, female voice
Inputs: voice
Outputs: voice, LEDs
Figure 2: Google Home
Internet search, games, jokes
Morphology: cylinder, LED ring, female voice
Inputs: voice, touch
Outputs: voice, LEDs
Related Work
In the fields of human-computer interaction (HCI),
human-robot interaction (HRI), and applied developmental
psychology, there is extensive research on how children
perceive robotic and conversational agents. Tanaka found
that children build relationships with these agents the same
way that they build relationships with people [6]. Kahn found
that they consider robots as ontologically different from
other objects, including computers [3]. He also saw that age
and prior experience with technology led to more thoughtful
reasonings about robots’ nature [3]. Turkle argues that the
intelligence of computers encourages children to revise their
ideas about animacy and thinking [7]. She observed that
children attributed intent and emotion to objects that they
could engage with socially and psychologically. Bernstein’s
study revealed that voice, movement, and physical
appearance are other factors that children consider when
deciding how to categorize agents [2].
To understand why children may attribute characteristics of
living beings to inanimate objects, we must consider Theory
of Mind development [3]. Those with a developed Theory of
Mind can perceive the emotional and mental states of other
beings. An analysis of 178 false-belief studies led to a
model that showed that across cultures, Theory of Mind
usually develops when a child is 3 to 5 years old [8].
This finding indicates that age is a significant factor in
children’s reasoning about technology. Through our work,
we build on prior research by analyzing how children
interact with and perceive modern agents. In our analysis,
we consider developmental differences between children
and functional differences between agents.
Method
Participants were randomly divided into four groups, roughly
4-5 in each group, then groups were assigned to a station.
We had four stations, one for each agent. Each station had
enough devices for participants to interact alone or in pairs.
At the stations researchers introduced the agent, then
allowed participants to engage with it. After playing with the
first agent for 15 minutes, participants rotated to the next
station to interact with a second agent. Each session of
structured play was followed by a questionnaire, in the form
of a game, analyzing children’s perception of the agent. We
interviewed 5 children (3 boys and 2 girls), to further probe
their reasoning. We selected children that played with
different agents and displayed different interaction behavior.
Between interviews, participants were allowed to free-play
with the agents. This methodology is based on the prior
research work of Turkle [7].
This study consisted of 26 participants. Of these
participants, four were not able to complete the entire study
due to lack of focus. The children were grouped according
to their age range to gather data about how reasoning
changes at different development stages. There were 12
(46.15%) children aged 3 or 4 years old classified as our
younger children group. The older children group contained
14 (53.85%) children between the ages of 6 and 10. Of the
26 participants, 16 offered information about previous
technology experience. A total of 6 (37.5%) of the
respondents had programmed before and 9 (56.25%) had
interacted with similar agents (Google, Siri, Alexa, and
Cortana) previously.
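As a quick sanity check, the group percentages reported above follow directly from the stated counts. A minimal sketch (the helper function name is ours):

```python
# Recompute the participant percentages stated in the text, using only
# the counts given there (26 participants total; 16 reported experience).
def pct(count: int, total: int) -> float:
    """Percentage of `total`, rounded to two decimals as in the paper."""
    return round(100 * count / total, 2)

younger = pct(12, 26)      # children aged 3-4
older = pct(14, 26)        # children aged 6-10
programmed = pct(6, 16)    # had programmed before
used_agents = pct(9, 16)   # had used similar agents before

print(younger, older, programmed, used_agents)
```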
Figure 3: Cozmo
Games, autonomous exploration, object/face recognition
Morphology: robot (wheels, lift), garbled voice, block accessories
Inputs: visual information, distance, app
Outputs: sound, movement, LED eyes, block LEDs, app
Figure 4: Julie
Conversations and jokes
Morphology: Android app, tablet TTS voice
Inputs: text, voice
Outputs: text, voice
The children interacted with four different agents: Amazon’s
Echo Dot “Alexa” (Figure 1), Google Home (Figure 2),
Anki’s Cozmo (Figure 3), and Julie Chatbot (Figure 4).
These agents were chosen for their commercial prevalence
and playful physical or conversational characteristics. Alexa
is a customizable, voice-controlled digital assistant. Google
Home is also a voice-controlled device, for home
automation. Cozmo is an autonomous robot toy. Finally,
Julie is a conversational chatbot.
The Monster game: interactive questionnaire
After interacting with an agent, participants completed a
ten-item questionnaire in the form of a monster game. The
game was adapted from the work of Park and was used to
keep younger children engaged [4]. In the game, two
monsters would share a belief about an agent, then the
children placed a sticker closer to the monster they most
agreed with. We vetted the usability of this method and the
clarity of the questions in a pilot test. The questions queried
how children felt about the agent in terms of trust,
intelligence, identity attribution, personality, and
engagement, and were adapted from a larger questionnaire
found in the work of Bartneck [1].
Results
Overall, most participants agreed that the agents are
friendly (Figures 5, 6) and trustworthy (Figures 7, 8). The
younger children (3-4 years old) experienced difficulty
staying focused; the group was restless after interacting
with the conversational and chat agents, but they very
much enjoyed playing with Cozmo. The older children
(6-10 years old) enjoyed interacting with all the agents,
although they had their favorites based on the different
modalities of interaction. Responses varied depending on
the agent and the age of the participant. These are
discussed below as: perceived intelligence, identity
attribution, playfulness, and understanding.

Age      Response      Alexa  Google  Cozmo  Julie
Younger  Smarter       20%    0%      40%    60%
Younger  Neutral       20%    100%    0%     40%
Younger  Not as smart  60%    0%      60%    0%
Older    Smarter       100%   43%     20%    —
Older    Neutral       0%     57%     40%    —
Older    Not as smart  0%     0%      40%    —

Table 1: How children perceive the agents’ intelligence compared
to their own. Note: only one session of interaction and
questionnaire data with Julie was collected.
“She probably already is as smart as me” - perceived intelligence
Younger participants had mixed responses, but many older
participants said that the agents are smarter than them.
The older participants often related the agent’s intelligence
to its access to information.
For example, Violet¹ (7 years old) played with Alexa and
Google Home during structured playtime, then Julie and
Cozmo during free play. She systematically analyzed the
intelligence of the agents by comparing their answers on a
topic she knew a lot about: sloths. “Alexa she knows
nothing about sloths, I asked her something about sloths
but she didn’t answer... but Google did answer so I think
that’s a little bit smarter because he knows a little more”.
Mia (9.5 years old) played with Julie and Google Home
during the structured play, then tried all the other agents
during free play. She used a similar strategy of referring to
the things she knows when trying to probe the intelligence
of the agent: “[Google Home] probably already is as smart
as me because we had a few questions back and forth
about things that I already know”.

¹ Participants’ names are changed.
Figure 5: How children perceive
friendliness of agents by age
Figure 6: How children perceive
friendliness by agent
Figure 7: How children perceive
truthfulness of agents by age
Figure 8: How children perceive
the truthfulness of each agent
“What are you?” - identity attribution and playfulness
We observed probing behavior, where the participants were
trying to understand the agent. Younger children tried to
understand the agents as they would a person: “[Alexa],
what is your favorite color?”, “Hey Alexa, how old are you?”.
Older children tested what the agent would do when asked
to perform actions that humans do: “Can you open doors?”,
“[Cozmo] can you jump?”. They also asked the agents how
they worked, “Do you have a phone inside you?”, and how
they defined themselves, “What are you?”. The children
used gender interchangeably when talking about the
agents. Gary and Larry referred to Cozmo as both “he” and
“she”. “I don’t really know which one it is” - Gary. “It’s a
boy... maybe because of the name but then again you could
use a boy name for a girl and a girl name for a boy” - Larry.
Later, they concluded that Cozmo “is a bob-cat with eyes”.

Multiple agents of the same type were provided during the
playtest, which led the participants to believe that different
Alexas could give different answers: “She doesn’t know the
answer, ask the other Alexa”. They would also repeatedly
ask the same question to a device to see if it would change
its answer.

As they became familiar with the agents, we observed
children playfully testing their limits. A 6-year-old girl asked
several agents “Is it OK if I eat you?”. Other children offered
the agents food and asked if they could see the objects in
front of them: “Alexa what kind of nut am I holding?”
“Sorry I don’t understand that question” - understanding
We observed that the majority of participants experienced
various challenges getting the agents to understand their
questions. Several children tried to raise their voices or
insert more pauses into their questions, adjustments that
may help people understand them. However, these did not
always lead to better recognition by the agents.

Often, when the agents would not understand them,
children would try to reword their question or make it more
specific. Throughout the playtest, we observed shifts in
children’s strategies, under the encouragement of the
facilitators and parents or by observing strategies that
worked for other children.

Despite these challenges, participants became fluent in
voice interaction quickly and even tried to talk with the
agents that didn’t have this ability (e.g., Cozmo).
Design Considerations
Building on prior work and based on the observations from
this study, we propose the following considerations for
child-agent interaction design: voice and prosody,
interactive engagement, and facilitating understanding.
Voice and prosody
Voice and tone made a difference in how friendly
participants thought the agent was (Figure 6). When asked
about differences between the agents, Mia replied, “I liked
Julie more because she was more like a normal person, she
had more feelings. Google Home was like ‘I know
everything’... Felt like she [Julie] actually understood what I
was saying to her”.

Her favorite interaction with the agents was the following
exchange she had with Julie: “She said ‘I’m working on it’... I
said ‘What do you mean you’re working on it?’ She said ‘I
don’t speak Chinese’ [laughs] I wrote back ‘I’m not speaking
Chinese’ and she said ‘It sounded like it’. That is an
interaction I would have with a normal human”. Mia’s
perception of this interaction could be a result of the
“mirroring effect”, where the agent’s unexpected response is
perceived as a sense of humor, a reflection of Mia’s own
style of communication. Through this interaction and
previous experiments run by Disney Research, we see an
opportunity for future agents to imitate the communication
style of children and create prosodic synchrony in
conversation in order to build rapport [5].
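One way to explore this design direction is to nudge an agent's synthesized voice partway toward the child's measured prosody rather than mimicking it outright. The sketch below is purely illustrative: the class, parameter values, and blending factor are our assumptions, not any shipped TTS API.

```python
# Hypothetical sketch of prosodic synchrony: move a TTS voice's pitch and
# speaking rate a fraction of the way toward values measured from the
# child's speech. All names and numbers here are illustrative.
from dataclasses import dataclass

@dataclass
class Prosody:
    pitch_hz: float   # mean fundamental frequency
    rate_wpm: float   # speaking rate, words per minute

def synchronize(agent: Prosody, child: Prosody, alpha: float = 0.3) -> Prosody:
    """Shift the agent's prosody a fraction `alpha` toward the child's,
    building rapport without full imitation (which can sound mocking)."""
    return Prosody(
        pitch_hz=agent.pitch_hz + alpha * (child.pitch_hz - agent.pitch_hz),
        rate_wpm=agent.rate_wpm + alpha * (child.rate_wpm - agent.rate_wpm),
    )

# e.g. a default adult female TTS voice meeting a faster, higher-pitched child
adjusted = synchronize(Prosody(210, 150), Prosody(290, 180))
```

The blending factor `alpha` is the key design choice: how much convergence feels warm rather than imitative would itself need to be studied with children.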
Interactive engagement
Gary and Larry said they liked interacting with Cozmo the
most “because she could actually move and all the other
ones that we did she couldn’t move”, and because Cozmo
had expressions: “he has feelings, he can do this with his
little shaft and he can move his eyes like a person, confused
eyes, angry eyes, happy eyes... Everybody else like they
didn’t have eyes, they didn’t have arms, they didn’t have a
head, it was just like a flat cylinder”. This testimony reveals
how mobile and responsive agents appeal to children and
how the form factor plays a significant role in the interaction.
Through its eyes and movements, Cozmo was able to
effectively communicate emotion, and so the children
believed that Cozmo had feelings and intelligence. Many
participants who tried to engage in dialogue with the
agents were limited by the fact that the agents weren’t able
to ask clarifying questions. While the children were
attracted to the voice and expressions of the agents at first,
they lost interest when the agent could not understand their
questions. We recognize the potential for designing a voice
interface that could engage in conversations with the
children by referring to their previous questions, asking
clarifying questions, and expressing varied reactions to
children’s inputs.
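Such an interface might be sketched as a small dialogue manager that remembers recent questions and falls back to a clarifying question when recognition confidence is low, instead of failing silently. Every name and threshold below is hypothetical; this is a design sketch, not any deployed assistant's logic.

```python
# Illustrative dialogue manager: keep the child's recent questions and ask
# a clarifying question (referring back to context) on low-confidence input.
class ClarifyingAgent:
    def __init__(self, threshold: float = 0.6):
        self.threshold = threshold  # minimum recognition confidence
        self.history = []           # questions heard so far, oldest first

    def respond(self, utterance: str, confidence: float) -> str:
        self.history.append(utterance)
        if confidence < self.threshold:
            if len(self.history) > 1:
                # Refer to the previous question to keep the thread going.
                return (f"I'm not sure I heard that right. Is this about "
                        f"'{self.history[-2]}' again, or something new?")
            return "I'm not sure I understood. Can you say that another way?"
        return f"Let me think about '{utterance}'..."

agent = ClarifyingAgent()
agent.respond("do sloths sleep a lot?", 0.9)
reply = agent.respond("what do they eat?", 0.4)  # low confidence -> clarify
```

A real system would also vary its reactions (surprise, amusement, curiosity) rather than reusing one canned fallback, per the observation above.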
Facilitating understanding
During the play-test, the facilitators, parents, and peers
helped the children rephrase or refine their questions. We
wonder how some of this facilitation could be embedded in
the design of the agent’s mode of interaction. If the agent
could let the children know why it cannot answer a
question, and differentiate between not understanding the
question and not having access to specific information,
this would help users decide how to change their question,
either by rephrasing it or by being more specific.
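The distinction proposed here could be made explicit in an agent's error handling. A minimal sketch follows; the enum and wording are our illustration of the idea, not an existing API.

```python
# Separate "I didn't understand the words" from "I understood, but have no
# answer", so the child knows whether to rephrase or to be more specific.
from enum import Enum, auto

class Failure(Enum):
    NOT_UNDERSTOOD = auto()  # speech recognition or parsing failed
    NO_INFORMATION = auto()  # parsed fine, but no answer is available

def failure_message(kind: Failure) -> str:
    """Return child-facing feedback that names the kind of failure."""
    if kind is Failure.NOT_UNDERSTOOD:
        return "I didn't catch that. Could you say it a different way?"
    return ("I understood the question, but I don't know the answer. "
            "Could you ask about something more specific?")
```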
Another issue we recognized was that the amount of
information provided to the participants was sometimes
overwhelming. The agent’s answers could be scaffolded to
provide information gradually. This would enable the
children to decide how much they want to know about a
specific topic and become more engaged by having a
conversation with the agent.
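Scaffolded answers of this kind might be sketched as follows, with the child's follow-up requests deciding how much detail is delivered. The function, callback, and example facts are our own illustration under the assumptions above.

```python
# Sketch of scaffolded answers: give a short chunk first, then add detail
# only while the child keeps asking, instead of one long monologue.
def scaffolded_answer(chunks, wants_more):
    """Return the chunks actually spoken. `chunks` is ordered from most
    basic to most detailed; `wants_more` is a callback standing in for
    the child's follow-up request."""
    spoken = [chunks[0]]              # always give the first chunk
    for chunk in chunks[1:]:
        if not wants_more():
            break
        spoken.append(chunk)
    return spoken

facts = ["Sloths live in trees.",
         "They sleep up to 15 hours a day.",
         "Their fur hosts algae that helps camouflage them."]

# A child who asks for more once hears the first two chunks.
answers = iter([True, False])
spoken = scaffolded_answer(facts, lambda: next(answers))
```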
Conclusion
This research raises the following question: if these agents
are becoming embedded in our lives, how could they
influence children’s perception of intelligence and the way
they make sense of the world? During our study, the
participants believed that they could teach the agents and
that they could learn from them. This leads us to imagine
novel ways in which the agents could become learning
companions. In future work, we hope to design interactions
where children are able to tinker with and program the
agents, and thus expand their perception of their own
intelligence and of the different ways to develop it.
Acknowledgments
We would like to thank the students at the Media Lab who
assisted with the study, our friends and mentors who
reviewed the various drafts, the parents and children who
participated in the study, and our colleagues who offered
advice on this work. This research was supported by the
National Science Foundation (NSF) under Grant
CCF-1138986. Any opinions, findings and conclusions, or
recommendations expressed in this paper are those of the
authors and do not represent the views of the NSF.
References
1. Christoph Bartneck, Dana Kulić, Elizabeth Croft, and
Susana Zoghbi. 2009. Measurement instruments for
the anthropomorphism, animacy, likeability, perceived
intelligence, and perceived safety of robots.
International Journal of Social Robotics 1, 1 (2009),
71–81.
2. Debra Bernstein and Kevin Crowley. 2008. Searching
for signs of intelligent life: An investigation of young
children’s beliefs about robot intelligence. The Journal
of the Learning Sciences 17, 2 (2008), 225–247.
3. Peter H Kahn Jr, Batya Friedman, Deanne R
Perez-Granados, and Nathan G Freier. 2006. Robotic
pets in the lives of preschool children. Interaction
Studies 7, 3 (2006), 405–436.
4. Hae Won Park, Rinat Rosenberg-Kima, Maor
Rosenberg, Goren Gordon, and Cynthia Breazeal.
2017. Growing growth mindset with a social robot peer.
In Proceedings of the 12th ACM/IEEE international
conference on Human robot interaction. ACM.
5. Najmeh Sadoughi, André Pereira, Rishub Jain, Iolanda
Leite, and Jill Fain Lehman. 2017. Creating Prosodic
Synchrony for a Robot Co-player in a
Speech-controlled Game for Children. In Proceedings
of the 2017 ACM/IEEE International Conference on
Human-Robot Interaction. ACM, 91–99.
6. Fumihide Tanaka, Aaron Cicourel, and Javier R
Movellan. 2007. Socialization between toddlers and
robots at an early childhood education center.
Proceedings of the National Academy of Sciences 104,
46 (2007), 17954–17958.
7. Sherry Turkle. 2005. The Second Self: Computers and
the Human Spirit. MIT Press.
8. Henry M Wellman, David Cross, and Julanne Watson.
2001. Meta-analysis of theory-of-mind development:
the truth about false belief. Child development 72, 3
(2001), 655–684.
... The 4 As children's behavior when the family interacts with robots or interactive devices together (Lee et al., 2006). We observed the same behavior when families interact with voice user interfaces (VUIs), particularly when parents help children repair various communication breakdowns with the conversational agents (Beneteau et al., 2019;Druga et al., 2017;Lovato & Piper, 2015). For instance, Beneteau and her colleagues (2019) ...
... What seems at first to be a playful interaction between a child and a voice assistant can easily trigger events of real consequences (stories of children buying dollhouses and candy with Alexa without parental approval has already made national news). Our prior work (Druga et al., 2017) shows that overall, children found the AI agents to be friendly and trustworthy but that age strongly affected how they attributed intelligence to these devices. Younger participants (4-6 years old) were more skeptical of the devices' intelligence, while most older children (7-10 years old) declared the devices were more intelligent than they were. ...
... Within this frame, we define sensemaking as a process by which people come across unfamiliar situations or contexts but need to process and understand to move forward (Klein et al., 2006 Adapt dimension: Trick the AI In the second design session, we invited participants to engage directly with the smart technologies and see if they could trick them. We wanted to provide the children with concrete ways in which they could test the device's limitations and bias, and we learned from our prior studies that children enjoy finding glitches and ways to make a program or a device fail (Druga, 2018 (Beneteau et al., 2019;Druga et al., 2017). While trying to probe and trick the voice assistant, children voiced several privacy concerns. ...
Full-text available
Essays on the challenges and risks of designing algorithms and platforms for children, with an emphasis on algorithmic justice, learning, and equity. One in three Internet users worldwide is a child, and what children see and experience online is increasingly shaped by algorithms. Though children's rights and protections are at the center of debates on digital privacy, safety, and Internet governance, the dominant online platforms have not been constructed with the needs and interests of children in mind. The editors of this volume, Mizuko Ito, Remy Cross, Karthik Dinakar, and Candice Odgers, focus on understanding diverse children's evolving relationships with algorithms, digital data, and platforms and offer guidance on how stakeholders can shape these relationships in ways that support children's agency and protect them from harm. This book includes essays reporting original research on educational programs in AI relational robots and Scratch programming, on children's views on digital privacy and artificial intelligence, and on discourses around educational technologies. Shorter opinion pieces add the perspectives of an instructional designer, a social worker, and parents. The contributing social, behavioral, and computer scientists represent perspectives and contexts that span education, commercial tech platforms, and home settings. They analyze problems and offer solutions that elevate the voices and agency of parents and children. Their essays also build on recent research examining how social media, digital games, and learning technologies reflect and reinforce unequal childhoods. Contributors:Paulo Blikstein, Izidoro Blikstein, Marion Boulicault, Cynthia Breazeal, Michelle Ciccone, Sayamindu Dasgupta, Devin Dillon, Stefania Druga, Jacqueline M. Kory-Westlund, Aviv Y. Landau, Benjamin Mako Hill, Adriana Manago, Siva Mathiyazhagan, Maureen Mauk, Stephanie Nguyen, W. Ian O'Byrne, Kathleen A. Paciga, Milo Phillips-Brown, Michael Preston, Stephanie M. 
Reich, Nicholas D. Santer, Allison Stark, Elizabeth Stevens, Kristen Turner, Desmond Upton Patton, Veena Vasudevan, Jason Yip
... She observed that children attributed intent and emotion to objects encouraging social and psychological engagement. Druga et al. (2017Druga et al. ( , 2018 investigated how children perceive a digital assistant. In particular, an experimental study was conducted with 26 participants (3-10 years old) interacting with Amazon Alexa, Google Home, Cozmo, and Julie Chatbot. ...
... Being polite to an AI system means admitting they can be considered humans, which could negatively impact future generations (Elgan 2023). Mainly, some people think that promoting human-like behavior toward AI systems in children-machine interaction may influence their development (Lee et al. 2021;Kahn et al. 2006;Melson et al. 2009;Friedman et al. 2003;Kahn et al. 2012) and they may be induced to believe that these systems are living beings (Wellman et al. 2001;Turkle 2005;Elgan 2023;Druga et al. 2017Druga et al. , 2018. On the other hand, research on the CASA paradigm Takeuchi 1998;Nass and Moon 2000;Takeuchi et al. 2000;Johnson et al. 2004;Nass et al. 1999;Johnson and Gardner 2009;Lombard and Xu 2021) has documented that people tend to attribute human proprieties to AI systems and interact almost unconsciously with them by applying social rules, norms and expectations as in interpersonal relations, thus perceiving the AI system as a social entity. ...
... It has been also observed that the different developmental stages of children can engender different perceptions of technology and politeness (Lee et al. 2021;Turkle 2005;Druga et al. 2017Druga et al. , 2018. It would be advantageous to investigate potential associations between Theory of Mind (Wellman et al. 2001), politeness strategies (Holmes 2006;Brown et al. 1987;Watts 2003;Lakoff 1973), and technology perception within a broader experimental framework encompassing children of varying age groups. ...
Full-text available
The growing prevalence of interactions between humans and machines, coupled with the rapid development of intelligent and human-like features in technology, necessitates considering the potential implications that an increasingly inter-personal interaction style might have on human behavior. Particularly, since human–human interactions are fundamentally affected by politeness rules, several researchers are investigating if such social norms have some implications also within human–machine interactions. This paper reviews scientific works dealing with politeness issues within human–machine interactions by considering a variety of artificial intelligence systems, such as smart devices, robots, digital assistants, and self-driving cars. This paper aims to analyze scientific results to answer the questions of why technological devices should behave politely toward humans, but above all, why human beings should be polite toward a technological device. As a result of the analysis, this paper wants to outline future research directions for the design of more effective, socially competent, acceptable, and trustworthy intelligent systems.
... • Over-trust: children may view CAs as friends [45,46], which can lead to data disclosure risks. Transparent information, as demonstrated by Straten et al. [47], helps to reduce over-reliance on the system. ...
Full-text available
Conversational agents (CAs) have been increasingly used in various domains, including education, health and entertainment. One of the growing areas of research is the use of CAs with children. However, the development and deployment of CAs for children come with many specific challenges and ethical and social responsibility concerns. This chapter aims to review the related work on CAs and children, point out the most popular topics and identify opportunities and risks. We also present our proposal for ethical guidelines on the development of trustworthy artificial intelligence (AI), which provide a framework for the ethical design and deployment of CAs with children. The chapter highlights, among other principles, the importance of transparency and inclusivity to safeguard user rights in AI technologies. Additionally, we present the adaptation of previous AI ethical guidelines to the specific case of CAs and children, highlighting the importance of data protection and human agency. Finally, the application of ethical guidelines to the design of a conversational agent is presented, serving as an example of how these guidelines can be integrated into the development process of these systems. Ethical principles should guide the research and development of CAs for children to enhance their learning and social development.
... Lopatovska et al. [98] found a similar display of politeness. Children are also found to exhibit anthropomorphic behavior by projecting personality and attributing gender to VUIs [53]. Similarly, in our study, a slight majority of participants (57.1%) showed anthropomorphic behavior by saying "please" and "thank you," and half the participants used human metaphors for the VUI. ...
Full-text available
We explore a range of different metaphors used for Voice User Interfaces (VUIs) by designers, end-users, manufacturers, and researchers using a novel framework derived from semi-structured interviews and a literature review. We focus less on the well-established idea of metaphors as a way for interface designers to help novice users learn how to interact with novel technology, and more on other ways metaphors can be used. We find that metaphors people use are contextually fluid, can change with the mode of conversation, and can reveal differences in how people perceive VUIs compared to other devices. Not all metaphors are helpful, and some may be offensive. Analyzing this broader class of metaphors can help understand, perhaps even predict problems. Metaphor analysis can be a low-cost tool to inspire design creativity and facilitate complex discussions about sociotechnical issues, enabling us to spot potential opportunities and problems in the situated use of technologies.
... Third, social bots, non-social interactions, or conversational media (Ferrara et al., 2016;Rheault and Musulan, 2021) also have a great potential to influence the technology-driven self. Fourth, quantified self or self-tracking allows the development of the profiles of the users (Puntoni et al., 2021;Druga et al., 2017;Neff and Nafus, 2016;Sadowski, 2019) for personalisation and predictive analysis. All these options augment the self, resulting in a broader spectrum for self-concept research. ...
Full-text available
Advanced digital technologies broadly penetrate self-activities, such as algorithms, machine learning, or artificial intelligence. This trend is most evident on social media, where contents, attitudes and evaluative judgments meet on technology-driven platforms. Moreover, human networks also started communicating with social bots or conversational interfaces. All these challenges can trigger a redesign of self-concept via technology. Therefore, the paper investigates how social media machines affect self-concept-related academic research. First, pioneers of the field are presented. Second, the self-concept research in digital technology and social media is summarised. Topic networks illustrate critical research fields with the latest trends and future implications. Last but not least, we also investigate how emerging media phenomena affect academic trends in the case of social bots or fake news. The study aims to support the connected research in psychology, business, management, education, political science, medicine and media studies with an understanding of the latest trends. The additional goal is to highlight the potential of market-based research cooperation with academia supporting significant developments and funding.
... Children interact with artificial intelligence (AI) regularly and may have limited recognition that they have encountered an AI-driven system [3]. Growing reliance on AI-driven systems may feel natural to young children who readily ask Siri for help or Alexa for entertainment [20]. ...
Nature has been a plentiful source of materials, replenishment, inspiration, and creativity. Nature collage, as a crafting technique, is a fun and educational activity for children to explore nature and engage their creativity. However, the raw material collection is limited to static things such as leaves, ignoring inspiration from nature sounds and dynamic elements such as babbling creeks. Using a mobile application, we hope to encourage children's creativity by renewing collage materials collection and careful observation in nature. To explore this, we conducted a formative study with children (N=20) and a design workshop with experts (N=6) to formulate NaCanva, an AI-assisted multi-modal collage creation system for children. Drawing on the interactivity between children and nature, NaCanva enables the multi-modal material collection, including images, sound, and videos, which differs our system from traditional collages. We validated this system with a between-subject user study (N=30), and the results suggested that NaCanva unleashes children's creativity in nature collage creation by enhancing children's multidimensional observation and engagement in nature.
Conference Paper
Technologies based on AI/ML are playing an increasingly prominent role in teenagers’ everyday lives. Mirroring this trend is a concomitant interest in teaching young people about intelligent technologies. Whereas previous research in the field of Child–Computer Interaction has proposed curriculum and learning activities that describe what teenagers need to learn about AI/ML, there is still a shortage of studies which specifically address teenager-centred perspectives in the teaching of AI/ML. This paper presents a study of teenagers’ everyday understanding of AI/ML technologies. Using a thematic analysis of the teenagers’ own explanations during a series of workshops, we present a conceptual map of the teenagers’ understandings of these technologies. We go on to propose five general recommendations for the teaching of AI/ML to teenagers through the lens of Computational Empowerment. Taken together, these recommendations serve as a teenager-centred starting point for teaching young people about intelligent technologies, an approach that can be implemented in future research interventions with similar objectives.
Conference Paper
Mindset has been shown to have a large impact on people's academic, social, and work achievements. A growth mindset, i.e., the belief that success comes from effort and perseverance, is a better indicator of higher achievements as compared to a fixed mindset, i.e., the belief that things are set and cannot be changed. Interventions aimed at promoting a growth mindset in children range from teaching about the brain's ability to learn and change, to playing computer games that grant brain points for effort rather than success. This work explores a novel paradigm to foster a growth mindset in young children where they play a puzzle-solving game with a peer-like social robot. The social robot is fully autonomous and programmed with behaviors suggestive of it having either a growth mindset or a neutral mindset as it plays puzzle games with the child. We measure the mindset of children before and after interacting with the peer-like robot, in addition to measuring their problem-solving behavior when faced with a challenging puzzle. We found that children who played with a growth-mindset robot 1) self-reported having a stronger growth mindset and 2) tried harder during a challenging task, as compared to children who played with the neutral-mindset robot. These results suggest that interacting with a peer-like social robot with a growth mindset can promote the same mindset in children.
Children's worlds are increasingly populated by intelligent technologies. This has raised a number of questions about the ways in which technology can change children's ideas about important concepts, like what it means to be alive or smart. In this study, we examined the impact of experience with intelligent technologies on children's ideas about robot intelligence. A total of 60 children aged 4 through 7 were asked to identify the intellectual, psychological, and biological characteristics of 8 entities that differed in terms of their life status and intellectual capabilities. Results indicated that as children gained experience in this domain, they began to differentiate robots from other familiar entities. This differentiation was indicated by a unique pattern of responses about the intellectual and psychological characteristics of robots. These findings suggest that experience may yield a more highly developed viewpoint that reflects an appreciation of the distinctions between biological life, machines, and artificially intelligent technologies. "People who grew up in the world of the mechanical are more comfortable with a definition of what is alive that excludes all but the biological and resist shifting definitions of aliveness.… Children who have grown up with computational objects don't experience that dichotomy. They turn the dichotomy into a menu and cycle through its choices." (Turkle, 1999, p. 552)
This study emphasizes the need for standardized measurement tools for human-robot interaction (HRI). If we are to make progress in this field, then we must be able to compare the results from different studies. A literature review has been performed on the measurements of five key concepts in HRI: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. The results have been distilled into five consistent questionnaires using semantic differential scales. We report reliability and validity indicators based on several empirical studies that used these questionnaires. It is our hope that these questionnaires can be used by robot developers to monitor their progress. Psychologists are invited to further develop the questionnaires by adding new concepts, and to conduct further validations where it appears necessary.
A state-of-the-art social robot was immersed in a classroom of toddlers for >5 months. The quality of the interaction between children and robots improved steadily for 27 sessions, quickly deteriorated for 15 sessions when the robot was reprogrammed to behave in a predictable manner, and improved in the last three sessions when the robot again displayed its full behavioral repertoire. Initially, the children treated the robot very differently than the way they treated each other. By the last sessions, 5 months later, they treated the robot as a peer rather than as a toy. Results indicate that current robot technology is surprisingly close to achieving autonomous bonding and socialization with human toddlers for sustained periods of time and that it could have great potential in educational settings assisting teachers and enriching the classroom environment. Keywords: human–robot interaction, social development, social robotics.
In The Second Self, Sherry Turkle looks at the computer not as a "tool," but as part of our social and psychological lives; she looks beyond how we use computer games and spreadsheets to explore how the computer affects our awareness of ourselves, of one another, and of our relationship with the world. "Technology," she writes, "catalyzes changes not only in what we do but in how we think." First published in 1984, The Second Self is still essential reading as a primer in the psychology of computation. This twentieth anniversary edition allows us to reconsider two decades of computer culture--to (re)experience what was and is most novel in our new media culture and to view our own contemporary relationship with technology with fresh eyes. Turkle frames this classic work with a new introduction, a new epilogue, and extensive notes added to the original text. Turkle talks to children, college students, engineers, AI scientists, hackers, and personal computer owners--people confronting machines that seem to think and at the same time suggest a new way for us to think--about human thought, emotion, memory, and understanding. Her interviews reveal that we experience computers as being on the border between inanimate and animate, as both an extension of the self and part of the external world. Their special place betwixt and between traditional categories is part of what makes them compelling and evocative. (In the introduction to this edition, Turkle quotes a PDA user as saying, "When my Palm crashed, it was like a death. I thought I had lost my mind.") Why we think of the workings of a machine in psychological terms--how this happens, and what it means for all of us--is the ever more timely subject of The Second Self.
Conference Paper
Synchrony is an essential aspect of human-human interactions. In previous work, we have seen how synchrony manifests in low-level acoustic phenomena like fundamental frequency, loudness, and the duration of keywords during the play of child-child pairs in a fast-paced, cooperative, language-based game. The correlation between the increase in such low-level synchrony and increase in enjoyment of the game suggests that a similar dynamic between child and robot co-players might also improve the child's experience. We report an approach to creating on-line acoustic synchrony by using a dynamic Bayesian network learned from prior recordings of child-child play to select from a predefined space of robot speech in response to real-time measurement of the child's prosodic features. Data were collected from 40 new children, each playing the game with both a synchronizing and non-synchronizing version of the robot. Results show a significant order effect: although all children grew to enjoy the game more over time, those that began with the synchronous robot maintained their own synchrony to it and achieved higher engagement compared with those that did not.
This study examined preschool children's reasoning about and behavioral interactions with one of the most advanced robotic pets currently on the retail market, Sony's robotic dog AIBO. Eighty children, equally divided between two age groups, 34-50 months and 58-74 months, participated in individual sessions with two artifacts: AIBO and a stuffed dog. Evaluation and justification results showed similarities in children's reasoning across artifacts. In contrast, children engaged more often in apprehensive behavior and attempts at reciprocity with AIBO, and more often mistreated the stuffed dog and endowed it with animation. Discussion focuses on how robotic pets, as representative of an emerging technological genre, may be (a) blurring foundational ontological categories, and (b) impacting children's social and moral development.
Research on theory of mind increasingly encompasses apparently contradictory findings. In particular, in initial studies, older preschoolers consistently passed false-belief tasks — a so-called "definitive" test of mental-state understanding — whereas younger children systematically erred. More recent studies, however, have found evidence of false-belief understanding in 3-year-olds or have demonstrated conditions that improve children's performance. A meta-analysis was conducted (N = 178 separate studies) to address the empirical inconsistencies and theoretical controversies. When organized into a systematic set of factors that vary across studies, false-belief results cluster systematically with the exception of only a few outliers. A combined model that included age, country of origin, and four task factors (e.g., whether the task objects were transformed in order to deceive the protagonist or not) yielded a multiple R of .74 and an R2 of .55; thus, the model accounts for 55% of the variance in false-belief performance. Moreover, false-belief performance showed a consistent developmental pattern, even across various countries and various task manipulations: preschoolers went from below-chance performance to above-chance performance. The findings are inconsistent with early competence proposals that claim that developmental changes are due to task artifacts, and thus disappear in simpler, revised false-belief tasks; and are, instead, consistent with theoretical accounts that propose that understanding of belief, and, relatedly, understanding of mind, exhibit genuine conceptual change in the preschool years.
Meta-analysis of theory-of-mind development: the truth about false belief
  • Henry M. Wellman
  • David Cross
  • Julanne Watson