"Hey Google is it OK if I eat you?": Initial Explorations in Child-Agent Interaction
Stefania Druga*, MIT Media Lab, Cambridge, MA 02139 USA, sdruga@media.mit.edu
Randi Williams*, MIT Media Lab, Cambridge, MA 02139 USA, randiw12@media.mit.edu
Cynthia Breazeal, MIT Media Lab, Cambridge, MA 02139 USA, cynthiab@media.mit.edu
Mitchel Resnick, MIT Media Lab, Cambridge, MA 02139 USA, mitch@media.mit.edu
Permission to make digital or hard copies of part or all of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that
copies bear this notice and the full citation on the first page. Copyrights
for third-party components of this work must be honored. For all other
uses, contact the Owner/Author.
IDC '17, June 27-30, 2017, Stanford, CA, USA
© 2017 Copyright is held by the owner/author(s).
ACM ISBN 978-1-4503-4921-5/17/06.
http://dx.doi.org/10.1145/3078072.3084330
Abstract
Autonomous technology is becoming more prevalent in our
daily lives. We investigated how children perceive this
technology by studying how 26 participants (3-10 years old)
interact with Amazon Alexa, Google Home, Cozmo, and
Julie Chatbot. We refer to them as "agents" in the context of
this paper. After playing with the agents, children answered
questions about trust, intelligence, social entity, personality,
and engagement. We identify four themes in child-agent
interaction: perceived intelligence, identity attribution,
playfulness and understanding. Our findings show how
different modalities of interaction may change the way
children perceive their intelligence in comparison to the
agents’. We also propose a series of design considerations
for future child-agent interaction around voice and prosody,
interactive engagement and facilitating understanding.
Author Keywords
Child-agent interaction; interaction modalities
ACM Classification Keywords
H.5.2 [Information interfaces and presentation (e.g., HCI)]:
Interaction Styles
Introduction
Prior research in child-computer interaction explores the
role of computers in shaping the way children think. In her
book "The Second Self", Sherry Turkle describes them as
intersectional objects that allow children to investigate
"matter, life, and mind" [7]. Similarly, emerging autonomous
technologies invite children to think about what it means to
be intelligent and conscious. These technologies are
becoming increasingly prevalent in our daily lives. Yet, there
is little research on how children perceive them. This paper
presents the preliminary findings of a study on how children,
ages 3 to 10, interact with modern autonomous agents. In
our results, we analyze how and why children ascribe
particular attributes to agents. We address how different
modalities affect children’s perception of their own
intelligence.
Figure 1: Alexa (Internet search, games, jokes, music).
Morphology: cylinder, LED ring, female voice.
Inputs: voice.
Outputs: voice, LEDs.

Figure 2: Google Home (Internet search, games, jokes, music).
Morphology: cylinder, LED ring, female voice.
Inputs: voice, touch.
Outputs: voice, LEDs.
Related Work
In the fields of human-computer interaction (HCI),
human-robot interaction (HRI), and applied developmental
psychology, there is extensive research on how children
perceive robotic and conversational agents. Tanaka found
that children build relationships with these agents the same
way that they build relationships with people [6]. Kahn found
that children consider robots to be ontologically different from
other objects, including computers [3]. He also saw that age
and prior experience with technology led to more thoughtful
reasoning about robots' nature [3]. Turkle argues that the
intelligence of computers encourages children to revise their
ideas about animacy and thinking [7]. She observed that
children attributed intent and emotion to objects that they
could engage with socially and psychologically. Bernstein’s
study revealed that voice, movement, and physical
appearance are other factors that children consider when
deciding how to categorize agents [2].
To understand why children may attribute characteristics of
living beings to inanimate objects, we must consider Theory
of Mind development [3]. Those with a developed Theory of
Mind can perceive the emotional and mental states of other
beings. A meta-analysis of 178 false-belief studies produced a
model showing that, across cultures, Theory of Mind usually
develops when a child is 3 to 5 years old [8].
This finding indicates that age is a significant factor in
children's reasoning about technology. Through our work,
we build on prior research by analyzing how children
interact with and perceive modern agents. In our analysis,
we consider developmental differences between children
and functional differences between agents.
Method
Participants were randomly divided into four groups of roughly
4-5 children each, and each group was assigned to a station.
We had four stations, one for each agent. Each station had
enough devices for participants to interact alone or in pairs.
At each station, researchers introduced the agent and then
allowed participants to engage with it. After playing with the
first agent for 15 minutes, participants rotated to the next
station to interact with a second agent. Each session of
structured play was followed by a questionnaire, in the form
of a game, assessing children's perception of the agent. We
interviewed 5 children (3 boys and 2 girls) to further probe
their reasoning. We selected children who had played with
different agents and displayed different interaction behaviors.
Between interviews, participants were allowed to free-play
with the agents. This methodology is based on the prior
research work of Turkle [7].
Participants
This study consisted of 26 participants. Of these
participants, four were not able to complete the entire study
due to lack of focus. The children were grouped according
to their age range to gather data about how reasoning
changes at different developmental stages. There were 12
(46.15%) children aged 3 or 4 years old, classified as our
younger children group. The older children group contained
14 (53.85%) children between the ages of 6 and 10. Of the
26 participants, 16 offered information about previous
technology experience. A total of 6 (37.5%) of the
respondents had programmed before and 9 (56.25%) had
interacted with similar agents (Google, Siri, Alexa, and
Cortana) previously.
Figure 3: Cozmo (games, autonomous exploration, object/face recognition).
Morphology: robot (wheels, lift), garbled voice, block accessories.
Inputs: visual information, distance, app.
Outputs: sound, movement, LED eyes, block LEDs, app.

Figure 4: Julie (conversations, jokes, and games).
Morphology: Android app, tablet, TTS voice.
Inputs: text, voice.
Outputs: text, voice.
Agents
The children interacted with four different agents: Amazon's
Echo Dot "Alexa" (Figure 1), Google Home (Figure 2),
Anki's Cozmo (Figure 3), and Julie Chatbot (Figure 4).
These agents were chosen for their commercial prevalence
and playful physical or conversational characteristics. Alexa
is a customizable, voice-controlled digital assistant. Google
Home is also a voice-controlled device for home
automation. Cozmo is an autonomous robot toy. Finally,
Julie is a conversational chatbot.
The Monster game: interactive questionnaire
After interacting with an agent, participants completed a
ten-item questionnaire in the form of a monster game. The
game was adapted from the work of Park et al. [4] and was
used to keep younger children engaged. In the game,
two monsters each stated a belief about an agent, and
children placed a sticker closer to the monster they agreed
with more. We vetted the usability of this method and the
clarity of the questions in a pilot test. The questions queried
how children felt about the agent in terms of trust,
intelligence, identity attribution, personality, and
engagement, and were adapted from a larger questionnaire
found in the work of Bartneck et al. [1].
Findings
Overall, most participants agreed that the agents are
friendly (Figures 5, 6) and trustworthy (Figures 7, 8). The
younger children (3-4 years old) experienced difficulty
interacting with the conversational and chat agents, but they
very much enjoyed playing with Cozmo. The older children
(6-10 years old) enjoyed interacting with all the agents,
although they had favorites based on the different modalities
of interaction. Responses varied depending on the agent and
the age of the participant. These are discussed below under
four themes: perceived intelligence, identity attribution,
playfulness, and understanding.

Age      Response      Alexa   Google   Cozmo   Julie
Younger  Smarter       20%     0%       40%     60%
         Neutral       20%     100%     0%      40%
         Not as smart  60%     0%       60%     0%
Older    Smarter       100%    43%      20%     n/a
         Neutral       0%      57%      40%     n/a
         Not as smart  0%      0%       40%     n/a

Table 1: How children perceived an agent's intelligence compared
to their own. Note: only one session of interaction and
questionnaire data with Julie was collected; the group was
restless after the interaction.
"She probably already is as smart as me" - perceived intelli-
gence
Younger participants had mixed responses, but many older
participants said that the agents are smarter than them.
The older participants often related an agent's intelligence
to its access to information.

For example, Violet (7 years old; all participant names have
been changed) played with Alexa and Google Home during
structured playtime, then Julie and Cozmo during free play.
She systematically analyzed the intelligence of the agents by
comparing their answers regarding a topic she knew a lot
about: sloths. "Alexa she knows nothing about sloths, I asked
her something about sloths but she didn't answer... but
Google did answer so I think that's a little bit smarter
because he knows a little more". Mia (9.5 years old) played
with Julie and Google Home during the structured play, then
tried all the other agents during free play. She used a similar
strategy of referring to the things she knows when trying to
probe the intelligence of the agent: "[Google Home] probably
already is as smart as me because we had a few questions
back and forth about things that I already know".
Figure 5: How children perceive the friendliness of agents, by age.
Figure 6: How children perceive friendliness, by agent.
Figure 7: How children perceive the truthfulness of agents, by age.
Figure 8: How children perceive the truthfulness of each agent.
"What are you?" - identity attribution and playfulness
We observed probing behavior, where the participants were
trying to understand the agent. Younger children tried to
understand the agents as they would a person: "[Alexa], what
is your favorite color", "Hey Alexa, how old are you". Older
children tested what the agent would do when asked to
perform actions that humans do, "Can you open doors?"
"[Cozmo] can you jump?" They also asked the agents how
they worked, "Do you have a phone inside you?" and how
they defined themselves, "What are you?" The children used
gender interchangeably when talking about the agents.
Gary and Larry referred to Cozmo as both "he" and "she". "I
don't really know which one it is" - Gary. "It's a boy... maybe
because of the name but then again you could use a boy
name for a girl and a girl name for a boy" - Larry. Later,
they concluded that Cozmo "is a bob-cat with eyes".
Multiple agents of the same type were provided during the
playtest, which led the participants to believe that different
Alexas could give different answers: "She doesn't know the
answer, ask the other Alexa". They would also repeatedly
ask the same question of one device to see if it would change
its answer.
As they became familiar with the agents, we observed
children playfully testing their limits. A 6-year-old girl asked
several agents "Is it OK if I eat you?". Other children offered
the agents food and asked if they could see the objects in
front of them, "Alexa what kind of nut am I holding?"
"Sorry I don't understand that question" - understanding
We observed that the majority of participants experienced
various challenges getting the agents to understand their
questions. Several children tried raising their voices or
pausing more between words, strategies that may help
people understand them. However, these did not always lead
to better recognition by the agents.

Often, when the agents did not understand them, children
would try to reword their question or make it more specific.
Throughout the playtest, we observed shifts in children's
strategies, encouraged by the facilitators and parents or
prompted by observing strategies that worked for other
children.

Despite these challenges, participants quickly became fluent
in voice interaction and even tried to talk with agents that did
not have this ability (e.g., Cozmo).
Discussion
Building on prior work and based on the observations from
this study, we propose the following considerations for
child-agent interaction design: voice and prosody,
interactive engagement, and facilitating understanding.
Voice and prosody
Voice and tone made a difference in how friendly
participants thought the agent was (Figure 6). When asked
about differences between the agents, Mia replied, "I liked
Julie more because she was more like a normal person, she
had more feelings. Google Home was like 'I know
everything'... Felt like she [Julie] actually understood what I
was saying to her".

Her favorite interaction with the agents was the following
exchange she had with Julie: "She said 'I'm working on it'... I
said 'What do you mean you're working on it?' She said 'I
don't speak Chinese' [laughs] I wrote back 'I'm not speaking
Chinese' and she said 'It sounded like it'. That is an
interaction I would have with a normal human". Mia's
perception of this interaction could be a result of the
"mirroring effect", where the agent's unexpected response is
perceived as a sense of humor, a reflection of Mia's own
style of communication. Through this interaction and
previous experiments run by Disney Research, we see an
opportunity for future agents to imitate the communication
style of children and create prosodic synchrony in
conversation in order to build rapport [5].
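As a rough sketch of how such prosodic synchrony might work, the fragment below nudges an agent's synthesized pitch and speaking rate toward the child's most recent utterance. This is only an illustration, not the system evaluated in [5]: the Prosody features, the mirroring coefficient, and the clamping ranges are assumed values, and a real agent would estimate the child's pitch and rate from the audio signal before passing the adjusted values to its speech synthesizer.

```python
from dataclasses import dataclass

@dataclass
class Prosody:
    pitch_hz: float   # median fundamental frequency of an utterance (assumed feature)
    rate_wps: float   # speaking rate in words per second (assumed feature)

class SynchronizingVoice:
    """Hypothetical sketch: nudge the agent's TTS prosody toward the child's.

    `base` is the agent's default voice, `alpha` controls how strongly the
    agent mirrors the child, and the clamps keep the mirrored voice within
    a natural-sounding range around the agent's own register.
    """

    def __init__(self, base: Prosody, alpha: float = 0.3):
        self.base = base
        self.current = Prosody(base.pitch_hz, base.rate_wps)
        self.alpha = alpha

    def observe_child_utterance(self, child: Prosody) -> Prosody:
        # Exponential moving average toward the child's measured prosody.
        self.current.pitch_hz += self.alpha * (child.pitch_hz - self.current.pitch_hz)
        self.current.rate_wps += self.alpha * (child.rate_wps - self.current.rate_wps)
        # Clamp so the agent still sounds like itself.
        self.current.pitch_hz = min(max(self.current.pitch_hz, 0.8 * self.base.pitch_hz),
                                    1.2 * self.base.pitch_hz)
        self.current.rate_wps = min(max(self.current.rate_wps, 0.7 * self.base.rate_wps),
                                    1.3 * self.base.rate_wps)
        return self.current

# Example: the agent gradually raises pitch and speeds up to match an excited child.
voice = SynchronizingVoice(Prosody(pitch_hz=180.0, rate_wps=2.5))
print(voice.observe_child_utterance(Prosody(pitch_hz=260.0, rate_wps=3.5)))
```

Clamping the mirrored values keeps the agent's voice recognizably its own while still moving toward the child's style of speaking.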
Interactive engagement
Gary and Larry said they liked interacting with Cozmo the
most "because she could actually move and all the other
ones that we did she couldn't move". They also liked that
Cozmo had expressions: "he has feelings, he can do this with
his little shaft and he can move his eyes like a person,
confused eyes, angry eyes, happy eyes... Everybody else like
they didn't have eyes, they didn't have arms, they didn't have
a head, it was just like a flat cylinder". This testimony reveals
how mobile and responsive agents appeal to children and
how the form factor plays a significant role in the interaction.
Through its eyes and movements, Cozmo was able to
effectively communicate emotion, and so the children
believed that Cozmo had feelings and intelligence. Many
participants who tried to engage in dialogue with the agents
were limited by the fact that the agents were not able to ask
clarifying questions. While the children were attracted to the
voice and expressions of the agents at first, they lost interest
when the agent could not understand their questions. We
recognize the potential for designing a voice interface that
could engage in conversations with children by referring to
their previous questions, by asking clarifying questions, and
by expressing various reactions to children's inputs.
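To make this concrete, the sketch below shows one way a voice interface could keep a short memory of the child's previous questions, ask a clarifying question that refers back to the last exchange when recognition fails, and react to the child's input to sustain engagement. It is a hypothetical illustration only: the class, its fields, and the recognized_well flag are invented for the example and do not describe any of the agents studied here.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Turn:
    child_said: str
    agent_said: str

@dataclass
class ConversationalAgent:
    """Hypothetical agent that remembers previous questions so its replies
    can refer back to them instead of treating every question as new."""
    history: List[Turn] = field(default_factory=list)

    def reply(self, child_utterance: str, recognized_well: bool) -> str:
        if not recognized_well and self.history:
            # Ask a clarifying question that refers to the previous exchange.
            last = self.history[-1]
            answer = f"I'm not sure I heard you. Are you still asking about '{last.child_said}'?"
        elif not recognized_well:
            answer = "Can you ask me that one more time?"
        else:
            # Placeholder for the agent's real answer; react to keep engagement.
            answer = f"Good question! Here is what I know about '{child_utterance}'."
        self.history.append(Turn(child_utterance, answer))
        return answer

# Example exchange:
agent = ConversationalAgent()
print(agent.reply("what do sloths eat", recognized_well=True))
print(agent.reply("how much do they sleep", recognized_well=False))
```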
Facilitating understanding
During the playtest, the facilitators, parents, and peers
helped the children rephrase or refine their questions. We
wonder how some of this facilitation could be embedded in
the design of the agent's mode of interaction. If the agent
could let the children know why it cannot answer a question,
differentiating between not understanding the question and
not having access to the requested information, this would
help users decide how to change their question, either by
rephrasing it or by being more specific.

Another issue we recognized was that the amount of
information provided to the participants was sometimes
overwhelming. The agent's answers could be scaffolded to
provide information gradually. This would enable children to
decide how much they want to know about a specific topic
and to become more engaged by having a conversation with
the agent.
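A minimal sketch of these two ideas, written as hypothetical agent response logic, is shown below. The confidence threshold, the toy knowledge base, and the wording of the prompts are all assumptions; the point is only to distinguish "I did not understand you" from "I understood you but do not have that information", and to deliver a long answer in small chunks that the child can ask to continue.

```python
from typing import Optional

# Toy knowledge base standing in for the agent's real information source (assumption).
KNOWLEDGE = {
    "sloths": [
        "Sloths are slow-moving mammals that live in the trees of Central and South America.",
        "They sleep a large part of the day and come down from the trees about once a week.",
        "Their fur can host algae, which gives some sloths a greenish tint.",
    ]
}

def respond(transcript: Optional[str], confidence: float, topic: Optional[str]) -> str:
    """Tell the child *why* the agent cannot answer, instead of a generic
    'Sorry, I don't understand that question.'"""
    if transcript is None or confidence < 0.5:
        # Speech recognition failed: ask the child to rephrase or slow down.
        return "I didn't quite catch that. Could you say it again, a little slower?"
    facts = KNOWLEDGE.get(topic or "")
    if not facts:
        # The question was understood, but the information is missing:
        # invite the child to be more specific or try another topic.
        return f"I understood your question, but I don't know about {topic}. Can you ask me in another way?"
    # Scaffold the answer: give the first fact and offer to continue.
    return facts[0] + " Do you want to hear more?"

def more(topic: str, already_heard: int) -> str:
    """Deliver the next chunk of a scaffolded answer if the child asks for more."""
    facts = KNOWLEDGE.get(topic, [])
    if already_heard < len(facts):
        return facts[already_heard]
    return f"That's everything I know about {topic} for now."

# Example exchange:
print(respond("tell me about sloths", confidence=0.9, topic="sloths"))
print(more("sloths", already_heard=1))
print(respond(None, confidence=0.0, topic=None))
```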
Conclusion
This research raises the following question: if these agents
are becoming embedded in our lives, how could they
influence children’s perception of intelligence and the way
they make sense of the world? During our study, the
participants believed that they could teach the agents and
learn from them. This leads us to imagine novel
ways in which the agents could become learning
companions. In future work, we hope to design interactions
where children are able to tinker with and program the
agents and thus expand their perception of their own
intelligence and different ways to develop it.
Acknowledgements
We would like to thank the students at the Media Lab who
assisted with the study, our friends and mentors who
reviewed the various drafts, the parents and children who
participated in the study, and our colleagues who offered
advice on this work. This research was supported by the
National Science Foundation (NSF) under Grant
CCF-1138986. Any opinions, findings and conclusions, or
recommendations expressed in this paper are those of the
authors and do not represent the views of the NSF.
REFERENCES
1. Christoph Bartneck, Dana Kulić, Elizabeth Croft, and
Susana Zoghbi. 2009. Measurement instruments for
the anthropomorphism, animacy, likeability, perceived
intelligence, and perceived safety of robots.
International Journal of Social Robotics 1, 1 (2009),
71–81.
2. Debra Bernstein and Kevin Crowley. 2008. Searching
for signs of intelligent life: An investigation of young
children’s beliefs about robot intelligence. The Journal
of the Learning Sciences 17, 2 (2008), 225–247.
3. Peter H Kahn Jr, Batya Friedman, Deanne R
Perez-Granados, and Nathan G Freier. 2006. Robotic
pets in the lives of preschool children. Interaction
Studies 7, 3 (2006), 405–436.
4. Hae Won Park, Rinat Rosenberg-Kima, Maor
Rosenberg, Goren Gordon, and Cynthia Breazeal.
2017. Growing growth mindset with a social robot peer.
In Proceedings of the 12th ACM/IEEE International
Conference on Human-Robot Interaction. ACM.
5. Najmeh Sadoughi, André Pereira, Rishub Jain, Iolanda
Leite, and Jill Fain Lehman. 2017. Creating Prosodic
Synchrony for a Robot Co-player in a
Speech-controlled Game for Children. In Proceedings
of the 2017 ACM/IEEE International Conference on
Human-Robot Interaction. ACM, 91–99.
6. Fumihide Tanaka, Aaron Cicourel, and Javier R
Movellan. 2007. Socialization between toddlers and
robots at an early childhood education center.
Proceedings of the National Academy of Sciences 104,
46 (2007), 17954–17958.
7. Sherry Turkle. 2005. The second self: Computers and
the human spirit. MIT Press.
8. Henry M Wellman, David Cross, and Julanne Watson.
2001. Meta-analysis of theory-of-mind development:
the truth about false belief. Child development 72, 3
(2001), 655–684.