Interaction Studies 11:2 (2010). doi 10.1075/is.11.2.01sha
issn 1572-0373 / e-issn 1572-0381 © John Benjamins Publishing Company
The crying shame of robot nannies
An ethical appraisal
Noel Sharkey & Amanda Sharkey
University of Sheffield, UK
Childcare robots are being manufactured and developed with the long term aim
of creating surrogate carers. While total childcare is not yet being promoted,
there are indications that it is ‘on the cards’. We examine recent research and
developments in childcare robots and speculate on progress over the coming
years by extrapolating from other ongoing robotics work. Our main aim is to
raise ethical questions about the part or full-time replacement of primary carers.
The questions are about human rights, privacy, robot use of restraint, deception of
children and accountability. But the most pressing ethical issues throughout the
paper concern the consequences for the psychological and emotional wellbeing
of children. We set these in the context of the child development literature on the
pathology and causes of attachment disorders. We then consider the adequacy
of current legislation and international ethical guidelines on the protection of
children from the overuse of robot care.
Who’s to say that at some distant moment there might
be an assembly line producing a gentle product in
the form of a grandmother – whose stock in trade is
love. From I Sing the Body Electric, Twilight Zone,
Series 3, Episode 35, 1962
1. Introduction
A babysitter/companion on call round the clock to supervise and entertain the
kids is the dream of many working parents. Now robot manufacturers in South
Korea and Japan are racing to fulfil that dream with affordable robot “nannies”.
These currently have game playing, quizzes, speech recognition, face recognition
and limited conversation to capture the preschool child’s interest and attention.
Their mobility and semi-autonomous function combined with facilities for visual
and auditory monitoring are designed to keep the child from harm. Most are
prohibitively expensive at present but prices are falling and some cheap versions
are already becoming available.
Children love robots as indicated by the numbers taking part in robot com-
petitions worldwide. Even in a war zone, when bomb disposal robots entered a
village in Iraq, they were swamped with excited children (Personal communica-
tion, Ronald C. Arkin, 2008). There is a growing body of research showing positive
interactions between children and robots in the home (e.g. Turkle et al. 2006 a,b),
and in the classroom (e.g. Tanaka et al. 2007; Kanda et al., 2009). Robots have also
been shown to be useful in therapeutic applications for children (e.g. Shibata et al.,
2001, Dautenhahn, 2003; Dautenhahn & Werry, 2004; Marti et al., 2005; Liu et al.,
2008). The natural engagement value of robots makes them a great motivational
tool for education in science and engineering. We raise no ethical objections to
the use of robots for such purposes or with their use in experimental research or
even as toys.
Our concerns are about the evolving use of childcare robots and the poten-
tial dangers they pose for children and society (Sharkey, 2008a). By extrapolating
from ongoing developments in other areas of robotics, we can get a reasonable
idea of the facilities that childcare robots could have available to them over the
next 5 to 15 years. We make no claims about the precision of the time estimate as
this has proved to be almost impossible for robotics and AI developments (Sharkey,
2008b). Our approach is conservative and explicitly avoids entanglement with
issues about strong AI and super smart machines. Nonetheless, it may not be long
before robots can be used to keep children safe and maintain their physical needs
for as long as required.
To be commercially viable, robot carers will need to enable considerably lon-
ger parent/carer absences than can be obtained from leaving a child sitting in front
of a video or television programme. Television and video have long been used by
busy parents to entertain children for short periods of time. But they are a passive
form of entertainment and children get fidgety after a while and become unsafe.
They need to be monitored with frequent “pop-ins” or the parent has to work in
the same room as the child and suffer the same DVDs while trying to concentrate.
The robot can extend the length of parent absences by keeping the child safe
from harm, keeping her entertained and, ideally, by creating a relationship bond
between child and robot (Turkle et al. 2006b).
We start with a simple example, the Hello Kitty Robot, which parents are
already beginning to use if the marketing website is to be believed. It gives an
idea of how these robots are already getting a ‘foot in the door’. Even for such a
robotically simple and relatively cheap robot, the marketing claims are that, “This
is a perfect robot for whoever does not have a lot time [sic] to stay with their
child.” (Hello Kitty website). Although Hello Kitty is not mobile, it creates a lifelike
appearance by autonomously moving its head to four angles and moving its arms.
What gives it an edge is that it can recognise voices and faces so that it can call
children by their names. It has a stereo CCD camera that allows it to track faces
and it can chat. For children this may be enough to create the illusion that it has
mental states (Melson et al. in press b).
Busy working parents might be tempted to think that a robot nanny could pro-
vide constant supervision, entertainment and companionship for their children.
Some of the customer reviews of the “Hello Kitty Robot” on the internet made
interesting reading. These have now been removed but we kept a copy (some of
the comments are also preserved at (Bittybobo)):
Since we have invited Hello Kitty (Kiki-as my son calls her), life has been so
much easier for everyone. My daughter is no longer the built in babysitter for
my son. Hello Kitty does all the work. I always set Kiki to parent mode, and
she does a great job. My two year old is already learning words in Japanese,
German, and French.
As a single executive mom, I spend most of my home time on the computer
and phone and so don’t have a lot of chance to interact with my 18-month old.
The HK robot does a great job of talking to her and keeping her occupied for
hours on end. Last night I came into the playroom around 1AM to find her,
still dressed (in her Hello Kitty regalia of course), curled sound asleep around
the big plastic Kitty Robo. How cute! (And, how nice not to hear those
heartbreaking lonely cries while I’m trying to get some work done.)
Robo Kitty is like another parent at our house. She talks so kindly to my little
boy. He’s even starting to speak with her accent! It’s so cute. Robo Kitty puts
Max to sleep, watches TV with him, watches him in the bath, listens to him
read. It’s amazing, like a best friend, or as Max says “Kitty Mommy!” Now
when I’m working from home I don’t have to worry about Max asking a bunch
of questions or wanting to play or having to read to him. He hardly even talks
to me at all! He no longer asks to go to the park or the zoo – being a parent has
NEVER been so easy! Thank you Robo Kitty!”
We are not presenting these anecdotal examples as rigorous evidence of how a
simple robot like Hello Kitty will generally be used. Other parents commenting
on the website were highly critical about these mothers being cold or undeserv-
ing of having children. We cannot authenticate these comments. Nonetheless
this example provides a worrying indication of what might be and what we need
to be prepared for. Perhaps it is only a small minority of parents who would rely
on such a simple robot to mind their pre-school children. But as more sophis-
ticated robots of the type we describe later become affordable, their use could
increase dramatically.
What follows is an examination of the present day and near-future childcare
robots and a discussion of potential ethical dangers that arise from their extended
use in caring for babies and young children. Our biggest concern is about what
will happen if children are left in the regular or near-exclusive care of robots.
First we briefly examine how near-future robots will be able to keep children
safe from harm and what ethical issues this may raise. Then we make the case,
from the results of research on child–robot interaction, that children can and will
form pseudo-relationships with robots and attribute mental states and sociality
to them. Children’s natural anthropomorphism could be amplified and exploited
by the addition of a number of methods being developed through research on
human–robot interaction, for example, in the areas of conversation, speech, touch,
face and emotion recognition. We draw upon evidence from the psychological
literature on attachment and neglect to look at the possible emotional harm that
could result from children spending too much time exclusively in the company of
mechanical minders.
In the nal section, we turn to current legislation and international ethical
guidelines on the care and rights of children to nd out what protections they have
from sustained or exclusive robot care. Our aim is not to oer answers or solutions
to the ethical dangers but to inform and raise the issues for discussion. It is up to
society, the legislature and the professional bodies to provide codes of conduct to
deal with future robot childcare.
2. Keeping children from physical harm
An essential ingredient for consumer trust in childcare robots is that they keep
children safe from physical harm. The main method used at present is mobile
monitoring. For example, the PaPeRo Personal Partner Robot by NEC (Yoshiro
et al., 2005) uses cameras in the robot’s ‘eyes’ to transmit images of the child to a
window on the parent-carer’s computer or to their mobile phone. The carer can
then see and control the robot to find the child if she moves out of sight. This is
like having a portable baby monitor but it defeats the purpose of mechanical care.
There is little point in having a childcare robot if the busy carer has to continuously
monitor their child’s behaviour. For costly childcare robots to be attractive to
consumers or institutions, they will need to have sufficient autonomous functioning
to free the carer’s time and call upon them only in unusual circumstances.
As a start in this direction, some childcare robots keep track of the location of
children and alert adults if they move outside of a preset perimeter. The PaPeRo
robot comes with PaPeSacks, each containing an ultrasonic sensor with a unique
signature. The robot can then detect the exact whereabouts of several children at
the same time and know which child is which. Similarly the Japanese Tmsuk robot
uses radio frequency identification tags. But more naturalistic methods of tracking
are now being developed that will eventually find their way into the care robot
market. For example, Lopes et al. (2009) have developed a method for tracking
people in a range of environments and lighting conditions without the use of sensor
beacons. This means that the robot will be able to follow a child outside and alert
carers of her location or encourage and guide her back into the home.
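To make this concrete, the perimeter alert reduces to a simple geometric check on tagged positions. The following Python sketch is illustrative only – the names, coordinates and the 5-metre threshold are our assumptions, not the PaPeRo or Tmsuk interfaces:

```python
# A minimal sketch of a perimeter ("geofence") alert over tagged children.
# Positions would come from the ultrasonic or RFID tags described above.
from dataclasses import dataclass
from math import hypot

@dataclass
class TaggedChild:
    name: str
    x: float  # estimated position in metres, from the tag's unique signature
    y: float

def check_perimeter(children, centre=(0.0, 0.0), radius_m=5.0):
    """Return (name, distance) for children outside the preset perimeter."""
    alerts = []
    for child in children:
        distance = hypot(child.x - centre[0], child.y - centre[1])
        if distance > radius_m:
            alerts.append((child.name, distance))
    return alerts

kids = [TaggedChild("Ana", 1.2, 0.8), TaggedChild("Ben", 6.5, 2.0)]
for name, d in check_perimeter(kids):
    print(f"ALERT: {name} is {d:.1f} m from the centre - notify the carer")
```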
We may also see the integration of care robots with other home sensing and
monitoring systems. There is considerable research on the development of smart
sensing homes for the frail elderly. These can monitor a range of potentially dangerous
activities such as leaving on water taps or cookers. They can monitor a person
getting out of bed and wandering. They can prompt the person with a voice to
remind them to go to the toilet and switch the toilet light on for them (Orpwood
et al., 2008). Vision systems can detect a fall and other sensors can determine if
assistance is required (Toronto Rehabilitation Unit Annual Report 2008, 40–41).
Simple versions of such systems could be adapted for use in robot childcare.
One ethical issue arising from such close monitoring is that every child has a
right to privacy under Articles 16 and 40 of the UN Convention on the Rights of the Child.
It is ne for parents to listen out for their children with a baby alarm. Parents
also frequently video and photograph their young children’s activities. In most
circumstances legal guardians have the right to full disclosure regarding a very
young child. However, there is something different between an adult being present to
observe a child and a child being covertly monitored when she thinks that she is
alone with her robot friend.
Without making too much of this issue, when a child discusses something
with an adult, she may expect that the discussion will be reported to a third party,
especially her parents. But sometimes conversations about issues concerning the
parents, such as abuse or injustice, should be treated in confidence. A robot might
not be able to keep such confidences from the parents before reporting the incident
to the appropriate authorities. Moreover, when a child has a discussion with a peer
friend (or robot friend) they may be doing so in the belief that it is in confidence.
With the massive memory hard drives available today, it would be possible
to record a child’s entire life. This gives rise to concerns about whether such close
invigilation is acceptable. Important questions need to be discussed here such as,
who will be allowed access to the recordings? Will the child, in later life, have the
right to destroy the records?
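A back-of-the-envelope calculation shows how feasible whole-childhood recording already is; the bitrate below is an illustrative assumption (a modestly compressed audio/video stream), not a measured figure:

```python
# Rough storage estimate for recording a child's entire early life.
GB_PER_HOUR = 1.0          # assumption: compressed A/V at ~2.2 Mbit/s
HOURS_PER_YEAR = 24 * 365
YEARS = 18

total_tb = GB_PER_HOUR * HOURS_PER_YEAR * YEARS / 1000
print(f"~{total_tb:.0f} TB for {YEARS} years of round-the-clock recording")
# ~158 TB: large, but already within reach of an array of commodity drives.
```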
Privacy aside, an additional way to increase autonomous supervision would
be to allow customisation of home maps so that a robot could encode danger
areas. is could be extended with better vision systems that could detect poten-
tially dangerous activities like climbing on furniture to jump. A robot could make
a rst pass at warning a child to stop doing or engaging in a potentially dangerous
activity in the same way that smart sensing homes do for the elderly. But there is
another ethical problem lurking in the shadows here.
If a robot could predict a dangerous situation, it could also be programmed
to autonomously take steps to physically prevent it rather than merely warn. For
example, it could take matches from the hands of a child, get between a child and
a danger area such as a re, or even restrain a child from carrying out a dangerous
or naughty action. However, restraining a child to avoid harm could be a slip-
pery slope towards authoritarian robotics. We must ask how acceptable it is for a
robot to make decisions that can affect the lives of our children by constraining
their behaviour.
It would be easy to construct scenarios where it would be hard to deny such
robot action. For example, if a child was about to run across the road into heavy
oncoming traffic and a robot could stop her, should it not do so? The problem is
in trusting the classifications and sensing systems of a robot to determine what is
a dangerous activity. As an extreme case, imagine a child having doughnuts taken
from her because the robot wanted to prevent her from becoming obese. There are
many discussions to be had over the extremes of robots blocking human actions
and where to draw the line (cf. Wallach & Allen, 2009).
Another ethically tricky area of autonomous care is in the development of
robots to do what some might consider to be the ‘dull and dirty’ work of childcare.
They may eventually be able to carry out tasks such as changing nappies, bathing,
dressing, feeding and adjusting clothing and bedding to accord with temperature
changes. Certainly, robot facilities like these are being thought about and developed
in Japan with an eye to caring for their aging population (Sharkey and Sharkey,
in press). Performing such duties would allow lengthier absences from human
carers but could be a step too far in childcare robotics; care routines are an impor-
tant component in fostering the relationship between a child and her primary carer
to promote healthy mental development. If we are not careful to lay out guidelines,
robots performing care routines could exacerbate some of the problems we discuss
later in the section on the psychological harm of robot childcare.
Carers who wish to leave their charges at home alone with a robot will need
to be concerned about the possibility of intruders entering the home for nefari-
ous purposes. Security is a major growth area in robotics and care robots could
incorporate some of the features being developed. For example, the Seoul authori-
ties, in combination with the private security company KT Telecop use a school
guard robot, OFRO, to watch out for potential paedophiles in school playgrounds.
It can autonomously patrol areas on pre-programmed routes and alert teachers if
it spots a person over a specific height (Metro, May 31st 2007; The Korea Times,
May 30th 2007). If we combine this with face recognition, already available on
some of the care robots, they could stop adults to determine if they were on the
trusted list and alert the authorities if necessary.
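As a sketch of how such a trusted-list check might work, the fragment below uses the open-source face_recognition library; the image files, threshold and alerting policy are our illustrative assumptions, not the interface of any actual care robot:

```python
# Hedged sketch: compare a detected face against a household trusted list.
import face_recognition  # pip install face_recognition

# Enrol one trusted adult from a reference photo (assumed to contain a face).
trusted = {
    "parent": face_recognition.face_encodings(
        face_recognition.load_image_file("parent.jpg"))[0],
}

def is_trusted(image_path, tolerance=0.6):
    """True if the first face found in the image matches a trusted encoding."""
    encodings = face_recognition.face_encodings(
        face_recognition.load_image_file(image_path))
    if not encodings:
        return False  # no face found: treat as untrusted
    matches = face_recognition.compare_faces(
        list(trusted.values()), encodings[0], tolerance=tolerance)
    return any(matches)

if not is_trusted("visitor.jpg"):
    print("Unrecognised adult detected - alerting carer/authorities")
```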
3. Relating to the inanimate
Another essential ingredient for consumer trust in childcare robots is that chil-
dren must want to spend time with them. Research has already begun to find
ways to sustain long term relationships between humans and robots (e.g. Kanda
et al., 2004; Mitsunaga, et al., 2006; Mavridis et al., 2009). Care robots are already
being designed to exploit both natural human anthropomorphism and the bond
that children can form with personal toys. The attribution of animacy to objects
possessing certain key characteristics is part of being human (Sharkey & Sharkey
2006). Puppeteers have understood and exploited the willing or unconscious
“suspension of disbelief” for thousands of years as have modern animators and
cartoonists. e characteristics they exploit can be visual, behavioural or auditory.
Even the vaguest suggestion of a face brings an object to life; something as simple
as a sock can be moved in a way that makes it into a cute creature (Rocks, et al.,
in press). Robots, by comparison, can greatly amplify anthropomorphic and zoo-
morphic tendencies. Unlike other objects, a robot can combine visual, movement
and auditory features to present a powerful illusion of animacy without a controller
being present.
Young children invest emotionally in their most treasured cuddly toy. They
may have difficulty sleeping without it and become distraught if it gets misplaced
or lost. e child can be asked, “What does Bear think about X?”. Bear can reply
through the child’s voice or by whispering in the child’s ear or by simply nod-
ding or waving an arm. This is a part of normal childhood play and pretence that
requires imagination, with the child in control of the action. As Cayton (2006
p. 283) points out, “When children play make-believe, ‘let’s pretend’ games they
absolutely know it is pretend… Real play is a conscious activity. Ask a child who
is playing with a doll what they are doing and they may tell you matter-of-factly
that they are going to the shops or that the doll is sick but they will also tell you
that they are playing”.
A puppet, on the other hand, is outside of the child’s control and less imagination
and pretence are required. But a child left alone with a puppet soon realises the
illusion and the puppet can then be classified in the ‘let’s pretend’ category. The
difference with a robot is that it can still operate and act when no one is standing
next to it or even when the child is alone with it. This could create physical,
social and relational anthropomorphism that a child might perceive as ‘real’ and
not illusion.
There is a gradually accumulating body of evidence that children of all ages
can come to believe in the reality of a relationship they have with robots. Melson
et al. (in press a) report three studies that employed Sony’s robotic dog AIBO: (i) a
content analysis of 6,438 internet discussion forum postings by 182 AIBO owners;
(ii) observations and interviews with 80 preschoolers during a 40-minute play
period with AIBO and a stuffed dog; and (iii) observations and interviews with
72 school-age children from 7 to 15 years old who played with both AIBO and a
living dog. e majority of participants across all three studies viewed AIBO as a
social companion: both the preschool and older children said that AIBO could
be their friend, that they could be a friend to AIBO, and that if they were sad, they
would like to be in the company of AIBO”.
In a related study, Kahn et al. (2006) looked at the responses of two groups
of preschoolers – 34–50 months and 58–74 months – in a comparison between an
AIBO and a stuffed dog. They found that a quarter of the children, in verbal evaluations,
accorded animacy to the AIBO, half accorded biological properties and
around two-thirds accorded mental states. But a very similar pattern of evaluation
was found for the stuffed dog. The interesting thing here is that the children’s
behaviour towards the two artefacts did not fit with their evaluations. Based on
2,360 coded behavioural interactions, the children exhibited significantly more
apprehensive and reciprocal behaviours with the AIBO whilst they more often
mistreated the stuffed dog (184 occurrences versus 39 for AIBO). Thus the verbal
reports were not as reliable an indicator as the behavioural observations. The robot
was treated more like a living creature than the stuffed dog.
Children can also form relationships with humanoid robots. Tanaka et al. (2007)
placed a “state-of-the-art” social robot (QRIO) in a day care centre for 5 months.
They report that children between 10 and 24 months bonded with the robot in a
way that was significantly greater than their bonding with a teddy bear. Tanaka
et al. claim that the toddlers came to treat the robot as one of their peers. They
looked after it, played with it, and hugged it. They touched the robot more than
they hugged or touched a static toy robot, or a teddy bear. The researchers related
the children’s relationship with the robot to Harlow’s (1958) “affectional responses”.
They claimed that “long-term bonding and socialization occurred between toddlers
and a state-of-the-art social robot” (Tanaka et al., 2007 p. 17957).
Turkle et al. (2006a) report a number of individual case studies that attest
to children’s willingness to become attached to robots. For example, one of the
case studies was of a ten year old girl, Melanie, who was allowed to take home a
robotic doll, “My Real Baby”, and an AIBO for several weeks. The development
of the girl’s relationship with the robots is apparent from her interview with
the researcher.
“Researcher: Do you think the doll is different now than when you first started
playing with it?
Melanie: Yeah. I think we really got to know each other a whole lot better. Our
relationship, it grows bigger. Maybe when I first started playing with her, she
didn’t really know me so she wasn’t making as much [sic] of these noises, but
now that she’s played with me a lot more, she really knows me and is a lot more
outgoing. Same with AIBO” (Turkle et al. 2006b, p. 352).
In another paper, Turkle et al. (2006b) chart the first encounters of 60 children
between the ages of five and thirteen with the MIT robots Cog and Kismet. The
children anthropomorphised the robots, made up “back stories” about their
behaviour, and developed “a range of novel strategies for seeing the robots not
only as ‘sort of alive’ but as capable of being friends and companions”. The
children were so ready to form relationships with the robots, that when they failed
to respond appropriately to their interactions, the children created explanations
of their behaviour that preserved their view of the robot as being something with
which they could have a relationship. For example, when Kismet failed to speak to
them, children would explain that this was because it was deaf, or ill, or too young
to understand, or shy, or sleeping. Their view of the robots did not even seem to
change when the researchers spent some time showing them how they worked,
and emphasising their underlying machinery.
Melson and her colleagues (Melson et al., in press b) directly compared children’s
views of and interactions with a living dog and a robot dog. The children did
see the live dog as being more likely than the AIBO to have physical essences,
mental states, sociality and moral standing. However, a majority of the children
still thought of and interacted with AIBO as if it were a real dog; they were as likely
to give commands to the AIBO as to the living dog and over 60% affirmed that
AIBO had “mental states, sociality and moral standing”.
Overall, the pattern of evidence indicates that the illusion of robot animacy
works well for children from preschool to at least early teens. Robots appear to
amplify natural anthropomorphism. Children who spent time with robots saw
them as friends and felt that they had formed relationships with them. They even
believed that a relatively simple robot was getting to know them better as they
played with it more. A large percentage was also willing to attribute mental states,
sociality and moral standing to a simple robot dog. Kahn et al. (2006) suggest that
a new technological genre of autonomous, adaptive, personified and embodied
artefacts is emerging that the English language is not well-equipped to handle.
They believe that there may be a need for a new ontological category beyond the
traditional distinction between animate and inanimate.
4. Extending the reach of childcare robots
There are a number of ways in which current childcare robots interact with
children. The main methods involve touch, language with speech recognition, tracking,
maintaining eye contact and face recognition among others. Extending social
interaction with better computational conversation and the ability to respond
contingently with facial expressions could result in more powerful illusions of
personhood and intent to a young child. It could make child-robot relationships
stronger and maintain them for longer. We discuss each of the current interactive
features in turn together with their possible near-future extensions.
Touch is an important element of human interaction (Hertenstein et al., 2006)
particularly with young children (Hertenstein, 2002). It has been exploited in the
development of robot companions and several of the manufacturers have inte-
grated touch sensitivity into their childcare machines in different ways. It seems
obvious that a robot responding contingently to touch by purring or making pleas-
ing gestures will increase its appeal. For example, Tanaka et al. (2007) reported
that children were more interested in the QRIO robot when they discovered that
patting it on the head caused it to ‘giggle’.
The PaPeRo robot has four touch sensors on the head and five around its body
so that it can tell if it is being patted or hit. iRobiQ has a bump sensor, and touch
screen as well as touch sensors on the head, arms and wheels. The Probo robot
(Goris et al., 2008, 2009) is being developed to recognise different types of affective
touch such as slap, tickle, pet and poke. The Huggable (Stiehl et al. 2005, 2006) has
a dense sensor network for detecting the affective component of touch in rubbing,
petting, tapping, scratching and other types of interactions that a person nor-
mally has with a pet animal. It has four modalities for touch, pain, temperature
and kinaesthetic information.
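The step from raw touch readings to affective categories can be caricatured as follows; the features and thresholds are invented for illustration and bear no relation to the Huggable’s or Probo’s actual classifiers:

```python
# Toy affective-touch classifier over crude hand-picked features.
def classify_touch(pressure, duration_s, repeats):
    """Map normalised pressure, contact duration and repetition to a label."""
    if pressure > 0.8 and duration_s < 0.3:
        return "slap"
    if pressure < 0.3 and repeats > 3:
        return "tickle"
    if pressure < 0.5 and duration_s > 1.0:
        return "pet"
    return "poke"

print(classify_touch(pressure=0.2, duration_s=2.5, repeats=1))  # pet
```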
Ongoing experimental research on touch is finding out the best way to create
emotional responses (Yohanan et al., 2005; Yohanan and Maclean, 2008). There is
also research on the impact of a robot proactively touching people – like a “gimme
five” gesture or an encouraging pat on the shoulder (Cramer et al. 2009). Touch
technology will improve over the next few years with better, cheaper and smaller
sensors available to create higher resolution haptic sensitivity. This will greatly
improve the interaction and friendship links with small children.
Robots could even have an advantage over humans in being allowed to touch
children. In the UK, for example, there has been considerable discussion about
the appropriateness of touching children by teachers and child minders. Teachers
are reluctant to restrain children from hurting other children for fear of being
charged with sexual offences or assault. Similarly childcare workers and infant
school teachers are advised strongly not to touch children or hug them. Even
music teachers are asked not to touch children’s hands to instruct them on how to
hold an instrument unless absolutely necessary and then only after warning them
very explicitly and asking for their permission. These restrictions would not apply
to a robot because it could not be accused of having sexual intent and so there are
no particular ethical concerns. The only concern would be the child’s safety, e.g.
not being crushed by a hugging robot.
Another key element in interaction is spoken language. Even a doll with a
recorded set of phrases that can be activated by pulling a string, can keep children
entertained for hours by increasing the feeling of living reality for the child. We
found eight of the current childcare robots that could talk to some extent and had
speech recognition capability for simple commands. For example, iRobi, by Yujin
Robotics of South Korea responds to 1000 words of voice commands. None had
a full blown natural language processing interface, yet they can create the illusion
of understanding.
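A minimal sketch of this restricted, command-style interface is given below, using the Python SpeechRecognition package; the command vocabulary and the fallback behaviour are our assumptions, since the actual robots use proprietary recognisers:

```python
# Sketch of limited-vocabulary command recognition with a deflecting fallback.
import speech_recognition as sr  # pip install SpeechRecognition

COMMANDS = {"sing a song", "play a quiz", "tell a joke", "come here"}

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    audio = recognizer.listen(mic)
try:
    text = recognizer.recognize_google(audio).lower()
    if text in COMMANDS:
        print(f"Executing command: {text}")
    else:
        # Out-of-vocabulary input: deflect, much as PaPeRo jokes or dances.
        print("Let's dance instead!")
except sr.UnknownValueError:
    print("Sorry, I didn't catch that.")
```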
The PaPeRo robot is one of the most advanced and can answer some simple
questions. For example, when asked, “What kind of person do you like?” it answers,
“I like gentle people”. It can even give children simple quizzes and recognise if
their answers are correct. PaPeRo gets out of conversational difficulties by making
jokes or by dancing to distract children. This is very rudimentary compared to
what is available in the rapidly advancing areas of computational natural language
processing and speech recognition. Such developments could lead to care robots
being able to converse with young children in a superficially convincing way
within the next 5 to 10 years.
Face recognition is another important factor in developing relationships (Kanda
et al., 2004). Some care robots are already able to store and recognise a limited num-
ber of faces, allowing them to distinguish between children and call them by name.
The RUBI robot system has built-in face detection that enables it to autonomously
find and gaze at a face. This is a very useful way to engage a child and convince her
that the robot has “intent”. Spurred on by their importance in security applications,
face recognition methods are improving rapidly. Childcare robots of the future will
adopt this technology to provide rapid face recognition of a wide range of people.
An even more compelling way to create the illusion of a robot having mental
states and intention is to give it the ability to recognise the emotion conveyed by
a child’s facial expression. The RUBI project team has been working on expression
recognition for about 15 years with their computer expression recognition
toolbox (CERT) (Bartlett et al., 2008). This uses Ekman’s facial action units
(Ekman & Friesen, 1978) which were developed to classify all human expressions.
The latest development uses CERT in combination with a sophisticated robot head
to mimic people’s emotional expressions.
The head, by David Hanson, resembles Albert Einstein and is made of a polymer
material called Flubber that makes it resemble human skin and provides flexibility
of movement. Javier Movellan, the team leader, said that, “We got the Einstein
robot head and did a first pass at driving it with our expression recognition system.
In particular we had Einstein looking at himself in a mirror and learning how to
make expressions using feedback from our expression recognition. This is a trivial
machine learning problem.” (personal communication, February 27, 2009). The
head can mimic up to 5,000 different expressions. This is still at an early stage of
development but will eventually, “assist with the development of cognitive, social
and emotional skills of your children” (ibid).
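The FACS idea underlying CERT can be illustrated with a toy mapping from detected facial action units (AUs) to prototypical expressions; the AU combinations follow common FACS descriptions, while the upstream detector is assumed:

```python
# Toy FACS-style lookup: sets of action units mapped to basic expressions.
EMOTION_AU_PATTERNS = {
    "happiness": {6, 12},       # cheek raiser + lip corner puller
    "sadness": {1, 4, 15},      # inner brow raiser + brow lowerer + lip depressor
    "surprise": {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
}

def label_expression(detected_aus):
    """Return the first prototypical emotion whose AU pattern is present."""
    for emotion, pattern in EMOTION_AU_PATTERNS.items():
        if pattern <= detected_aus:  # subset test
            return emotion
    return "neutral/unknown"

print(label_expression({1, 2, 5, 26}))  # surprise
```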
Robots can be programmed to react politely to us, to imitate us, and to behave
acceptably in the presence of humans (Fong et al., 2003). As the evidence pre-
sented earlier suggests, we have reached a point where it is possible to make chil-
dren believe that robots can understand them at least some of the time. Advances
in language processing, touch and expression recognition will act to strengthen
the illusion. Although such developments are impressive, they are not without
ethical concerns.
An infant entertaining a relationship with a robot may not be in a position
to distinguish this from a relationship with a socially and emotionally compe-
tent being. As Sparrow pointed out about relationships with robot pets, “[they]
are predicated on mistaking, at a conscious or unconscious level, the robot for a
real animal. For an individual to benefit significantly from ownership of a robot
pet they must systematically delude themselves regarding the real nature of their
relation with the animal. It requires sentimentality of a morally deplorable sort.
Indulging in such sentimentality violates a (weak) duty that we have to ourselves
to apprehend the world accurately. The design and manufacture of these robots is
unethical in so far as it presupposes or encourages this” (Sparrow, 2002).
Sparrow was talking about the vulnerable elderly but the evidence presented
in this section suggests that young children are also highly susceptible to the belief
that they are forming a genuine relationship with a robot. We could say in abso-
lute terms that it is ethically unacceptable to create a robot that appears to have
mental states and emotional understanding. However, if it is the child’s natural
anthropomorphism that is deceiving her, then it could be argued that there are no
moral concerns for the roboticist or manufacturer. After all, there are many similar
illusions that appear perfectly acceptable to our society. As in our earlier example,
when we take a child to a puppet show, the puppeteer creates the illusion that the
puppets are interacting with each other and the audience. The ‘pretend’ attitude of
the puppeteer may be supported by the parents to ‘deceive’ very young children
into thinking that the puppets have mental states. But this minor ‘deception’ might
better be called ‘pretence’ and is not harmful in itself as long as it is not exploited
for unethical purposes.
It is difficult to take an absolutist ethical approach to questions about robots
and deception. Surely the moral correctness comes down to the intended applica-
tion of an illusion and its consequences. Drawing an illusion on a piece of paper to
fool our senses is an entertainment, but drawing it on the road to fool drivers into
crashing is morally unjustifiable. Similarly, if the illusion of a robot with mental
states is created for a movie or a funfair or even to motivate and inspire children
at school there is no harm.
The moral issue arises and the illusion becomes a harmful deceit both when
it is used to lure a child into a false relationship with a robot and when it leads
parents to overestimate the capabilities of a robot. If such an illusory relationship is
used in combination with near-exclusive exposure to robot care, it could possibly
damage a child emotionally and psychologically, as we now discuss.
5. Psychological risks of robot childcare
It is possible that exclusive or near exclusive care of a child by a robot could result in
cognitive and linguistic impairments. We only touch on these issues in this section
as our main focus here is on the ways in which a child’s relationship with a robot
carer could affect the child’s emotional and social development and potentially
lead to pathological states. The experimental research on robot–child interaction
to date has been short term with limited daily exposure to robots and mostly under
adult supervision. It would be unethical to conduct experiments on long term care
of children by robots. What we can do though, is make a ‘smash and grab raid’ on
the developmental psychology literature to extract pointers to what a child needs
for a successful relationship with a carer.
A fruitful place to start is with the considerable body of experimental research
on the theory of attachment (Ainsworth et al. 1978; Bowlby, 1969, 1980, 1998). This
work grew out of concerns about young children raised in contexts of less-than-adequate
care giving, who had later difficulties in social relatedness (Zeanah et al.,
2000). Although the term ‘attachment’ has some definitional difficulties, Hofer
(2006) has noted that it has “found a new usefulness as a general descriptive term
for the processes that maintain and regulate sustained social relationships, much
the same way that appetite refers to a cluster of behavioral and physiological pro-
cesses that regulate food intake” (p. 84).
A fairly standard definition that suits our purposes here is that “Infant attachment
is the deep emotional connection that an infant forms with his or her primary
caregiver, often the mother. It is a tie that binds them together, endures over
time, and leads the infant to experience pleasure, joy, safety, and comfort in the
caregiver’s company. The baby feels distress when that person is absent. Soothing,
comforting, and providing pleasure are primary elements of the relationship.
Attachment theory holds that a consistent primary caregiver is necessary for a
child’s optimal development.” (Swartout-Corbeil, 2006). Criticising such definitions,
Mercer (in press) acknowledges that while it is true that attachment has a
strong emotional component, cognitive and behavioural factors are also present.
There is always controversy within developmental psychology about the
detailed aspects of attachment. Our aim is not to present a novel approach to
attachment theory but to use the more established findings to warn about the
possibility of harmful outcomes from robot care of children. Here we take a broad
brush stroke approach to the psychological data. Given the paucity of research
on childcare robots we have not been age specific, but our concerns are predominantly
with the lower age groups – babies to preschoolers up to five years old – that
appear to be the target group of the manufacturers.
One well established finding is that becoming well adjusted and socially attuned
requires a carer with sufficient maternal sensitivity to perceive and understand an
infant’s cues and to respond to them promptly and appropriately (Ainsworth et al.,
1974). It is this that promotes the development of secure attachment in infants and
allows them to explore their environment and develop socially. But insecure forms
of attachment can develop even when the primary carer is human. Extrapolating
from the developmental literature, we will argue below that a child left with a robot
in the belief that she has formed a relationship with it, would at best, form an
insecure attachment to the robot but is more likely to suffer from a pathological
attachment disorder.
Responding appropriately to an infant’s cues requires a sensitive and subtle
understanding of the infant’s needs. We have already discussed a number of ways
in which the relationship between a child and a robot can be enhanced when the
robot responds contingently to the child’s actions with touch, speech or emo-
tional expressions. When the responses are not contingent, pre-school children
quickly lose interest as Tanaka et al. (2007) found when they programmed a robot
to perform a set dance routine. However, there is a significant difference between
responding contingently and responding appropriately to subtle cues and signals.
We humans understand and empathise with a child’s tears when she falls because
we have experienced similar injuries when we were children, and we know what
comforted us.
There is more to the meaning of emotional signals than simply analysing
and classifying expressions. Our ability to understand the behaviour of others is
thought to be facilitated by our mirror neurons (Rizzolatti et al., 2000; Caggiano
et al., 2009). Gallese (2001) argues that a mirror matching system underlies our
ability to perceive the sensations and emotions of others. For instance, it is possible
to show that the same neurons become active when a person feels pain as when
observing another feeling pain (Hutchinson et al., 1999).
Responding appropriately to the emotions of others is a contextually sensitive
ability that humans are particularly skilled at from a very young age. Even new-
borns can locate human faces and imitate their facial gestures (Meltzoff & Moore,
1977). By 12 months, infants are able to interpret actions in context (Woodward &
Somerville, 2000). By 18 months, they can understand what another person
intends to do with an instrument and they will complete a goal-directed behaviour
that someone else fails to complete (Meltzoff, 1995; Herrmann et al., 2007).
No matter how good a machine is at classifying expressions or even respond-
ing with matching expressions, children require an understanding of the reasons
for their emotional signals. A good carer’s response is based on grasping the cause
of emotions rather than simply acting on the emotions displayed. We should
respond differently to a child crying because she has lost her toy than because she
has been abused. A child may over-react to a small event and a caring human may
realise that there is something else going on in the child’s life like the parents hav-
ing a row the night before. Appropriate responses require human common sense
reasoning over a very large, possibly infinite, number of circumstances to ascertain
what may have caused an unhappy expression. “Come on now, cheer up” might
not always be the best response to a sad face.
A human carer may not get a full and complete understanding of the context
of an emotion every time but they will make a good guess with a high hit rate and
can then recalculate based on the child’s subsequent responses.
Advances in natural language processing using statistical methods to search
databases containing millions of words could lead to superficially convincing
conversations between robots and children in the near future. However, we should
not mistake such interactions as being meaningful in the same way as caring
adult–child interactions. It is one thing for a machine to give a convincing con-
versational response to a remark or question and a completely different thing to
provide appropriate guidance or well founded answers to puzzling cultural ques-
tions. ere are many cues that an adult human uses to understand what answer
the child requires and at what level.
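The gap between conversational plausibility and understanding is easy to demonstrate. The toy retrieval-style responder below simply returns the canned answer of the nearest stored question; the corpus and matching scheme are invented for illustration:

```python
# Toy retrieval chatbot: superficially apt replies with no understanding.
import difflib

CANNED = {
    "what kind of person do you like": "I like gentle people.",
    "will you be my friend": "Of course! Friends forever.",
    "why is the sky blue": "What a big question! Shall we play a quiz?",
}

def reply(utterance):
    key = utterance.lower().strip("?! .")
    match = difflib.get_close_matches(key, CANNED.keys(), n=1, cutoff=0.5)
    # The nearest stored string wins; nothing grounds the reply in the
    # child's actual situation, mood or needs.
    return CANNED[match[0]] if match else "Let's dance!"

print(reply("What kind of person do YOU like?"))  # I like gentle people.
```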
Language interactions between very young children and adults are transactional
in nature: both participants change over time. Adults change register
according to the child’s abilities and understanding. They continuously assess the
child’s comprehension abilities through both language and non-verbal cues and
push the child’s understanding along. This is required both for language development
and cognitive development in general. It would be extremely difficult to
find specifiable rules that a robot could apply for transactional communication to
adequately replace a carer’s intuitions about appropriate guidance.
The consequence for children of contingent but inappropriate responses could
be an insecure attachment called ‘anxious avoidant attachment’. Typically, mothers
with insecurely attached children are, “less able to read their infant’s behaviour,
leading them to try to socialise with the baby when he is hungry, play with him
when he is tired, and feed him when he is trying to initiate social interaction”
(Ainsworth et al., 1974 p. 129). Babies with withdrawn or depressed mothers are
more likely to suffer aberrant forms of attachment: avoidance, or disorganised
attachment (Martins & Gaffan, 2000).
‘Maternal sensitivity’1 provides a detailed understanding of an infant’s emo-
tional state. Responses need to be tailor made for each child’s particular personality.
A timid child will need a different response from an outgoing one, and a tired
child needs different treatment from a bored one. Off-the-shelf responses, how-
ever benign, will not create secure attachment for a child: “If he’s bored he needs
a distraction. If he’s hungry he needs food. If he has caught his foot in a blanket, it
needs releasing. Each situation requires its own tailor-made response, suitable for
the personality of a particular baby. Clearly, it isn’t much use being given a rattle
when you are hungry, nor being rocked in your basket if your foot is uncomfort-
ably stuck” (Gerhardt, 2004, p. 197).
Another important aspect of maternal sensitivity is the role played by “mind-
mindedness”, or the tendency of a mother to “treat her infant as an individual with
a mind rather than merely as a creature with needs that must be satisfied” (Meins
et al., 2001). Mind-mindedness has also been shown to be a predictor of the security
of attachment between the infant and mother. It comes from the human ability to
form a theory of mind based on knowledge of one’s own mind and the experience
of others. It allows predictions about what an infant may be thinking or intend-
ing by its actions, expressions and body language. A machine without a full blown
theory of mind (or a mind) could not easily demonstrate mind-mindedness.
Other types of insecure attachment are caused by not paying close enough
attention to a child’s needs. If the primary carer responds unpredictably, it
can lead to an ambivalent attachment where the child tends to overly cling to
her caregiver and to others. More recently, a fourth attachment category, dis-
organised attachment, has been identied (Solomon & George, 1999; Schore,
2001). It tends to result from parents who are overtly hostile and frightening to
their children, or who are so frightened themselves that they cannot attend to
their children’s needs. Children with disorganised attachment have no consistent
attachment behaviour patterns.
While it seems unlikely that a robot could show a sufficient level of sensitivity
to engender secure attachment, it could be argued that the robot is only stand-
ing in for the mother in the same way as a human nanny stands in. But a poor
nanny can also cause emotional or psychological damage to a child. Children and
babies are resilient but there is clear evidence that children do better when placed
with childminders who are highly responsive to them. Elicker et al. (1999) found
that the security of attachment of children (aged 12 to 19 months) to their child-
care providers varied depending on the quality of their interactions. Dettling et al.
(2000) studied children aged between 3 and 5 years old in home-based day care.
They found that when they were looked after by a focused and responsive carer,
their stress levels, as measured by swabbing them for cortisol, were similar to those
of children cared for at home by their mother. In contrast, cortisol testing of chil-
dren cared for in group settings with less focused attention indicated increased
levels of stress. Belsky et al. (2007) found that children between 4.5 and 12 years
old were more likely to have problems, as reported by teachers, if they had spent
more time in childcare centres. At the same time they found that an effect of higher
quality care showed up in higher vocabulary scores.
Thus even regular part-time care by a robot may cause some stress and minor
behavioural problems for children. But we are not suggesting that occasional use
will be harmful, especially if the child is securely attached to their primary carer;
it may be no more harmful than watching television for a few hours. However, it is
dicult at present, without the proper research, to compare the impact of passive
entertainment to a potentially damaging relationship with an interactive artefact.
e impact will depend on a number of factors such as the age of the child, the
type of robot and the tasks that the robot performs.
In our earlier discussion of robot–child interaction research, we noted claims
that children had formed bonds and friendships with robots. However, in such
research, the terms ‘attachment’, ‘bonding’ and ‘relationship’ are often used in a
more informal or different way than in developmental psychology. This makes it
difficult to join them at the seams. Attachment theorists are not just concerned
with the types of attachment but also with their consequences. As Fonagy (2003)
pointed out, attachment is not an end in itself, although secure attachment is
associated with better development of a wide range of abilities and competencies.
Secure attachment provides the opportunity “to generate a higher order regulatory
mechanism: the mechanism for appraisal and reorganisation of mental contents”
(Fonagy, 2003, p. 230).
A securely attached child develops the ability to take another’s perspective.
When the mother or carer imitates or reflects their baby’s emotional distress in
their facial expression, it helps the baby to form a representation of their own
emotions. This social biofeedback leads to the development of a second order symbolic
representation of the infant’s own emotional state (Fonagy, 2003; Gergely &
Watson, 1996, 1999), and facilitates the development of the ability to empathise,
and understand the emotions and intentions of others. These are not skills that any
near-future robot is likely to have.
When a young child encounters unfamiliar or ambiguous circumstances,
they will, if securely attached, look to their caregiver for clues about how to behave.
This behaviour is termed “social referencing” (Feinman 1982). The mother or
carer provides clues about the dangers, or otherwise, of the world, particularly
by means of their facial expressions. For example, Hornik et al. (1987) found that
securely attached infants played more with toys that their mothers made positive
emotional expressions about, and less with those that received negative expressions.
A more convincing example of the powerful effect of social referencing is
provided by research using a Gibson visual cliff. The apparatus, frequently used in
depth-perception studies, gives the child an illusion of a sheer drop onto the floor
(the drop is actually made safe by being covered with a clear plexiglass panel). Ten
month olds will look at their mother’s face, and continue to crawl over the apparently
perilous edge towards an attractive toy if their mothers smile and nod. They
back away if their mothers look fearful or doubtful (Sorce et al., 1985).
It would certainly be possible to create a robot that provided facial indications
of approval or disapproval of certain actions for the child. But before a robot can
approve or disapprove, it needs to be able to predict and recognise what action the
child is intending. And even if it could predict accurately, it would need to have a
sense of what is or is not a sensible action for a given child in a particular circum-
stance. With such a wide range and large number of possible actions that a child
could intend, it seems unlikely that we could devise a robot system to make appro-
priate decisions. As noted from the studies cited above, it is important that responses
are individually tailored, sensitive to the child’s needs, consistent and predictable.
6. Is robot care better than minimal care?
Despite the drawbacks of robot care, it could be argued that it would be prefera-
ble and less harmful than leaving a child with minimal human contact. Studies
of the shocking conditions in Romanian orphanages show the effects of extreme
neglect. Nelson et al. (2007) compared the cognitive development of young children
reared in Romanian institutions to that of those moved to foster care with fami-
lies. Children were randomly assigned to be either fostered, or to remain in insti-
tutional care. e results showed that children reared in institutions manifested
greatly diminished intellectual performance (borderline mental retardation) com-
pared to children reared in their foster families. Chugani et al. (2001) found that
Romanian orphans who had experienced virtually no mothering, differed from
children of comparable ages in their brain development – and had less active orbito-
frontal cortex, hippocampus, amygdala and temporal areas.
But would a robot do a better job than scant human contact? We have no
explicit evidence but we can get some clues from animal research in the 1950s,
when researchers were less concerned about ethical treatment. Harlow (1959) compared
the effect on baby monkeys of being raised in isolation with two different types of
artificial “mother”: a wire-covered, or a soft terry-cloth covered wire frame surrogate
“mother”. Those raised with the soft mother substitute became attached to it,
and spent more time with it than with the wire covered surrogate even when the
wire surrogate provided them with their food. Their attachment to the surrogate
was demonstrated by their increased confidence when it was present – they would
return and cling to it for reassurance, and would be braver – venturing to explore
a new room and unfamiliar toys, instead of cowering in a corner. The babies fed
quickly from the wire surrogate and then returned to cuddle and cling to the terry
cloth one.
This suggests that human infants might do better with a robot carer than with
no carer at all. But the news is not all good. Even though the baby monkeys became
attached to their cloth covered surrogates, and obtained comfort and reassur-
ance from them, they did not develop normally. They exhibited odd behaviours
and “displayed the characteristic syndrome of the socially-deprived macaque:
they clutched themselves, engaged in non-nutritive sucking, developed stereo-
typed body-rocking and other abnormal motor acts, and showed aberrant social
responses” (Mason & Berkson, 1975).
Although Harlow’s monkeys clearly formed attachments to inanimate surro-
gate mothers, the surrogates left them seriously lacking in the skills needed to
reach successful maturity. Of course, a robot nanny could be more responsive than
the cuddly surrogate statues. In fact when the surrogate terry-cloth mother was
hung from the ceiling so that the baby monkeys had to work harder to hug it as
it swung, they developed more normally that when the surrogate was stationary
(Mason & Berkson, 1975). But these were not ideal substitutes for living mothers.
e monkeys did even better when they were raised in the company of dogs which
were not mother substitutes at all.
We could conclude that robots would be better than nothing in horrific situations
like the Romanian orphanages. But they would really need to be a last resort.
Without systematic experimental work we cannot tell whether or not exclusive
care by a robot would be pathogenic. It is even possible that the severe deprivation
that exclusive care might engender could lead to the type of impaired development
pattern found in Reactive Attachment Disorder (RAD) (Zeanah et al., 2000). RAD
was first introduced in DSM-III (American Psychiatric Association, 1980). The
term is used both in the World Health Organization's International Statistical
Classication of Diseases and Related Health Problems (ICD-10) and in the DSM-
IV-TR, (American Psychiatric Association, 1994).
Reactive Attachment Disorder is defined by inappropriate social relatedness,
manifested either in (i) a failure to appropriately initiate or respond to social
encounters, or (ii) indiscriminate sociability or diffuse attachment. Although
Rushton and Mayes (1997) warn against overuse of the RAD diagnosis, it is still
possible that the inappropriate and exclusive care of a child by a robot could lead
to behaviour indicative of RAD.
Another worry is that a "robots are better than nothing" argument could lead
to more widespread use of the technology in situations where there is a shortage
of funding, and where what is actually needed is more staff and better regulation.
It is a different matter to use a teleoperated robot as a parental stand-in for children
who are in hospital, perhaps quarantined, or whose parent needs to be far away.
Robots under development, like the MIT Huggable (Stiehl et al. 2005, 2006) or the
Probo (Goris et al. 2008, 2009), fulfil that role and allow carers to communicate
with and hug their children remotely. Such robots do not give rise to the same ethical
concerns as exclusive or near-exclusive care by autonomous robots.
Overall, the evidence presented in this section points to the kinds of emotional
harm that robot carers might cause if infants and young children, lacking
appropriate human attachment, were overexposed to them at critical periods in
their development. We have reviewed evidence of the kinds of human skills and
sensitivities required to create securely attached children and compared these
with current robot functionality. While we have no direct experimental support as
yet, it seems clear that robots lack the abilities necessary to adequately replace
human carers. Given the potential dangers, much more investigation needs to be
carried out before robot nannies are freely available on the market.
Legal protections and accountability
The whole idea of robot childcare is a new one and has not had time to get into the
statute books. There have been no legal test cases yet and there is little provision in
the law. The various international nanny codes of ethics (e.g. FICE Bulletin 1998)
do not deal with the robot nanny, but they require the human nanny to ensure that
the child is socialised with other children and adults and is taught social
responsibility and values. These requirements are not enforceable in law.
There are a number of variations in the child protection laws of different
European countries, the USA and other developed countries, but essentially legal
cases against the overuse of robot care would have to be mounted on grounds
of neglect, abuse or mistreatment, and perhaps on grounds of delaying social
and mental development. The National Society for the Prevention of Cruelty
to Children (NSPCC) in the UK regards neglect as "the persistent lack of appropriate
care of children, including love, stimulation, safety, nourishment, warmth,
education and medical attention. It can have a serious effect on a child's physical,
mental and emotional development. For babies and very young children, it can be
life-threatening."
There are currently no international guidelines, codes of practice or legislation
specifically dealing with a child being left in the care of a robot. There has been talk
from the Japanese Ministry of Trade and Industry (Lewis, 2007), and the South
Korean Ministry of Economy, Trade and Industry (Yoon-mi, 2007), about drawing
up ethical and safety guidelines. The European Robotics Research Network has also
suggested a number of areas in robotics needing ethical guidelines (Veruggio, 2006),
but no guidelines or codes have yet appeared from any of these sources. Some even
argue that, "because different cultures may disagree on the most appropriate uses
for robots, it is unrealistic and impractical to make an internationally unified code
of ethics" (Guo & Zhang, 2009). There is certainly some substance to this argument:
as Guo and Zhang (2009) point out, "the value placed on the development of
independence in infants and toddlers could lead to totally divergent views of the
use of robots as caregivers for children." However, despite cultural differences,
we believe that there are certain inviolable rights that should be afforded to all
children regardless of culture: for example, all children have a right not to be treated
cruelly, neglected, abused or emotionally harmed.
The United Nations Convention on the Rights of the Child gives 40 major
rights to children and young persons under 18. The most pertinent of these is
Article 19, which states that "Governments must do everything to protect children
from all forms of violence, abuse, neglect and mistreatment". Article 27 requires
that "States Parties recognize the right of every child to a standard of living adequate
for the child's physical, mental, spiritual, moral and social development".
These articles could be seen as applying, if only loosely, to the care of children by
robots, but that is certainly far from clear.
In the USA, Federal legislation identifies a minimum set of acts or behaviours
that define child abuse and neglect. The Federal Child Abuse Prevention and
Treatment Act (CAPTA) (42 U.S.C.A. §5106g), as amended by the Keeping Children
and Families Safe Act of 2003, defines child abuse and neglect as, at minimum:
Any recent act or failure to act on the part of a parent or caretaker which results
in death, serious physical or emotional harm, sexual abuse or exploitation; or
An act or failure to act which presents an imminent risk of serious harm.
Under US federal law, neglect is divided into a number of different categories. The
most relevant for our purposes, and one that does not appear under UK or
European law, is emotional or psychological abuse. Emotional or psychological
abuse is defined as "a pattern of behavior that impairs a child's emotional development
or sense of self-worth". This may include constant criticism, threats, or rejection,
as well as withholding love, support, or guidance. Emotional abuse is often
difficult to prove and, therefore, child protective services may not be able to intervene
without evidence of harm or mental injury to the child. "Emotional abuse is
almost always present when other forms are identified." (What is Child Abuse and
Neglect Factsheet).
Although much of the research on child–robot interaction has been conducted
in the USA, the main manufacturers, and currently the main target audience, are in
Japan and South Korea. As in the other countries mentioned, the only legislation
available to protect Japanese children from overextended care by robots is
the Child Abuse Prevention Law 2000. "The Law defines child abuse and neglect
into four categories: (i) causing external injuries or other injuries by violence;
(ii) committing acts of indecency on a child or forcing a child to commit indecent
acts; (iii) neglecting a child's needs such as meals, leaving them for a long time,
etc.; and (iv) speaking and behaving in a manner which causes mental distress for
a child." (Nakamura, 2002).
In South Korea it may be harder to prevent the use of extended robot childcare.
Hahm and Guterman (2001) point out that "South Korea has had a remarkably
high incidence and prevalence rates of physical violence against children,
yet the problem has received only limited public and professional attention until
very recently" (p. 169). The problem is that "South Koreans strongly resist interference
in family lives by outsiders because family affairs, especially with regard
to child-rearing practices, are considered strictly the family's own business." The
one place where it might be possible to secure a legal case against near-exclusive
care by robots is in the recently revised Special Law for Family Violence Criminal
Prohibition (1998). This includes the Child Abuse and Neglect Prevention Act,
which is similar to the laws of other civilised countries: "the new law recognises
that child maltreatment may entail physical abuse, sexual abuse, emotional abuse
or neglect".
In the UK, a case against robot care would have to be built on provisions
in the Children and Young Persons Act (1933, with recent updates) concerning
leaving a child unsupervised "in a manner likely to cause unnecessary suffering or
injury to health". The law does not even specify at what age a person can be a
babysitter; it only states that when a babysitter is under the age of 16, the parents of
the child being "sat" are legally responsible for ensuring that the child does not come
to harm.
Under UK law, a child does not have to suffer actual harm for a case of neglect
to be brought. It is sufficient to show that the child has been kept in "a manner
likely to cause him unnecessary suffering and injury to health", as in the case of R v
Jasmin, L (2004) 1CR, App.R (s) 3. The Appellants had gone to work for periods of
up to 3 hours, leaving their 16-month-old child alone in the home. This happened
on approximately three separate occasions. The Appellants were both found guilty
of offences relating to neglect contrary to S1(1) Children and Young Persons Act
1933 and were sentenced to concurrent terms of 2 years' imprisonment. Summing
up, Lord Justice Law said that there was "no evidence of any physical harm
resulting from this neglect [but] … both parents had difficulty in accepting the
idea that their child was in any danger".
The outcome would have been different if the parents had left the child alone
in exactly the same way but had stayed at home in a different room. If they could
have shown that they were monitoring the child with a baby monitor (and perhaps
a CCTV camera), the case against them would have been weak, and it is highly
unlikely that they would have been prosecuted.
This case is relevant to the protection of children against robot care because
near-future robots, as discussed earlier, could provide safety from physical harm
and allow remote monitoring, combined with autonomous alerting and a way for
parents to communicate remotely with their children. The mobile remote
monitoring available on a robot would be significantly better than a static camera
and baby monitor. If absent parents had such a robot system and could reach the
child within a couple of minutes, it would be difficult to prove negligence. The
time taken to get home is probably crucial. We could play the game of gradually moving
the parents' place of work further and further away to find a threshold time of
permissibility. It then becomes like asking how many hairs must be removed from
someone's head before they can be called bald. These are the kinds of issues that
will only be decided by legal precedent.
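The sorites problem here can be made concrete. Below is a minimal sketch, in Python, of the kind of monitoring-and-alerting loop described above; detect_hazard, alert_parent, escalate and the two-minute threshold are all invented placeholders, not features of any real product. Wherever the threshold constant is set, a response time just above it is not morally different from one just below it, which is exactly why such lines will have to be drawn by legal precedent rather than derived from principle.

```python
# A hedged sketch of an autonomous monitoring-and-alerting loop, assuming
# invented placeholder functions throughout. The point illustrated is that
# MAX_RESPONSE_MINUTES is an arbitrary "threshold of permissibility".

import time

MAX_RESPONSE_MINUTES = 2.0  # arbitrary cut-off; no principled value exists


def detect_hazard(frame) -> bool:
    """Placeholder: classify the current scene as safe or unsafe."""
    return False  # a real hazard classifier would go here


def alert_parent(message: str) -> float:
    """Placeholder: notify the parent; return their travel time home, in minutes."""
    return 1.5


def escalate(message: str) -> None:
    """Placeholder: contact another designated adult or emergency services."""
    print("ESCALATING:", message)


def monitoring_loop(camera) -> None:
    """Follow the child, watch for danger, and alert the absent parent."""
    while True:
        frame = camera.read()
        if detect_hazard(frame):
            eta = alert_parent("possible danger detected")
            # A parent 2.1 minutes away fails the test that a parent
            # 2.0 minutes away passes: the cut-off is sorites-vulnerable.
            if eta > MAX_RESPONSE_MINUTES:
                escalate("parent cannot reach the child in time")
        time.sleep(1.0)
```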
Another important question about robot care is: who would be responsible
and accountable for psychological and emotional harm to the child? Under current
legislation it would be the parents or primary carers. But it may not be fair to
hold parents or primary carers entirely responsible. Assuming that the robot could
demonstrably keep the child safe from physical harm, the parents may have been
misled by the nature of the product. For example, if a carer's anthropomorphism
had been amplified as a result of some very clever robot–human interaction, then
that carer may have falsely believed that the robot had mental states and could
form 'real' relationships.
This leads to problems in determining accountability beyond the primary
carer. Allocating responsibility to the robot would be ridiculous; that would be
like holding a knife responsible for a murder. We are not talking about hypothetical
sentient robots here. But blaming others also has its difficulties. There is a
potentially long chain of responsibility that may involve the carer, the manufacturer
and a number of third parties, such as the programmers and the researchers who
developed the kit. This is yet another of the many reasons why there is a need to
examine the ethical issues before the technology is developed for the mass market.
Codes of practice, and even legislation, are required to ensure that advertising
claims are realistic and that the product carries warnings about the potential dangers
of overuse.
If a case of neglect arising from robot care is eventually brought to court, a large
corporation with commercial interests may put the finest legal teams to work.
Their argument could be based on demonstrating that a robot could both keep a
child safe from physical harm and alert a designated adult about imminent dangers
in time for intervention. It would be more difficult to prove emotional harm,
because many children have emotional problems regardless of their upbringing.
Pathological states can be genetic in origin or result from prenatal brain damage,
among other possible causes. Thus a legal case of neglect is most likely to be won if
an infant or a baby is discovered at home alone with an unsafe robot.
Conclusions
We have discussed a trajectory for childcare robotics that appears to be moving
towards sustained periods of care, with the possibility of near-exclusive care. We
examined how childcare robots could be developed to keep children safe from
physical harm. Then we looked at research that showed children forming relationships
and friendships with robots, and how they came to believe that the robots
had mental states. After that, we examined the functionality of current childcare
robots and discussed how this could be extended in the near future to create
more 'realistic' interactions between children and robots, and intensify the illusion
of genuine relationships.
Our main focus throughout has been on the potential ethical risks that robot
childcare poses. The ethical problems discussed here could be among those that
society will have to solve over the next 20 years. The main issues and questions we
raised were:
Privacy: Every child has a right to privacy under Articles 16 and 40 of the
UN Convention on the Rights of the Child. How much would the use of robot
nannies infringe these rights?
Restraint: There are circumstances where a robot could keep a child from
serious physical harm by restraining her. But how much autonomous decision-making
authority should we give to a robot childminder?
Deception: Is it ethically acceptable to create a robot that fools people into
believing that it has mental states and emotional understanding? In many
circumstances this can be considered natural anthropomorphism, illusion
and fun pretence. Our concerns are twofold: (i) it could lead parents to
overestimate the capabilities of a robot carer and to imagine that it could meet the
emotional needs of a child; and (ii) it could lure a child into a false relationship
that may damage her emotionally and psychologically if the robot is
overused for her care.
Accountability: Who is morally responsible for leaving children in the care
of robots? The law on neglect puts the duty of care on the primary carer. But
should the primary carer shoulder the whole moral burden, or should others,
such as the manufacturers, take some share of the responsibility?
Psychological damage: Is it ethically acceptable to use a robot as a nanny
substitute or as a primary carer? This was the main question explored. If our
analysis of the potentially devastating psychological and emotional harm that
could result is correct, then the answer is a resounding 'no'.
In our exploration of the developmental difficulties that could be caused by robot
care, we have assumed that it would be regular, daily and possibly near-exclusive
care. We also discussed evidence that part-time outside care can cause children
minor harm from which they can later recover. Realistically, a couple of hours a day
in the care of a robot is unlikely to be any more harmful than watching television,
provided we are careful about what we permit the robot to do. We just don't know
whether there is a continuum between the problems that could arise with exclusive
care and those that may arise with regular short-term care.
In a brief overview of international laws, we found that the main legal protection
children have is under the laws of neglect. A major concern was that as
robots become safer, protect children from physical harm and ensure that they
are fed and watered, it will become harder to make a case for neglect. However,
the quality of robot interaction we can expect, combined with the evidence from
developmental studies on attachment, suggests that robots would at best be insensitive
carers, unable to respond with sufficient attention to the finely detailed needs of
individual children.
As we stated at the outset, we are seeking discussion of these matters rather
than attempting to offer answers or solutions. The robotics community needs to
consider questions like the ones we have raised, and take them up, where possible,
with their funders, the public and policy makers. Ultimately, it will be up to society,
the legislature and professional bodies to provide codes of conduct to deal with
future robot childcare.
Note
1. Maternal sensitivity is a term used even when the primary carer is not the "mother".
References
Ainsworth, M., Blehar, M., Waters, E. & Wall, S. (1978). Patterns of attachment: a psychological
study of the strange situation. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Ainsworth, M.D.S., Bell, S.M., & Stayton, D.J. (1974). Infant-mother attachment and social
development: Socialisation as a product of reciprocal responsiveness to signals. In
M.P.M. Richards (Ed.), The introduction of the child into a social world. London: Cambridge
University Press.
Bartlett, M.S., Littlewort-Ford, G.C., & Movellan, J.R. (2008). Computer Expression Recogni-
tion Toolbox. Demo: 8th International IEEE Conference on Automatic Face and Gesture
Recognition. Amsterdam.
Belsky, J., Vandell, D.L., Burchinal, M., Clarke-Stewart, K.A., McCartney, K., Owen, M.T. (2007).
Are there long-term effects of early child care? Child Development, 78 (2), 681–701.
Bittybobo. URL: http://bittybobo.blogspot.com/search?updated-min=2008-01-01T00%3A00%
3A00-08%3A00&updated-max=2009-01-01T00%3A00%3A00-08%3A00&max-results=23,
comments under April 17, 2008, last accessed 15 January 2010.
Blum, D. (2003). Love at Goon Park: Harry Harlow and the Science of Affection. John Wiley:
Chichester, England.
Bowlby, J. (1969). Attachment and Loss: Volume 1: Attachment. London: Hogarth Press.
Bowlby, J. (1980). Attachment and Loss: Volume 3: Loss. London: Hogarth Press.
Bowlby, J. (1998). (edition originally 1973) Attachment and Loss: Volume 2: Separation, anger
and anxiety. London: Pimlico.
Caggiano, V., Fogassi, L., Rizzolatti, G., Thier, P., & Casile, A. (2009). Mirror neurons differentially
encode peripersonal and extrapersonal space of monkeys. Science, Vol. 324, pp. 403–406.
Cayton, H. (2006). From childhood to childhood? Autonomy and dependence through the ages
of life. In Julian C. Hughes, Stephen J. Louw, Steven R. Sabat (Eds) Dementia: mind, mean-
ing, and the person, Oxford, UK: Oxford University Press 277–286.
Children and Young Persons Act 1933. UK Statute Law Database, Part 1 Prevention of cruelty
and exposure to moral and physical danger: Offences: 12 Failing to provide for safety of
children at entertainments. URL: http://www.statutelaw.gov.uk/legResults.aspx?LegType=
All+Legislation&searchEnacted=0&extentMatchOnly=0&confersPower=0&blanketAme
ndment=0&sortAlpha=0&PageNumber=0&NavFrom=0&activeTextDocId=1109288, last
accessed 15 January 2010.
Chugani, H., Behen, M., Muzik, O., Juhasz, C., Nagy, F. & Chugani, D. (2001). Local brain
functional activity following early deprivation: a study of post-institutionalised Romanian
orphans. Neuroimage, 14: 1290–1301.
Cramer, H.S., Kemper, N.A., Amin, A., & Evers, V. (2009). The effects of robot touch and
proactive behaviour on perceptions of human–robot interactions. In Proceedings of the
4th ACM/IEEE international Conference on Human Robot interaction (La Jolla, California,
USA, March 09–13, 2009). HRI ‘09. ACM, New York, NY, 275–276.
Dautenhahn, K., Werry, I. (2004). Towards Interactive Robots in Autism Therapy: Background,
Motivation and Challenges. Pragmatics and Cognition 12(1), pp. 1–35.
Dautenhahn, K. (2003). Roles and Functions of Robots in Human Society – Implications from
Research in Autism Therapy. Robotica 21(4), pp. 443–452.
Dettling, A., Parker, S., Lane, S., Sebanc, A., & Gunnar, M. (2000). Quality of care determines
whether cortisol levels rise over the day for children in full-day childcare. Psychoneuroen-
docrinology, 25, 819–836.
Ekman P. & Friesen, W. (1978). Facial Action Coding System: A Technique for the Measurement
of Facial Movement, Consulting Psychologists Press, Palo Alto, CA.
Elicker, J., Fortner-Wood, C., & Noppe, I.C. (1999). The context of infant attachment in family
child care. Journal of Applied Developmental Psychology, 20, 2, 319–336.
Feinman, S., Roberts, D., Hsieh, K.F., Sawyer, D. & Swanson, K. (1992), A critical review of
social referencing in infancy, in Social Referencing and the Social Construction of Reality in
Infancy, S. Feinman, Ed. New York: Plenum Press.
Fice Bulletin (1998). A Code of Ethics for People Working with Children and Young People. URL:
http://www.ance.lu/index.php?option=com_content&view=article&id=69:a-code-of-
ethics-for-people-working-with-children-and-young-people&catid=10:ce-declaration-
2006&Itemid=29, last accessed 15 January 2010.
Fonagy, P. (2003). The development of psychopathology from infancy to adulthood: The
mysterious unfolding of disturbance in time. Infant Mental Health Journal Volume 24,
Issue 3, Date: May/June 2003, Pages: 212–239.
Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A Survey of Socially Interactive Robots,
Robotics and Autonomous Systems 42(3–4), 143–166.
Gallese, V. (2001). The shared manifold hypothesis: From mirror neurons to empathy. Journal of
Consciousness Studies, 8, 33–50.
Gergely, G., & Watson, J. (1996). The social biofeedback model of parental affect-mirroring.
International Journal of Psycho-Analysis, 77, 1181–1212.
Gergely, G., & Watson, J. (1999). Early social-emotional development: Contingency perception
and the social biofeedback model. In P. Rochat (Ed.), Early social cognition: Understanding
others in the first months of life (pp. 101–137). Hillsdale, NJ: Erlbaum.
Gerhardt, S. (2004). Why love matters: how affection shapes a baby's brain. Routledge Taylor and
Francis Group, London and New York.
Goris, K., Saldien, J., Vanderniepen, I., & Lefeber, D. (2008). The Huggable Robot Probo, a
Multi-disciplinary Research Platform. Proceedings of the EUROBOT Conference 2008,
Heidelberg, Germany, 22–24 May, 2008, pages 63–68.
Goris, K., Saldien, J., & Lefeber, D. 2009. Probo: a testbed for human robot interaction. In
Proceedings of the 4th ACM/IEEE International Conference on Human Robot interaction
(La Jolla, California, USA, March 09–13, 2009). HRI ‘09. ACM, New York, NY, 253–254.
Guo, S. & Zhang, G. (2009). Robot Rights, Letter to Science, 323, 876.
Hahm, H.C., Guterman N.B. (2001). The emerging problem of physical child abuse in South
Korea. Child maltreatment 6(2): 169–79.
Hello kitty web reference
URL: http://www.dreamkitty.com/Merchant5/merchant.mvc?Screen=PROD&Store_Code=
DK2000&Product_Code=K-EM070605&Category_Code=HKDL.
Herrmann, E., Call, J., Hare, B., & Tomasello, M. (2007). Humans have evolved specialized skills of
social cognition: The cultural intelligence hypothesis. Science, 317(5843), 1360–1366.
Hertenstein, M.J. (2002). Touch: its communicative functions in infancy, Human Development,
45, 70–92.
Hertenstein, M.J., Verkamp, J.M., Kerestes, A.M., & Holmes, R.M. (2006). The communicative
functions of touch in humans, nonhuman primates, and rats: A review and synthesis of the
empirical research, Genetic, Social & General Psychology Monographs, 132(1), 5–94.
Hornik, R., Risenhoover, N., & Gunnar, M. (1987). The effects of maternal positive, neutral, and
negative affective communications on infant responses to new toys. Child Development,
58, 937–944.
Hutchinson, W., Davis, K., Lozano, A., Tasker, R., & Dostrovsky, J. (1999). Pain-related neurons
in the human cingulate cortex. Nature Neuroscience, 2, 403–5.
Kahn, P.H., Jr., Friedman, B., Perez-Granados, D., & Freier, N.G. (2006). Robotic pets in the lives
of preschool children. Interaction Studies, 7(3), 405–436.
Kanda, T., Takuyaki, H., Eaton, D. & Ishiguro, H. (2004). Interactive robots as social partners
and peer tutors for children: a field trial, Human Computer Interaction, 19, 61–84.
Kanda, T., Nishio, S., Ishiguro, H., & Hagita, N. (2009). Interactive Humanoid Robots and
Androids in Children’s Lives. Children, Youth and Environments, 19 (1), 12–33. Available
from: www.colorado.edu/journals/cye.
Lewis, L. (2007). The robots are running riot! Quick, bring out the red tape, The Times Online,
April 6th. URL: http://www.timesonline.co.uk/tol/news/world/asia/article1620558.ece, last
accessed 15 January 2010.
Liu, C., Conn, K., Sarkar, N., & Stone, W. (2008). Online affect detection and robot behaviour
adaptation for intervention of children with autism. IEEE Transactions on Robotics, Vol. 24,
Issue 4, pp. 883–896.
Lopes, M.M., Koenig, N.P., Chernova, S.H., Jones, C.V., & Jenkins, O.C. (2009). Mobile human-
robot teaming with environmental tolerance. In Proceedings of the 4th ACM/IEEE inter-
national Conference on Human Robot interaction (La Jolla, California, USA, March 09–13,
2009). HRI ‘09. ACM, New York, NY, 157–164.
Marti, P., Palma, V., Pollini, A., Rullo, A. & Shibata, T. (2005). My Gym Robot, Proceedings of the
Symposium on Robot Companions: Hard Problems and Open Challenges in Robot–Human
Interaction, pp.64–73.
Martins, C. & Gaffan, E.A. (2000). Effects of early maternal depression on patterns of infant-
mother attachment: A meta-analytic investigation, Journal of Child Psychology and Psychiatry
42, pp. 737–746.
Mason, W.A. & Berkson, G. (1975). Effects of Maternal Mobility on the Development of Rocking
and Other Behaviors in Rhesus Monkeys: A Study with Artificial Mothers. Developmental
Psychobiology 8, 3, 197–211.
Mason, W.A. (2002). The Natural History of Primate Behavioural Development: An Organismic
Perspective. In Eds. D. Lewkowicz & R. Lickliter, Conceptions of Development: Lessons from
the Laboratory. Psychology Press. 105–135.
Mavridis, N., Chandan, D., Emami, S., Tanoto, A., BenAbdelkader, C. & Rabie, T. (2009).
FaceBots: Robots Utilizing and Publishing Social Information in Facebook. HRI’09,
March 11–13, 2009, La Jolla, California, USA. ACM 978-1-60558-404-1/09/03.
Melson, G.F., Kahn, P.H., Jr., Beck, A.M., & Friedman, B. (in press a). Robotic pets in human
lives: Implications for the human-animal bond and for human relationships with personified
technologies. Journal of Social Issues.
Melson, G.F., Kahn, P.H., Jr., Beck, A.M., Friedman, B., Roberts, T., Garrett, E., & Gill, B.T.
(in press b). Robots as dogs? – Children’s interactions with the robotic dog AIBO and a live
Australian shepherd. Journal of Applied Developmental Psychology.
Mercer, J. (in press for 2010). Attachment theory. Theory and Psychology.
Meins, E., Fernyhough, C., Fradley, E. & Tuckey, M. (2001). Rethinking maternal sensitivity:
mothers’ comments on infants’ mental processes predict security of attachment at 12 months.
Journal of Child Psychology and Psychiatry 42, pp. 637–48.
Meltzo, A.N. (1995). Understanding the intention of others: Re-enactment of intended acts by
18 month old children. Developmental Psychology 32, 838–850.
Meltzo, A.N. & Moore, M.K. (1977). Imitation of facial and manual gestures by human neonates.
Science, 198, 75–78.
Mitsunaga, N., Miyashita, T., Ishiguro, H., Kogure, K., & Hagita, N. (2006). Robovie-IV:
A Communication Robot Interacting with People Daily in an Office, In Proc of IROS,
5066–5072.
Nakamura, Y. (2002). Child abuse and neglect in Japan, Paediatrics International, 44, 580–581.
Nelson, C.A., Zeanah, C.H., Fox, N.A., Marshall, P.J., Smyke, A.T. & Guthrie, D. (2007). Cogni-
tive recovery in socially deprived young children: The Bucharest early intervention project.
Science, 318, no 5858, pp. 1937–1940.
Orpwood, R., Adlam, T., Evans, N., Chadd, J. (2008). Evaluation of an assisted-living smart
home for someone with dementia. Journal of Assistive Technologies, 2, 2, 13–21.
Rocks, C.L., Jenkins, S., Studley, M. & McGoran, D. (in press). 'Heart Robot', a public engagement
project. Robots in the Wild: Exploring Human–Robot Interaction in Naturalistic Environ-
ments. Special Issue of Interaction Studies.
Rushton, A. & Mayes, D. 1997. Forming Fresh Attachments in Childhood: A Research Update.
Child and Family Social Work 2(2): 121–127.
Sharkey, N. (2008a). The Ethical Frontiers of Robotics, Science, 322, 1800–1801.
Sharkey, N (2008b). Cassandra or False Prophet of Doom: AI Robots and War, IEEE Intelligent
Systems, vol. 23, no. 4, 14–17, July/August Issue.
Sharkey, N. & Sharkey, A. (in press) Living with robots: ethical tradeoffs in eldercare, In Wilks, Y.
Artificial Companions in Society: scientific, economic, psychological and philosophical perspec-
tives. Amsterdam: John Benjamins.
Sharkey, N., & Sharkey, A. (2006). Artificial Intelligence and Natural Magic, Artificial Intelligence
Review, 25, 9–19.
Shibata, T., Mitsui, T., Wada, K., Touda, A., Kumasaka, T., Tagami, K. & Tanie, K. (2001). Mental
Commit Robot and its Application to Therapy of Children, Proc. of 2001 IEEE/ASME Int.
Conf. on Advanced Intelligent Mechatronics, pp.1053–1058.
Schore, A. (2001). The effects of early relational trauma on right brain development, affect regu-
lation, and infant mental health. Infant Mental Health Journal, 22, 1–2, pp. 201–69.
Sorce, J.F., Emde, R.N., Campos, J., & Klinnert, M.D. (1985). Maternal emotional signaling:
Its effect on the visual cliff behavior of 1-year-olds. Developmental Psychology, 21(1),
195–200.
Solomon, J. & George, C. (eds) (1999). Attachment Disorganisation, New York: Guilford Press.
Sparrow, R. (2002). The March of the Robot Dogs, Ethics and Information Technology, Vol. 4.
No. 4, pp. 305–318.
Stiehl, W.D., Lieberman, J., Breazeal, C., Basel, L., & Lalla, L. (2005). The Design of the Huggable:
A Therapeutic Robotic Companion for Relational, Affective Touch. AAAI Fall Symposium
on Caring Machines: AI in Eldercare, Washington, D.C.
Stiehl, W.D., Breazeal, C., Han, K., Lieberman, J., Lalla, L., Maymin, A., Salinas, J., Fuentes, D.,
Toscano, R., Tong, C.H., & Kishore, A. 2006. The huggable: a new type of therapeutic
robotic companion. In ACM SIGGRAPH 2006. Sketches (Boston, Massachusetts, July 30–
August 03, 2006). SIGGRAPH ‘06. ACM, New York, NY, 14.
Swartout-Corbeil, D.M. (2006). Attachment between infant and caregiver, In The Gale Encyclopedia
of Children's Health: Infancy through Adolescence, MI: The Gale Group.
Tanaka F., Cicourel, A. & Movellan, J.R. (2007). Socialization Between Toddlers and Robots
at an Early Childhood Education Center. Proceedings of the National Academy of Sciences.
Vol 104, No 46, 17954–17958.
Turkle, S., Taggart, W., Kidd, C.D., Dasté, O. (2006a). Relational Artifacts with Children and
Elders: The Complexities of Cybercompanionship. Connection Science, 18, 4, pp. 347–362.
Turkle, S., Breazeal, C., Dasté, O., & Scassellati, B., (2006b). First Encounters with Kismet and
Cog: Children Respond to Relational Artifacts. In Digital Media:
Transformations in Human Communication, Paul Messaris & Lee Humphreys (eds.). New York:
Peter Lang Publishing.
United Nations Convention on the Rights of the Child, URL: http://www2.ohchr.org/english/
law/crc.htm, last accessed 15 January 2010.
Veruggio, G. (2006). The EURON roboethics roadmap, 6th IEEE-RAS International Conference
on Humanoid Robots, 612–617.
What is Child Abuse and Neglect Factsheet, URL: http://www.childwelfare.gov/pubs/factsheets/
whatiscan.cfm, last accessed 15 January 2010.
Wallach, W., & Allen, C. (2009). Moral Machines: Teaching Robots Right from Wrong, Oxford
University Press, New York.
Woodward, A.L. & Sommerville, J.A. (2000). Twelve-month-old infants interpret action in context.
Psychological Science, 11, 73–77.
Yohanan, S., & MacLean, K.E. (2008). e Haptic Creature Project: Social Human–Robot Inter-
action through Affective Touch. In Proceedings of the AISB 2008 Symposium on the Reign of
Catz & Dogs: The Second AISB Symposium on the Role of Virtual Creatures in a Computer-
ised Society, volume 1, pages 7–11, Aberdeen, Scotland, UK, April, 2008.
Yohanan, S., Chan, M., Hopkins, J., Sun, H., & MacLean, K. (2005). Hapticat: Exploration of
Aective Touch. In ICMI ‘05: Proceedings of the 7th International Conference on Multi-
modal Interfaces, pages 222–229, Trento, Italy, 2005.
Yoon-mi, K. (2007). Korea drafts Robot Ethics Charter, The Korea Herald, April 28.
Yoshiro, U., Shinichi, O., Yosuke, T., Junichi, F., Tooru, I., Toshihro, N., Tsuyoshi, S., Junichi, O,
(2005). Childcare Robot PaPeRo is designed to play with and watch over children at nurs-
ery, kindergarten, school and at home. Development of Childcare Robot PaPeRo, Nippon
Robotto Gakkai Gakujutsu Koenkai Yokoshu, 1–11.
Zeanah, C.H., Boris, N.W. & Lieberman, A.F. (2000). Attachment disorders of Infancy. In
Arnold J. Sameroff, Michael Lewis, Suzanne Melanie Miller (Eds) Handbook of develop-
mental psychopathology, Birkhäuser, 2nd Edition.
Authors’ address
Noel Sharkey & Amanda Sharkey
Department of Computer Science
University of Sheffield
Regent Court
211 Portobello
Sheeld, S1 4DP
UK
Email: noel@dcs.shef.ac.uk, amanda@dcs.shef.ac.uk