Perspectives on Social Robots. From the Historic
Background to an Experts’ View on Future Developments
Oliver Korn
Offenburg University
Badstr. 24, 77652 Offenburg,
Germany
oliver.korn@acm.org
Gerald Bieber
Fraunhofer IGD
Joachim-Jungius-Str. 11,
18059 Rostock, Germany
gerald.bieber@igd-r.fraunhofer.de
Christian Fron
University of Heidelberg
Marstallhof 4, 69117 Heidelberg,
Germany
christian.fron@zaw.uni-heidelberg.de
ABSTRACT
Social robots are robots interacting with humans not only in
collaborative settings, but also in personal settings like domestic
services and healthcare. Some social robots simulate feelings (companions) while others simply provide physical help like lifting (assistants). However,
they often incite both fascination and fear: what abilities should
social robots have and what should remain exclusive to humans?
We provide a historical background on the development of robots
and related machines (1), discuss examples of social robots (2) and
present an expert study on their desired future abilities and
applications (3) conducted within the Forum of the European
Active and Assisted Living Programme (AAL).
The findings indicate that most technologies required for the social
robots’ emotion sensing are considered ready. For care robots, the
experts approve health-related tasks like drawing blood while they
prefer humans to do nursing tasks like washing. On a larger societal
scale, the acceptance of social robots increases highly significantly
with familiarity, making health robots and even military drones
more acceptable than sex robots or child companion robots for
childless couples. Accordingly, the acceptance of social robots
seems to decrease with the level of face-to-face emotions involved.
CCS Concepts
• Human-centered computing~Empirical studies in HCI
• Human-centered computing~Collaborative and social
computing devices • Human-centered computing~User studies
• Human-centered computing~Empirical studies in interaction design
• Human-centered computing~Accessibility theory, concepts and
paradigms • Human-centered computing~Accessibility systems
and tools • Social and professional topics~History of hardware
• Social and professional topics~Codes of ethics • Social and
professional topics~Assistive technologies • Computing
methodologies~Cognitive robotics • Computing
methodologies~Robotic planning • Applied
computing~Consumer health
Keywords
Social Robots; Robotics; Assistive Robotics; Companion Robot;
Artificial Intelligence; Technology Acceptance
1. INTRODUCTION
The term “robot” was probably first used in Karel Čapek’s play R.U.R. (Rossum’s Universal Robots) from the 1920s. It is derived from the Czech word “robota”, which means forced labor.
Already in this play, the robots revolt against their creators.
The concept of automated human-like machines has always incited
both fear and fascination – even in the antique roots described in
the section Background. This cocktail of fear and fascination becomes more intense the closer humans and robots get. And
robots have long left the cage of industrial settings: they work
together with humans – collaboratively. As Elprama & El Makrini
show, this is not always appreciated by factory workers [4].
However, today robots come even closer, following humans into
personal settings like home and healthcare.
Figure 1: The idea of robots and humans working hand in hand (collaboratively) or robots and humans interacting in personal settings is not new but dates back to antiquity.
In such intimate settings, “social robots” are required. While
assistive social robots focus on service functions (e.g. helping to lift
elderly persons), companion robots focus more on the emotional
aspects of interaction [1]. In any case, such robots have to look
harmless and friendly (Figure 1) to reduce fear. They should be able
to respond adequately to human behavior and ideally also to human
emotions: a depressive patient needs to be addressed differently than an athlete recovering from a leg fracture.
However, should social robots also simulate emotions (like
smiling) to ease nonverbal communication? As they do not feel
emotions as humans do (lacking their biological substrate of a brain
made of neurons and a nervous system spreading throughout a
body), showing or simulating feelings could be considered lying.
In his paper and the corresponding video “Hello World”, first shown at the CHI conference 2015 in Seoul [16], Kyle Overton reflects on this ambiguity of human expectations: while machines are expected to learn about humans and augment them in every possible way, they “understand nothing” because they “can’t feel the loss of a loved one”. In the authors’ opinion, it is questionable whether such “true emotions” – even if they could eventually be created – should ever be within the design space of computer scientists and engineers.
Several scientists have made the claim that “ethically correct
robots” should be able to reason about right and wrong. De Lima Salge and Berente recommend creating conversational rules [12] and Lokhorst even
presents a framework for “Computational Meta-Ethics” [14].
We think that ethical decisions are almost impossible to implement
without an emotional substrate corresponding to that of a human.
So again the question remains: even if we could design and build
emotion-sensing robots capable of ethical reasoning – do they also reflect a human desire, do we want them?
To get a better understanding of what future social robots should be
like, we first take a deep look into the past, providing a historical
perspective on robots and automated machinery in section 2
(Background). We then look at the present and discuss examples of
social robots in section 3 (Related Work). Finally, in section 4
(Study) we look into the future and present findings on the desired
appearance and abilities of social robots, gained in a study with 20
experts at the Forum of the European Active and Assisted Living
Programme. We sum up the findings in section 5 (Conclusion).
2. HISTORICAL BACKGROUND
In the broadest understanding of the term, “social robots” represent
the universal longing to create a model of humans to match personal
desires and necessities. The ancient author Ovid (43 BC - 17 AD)
describes an early example in his book Metamorphoses
(Transformations). Right after the story about the transformation of the ruthless Propoetides into stone as part of a divine penalty, Ovid tells the story of Pygmalion of Cyprus: repelled by the imperfection of women, he focuses all his passion and time on creating a statue of his female ideal. He falls in love with his creation, treats it like a real female companion, offers gifts and shares his bed with it – comparable to today’s users of sex robots like “Synthea Amatus” (see section 3.1, Related Work).
It is only thanks to the help and mercy of the Cyprian goddess
Venus that the statue becomes alive. In contrast to the later versions
of the story, which add elements of tragedy, the original version has
a “happy ending”: the birth of their child Paphos. Nevertheless, the
story’s moral is quite clear: human creativity and spirit of invention
are limited to the idealized figural imitation of life while the creation
of life itself depends on divine impetus. In the same way, the giant bronze automaton Talos, a mythological guardian protecting the island of Crete and Europa, Zeus’ beloved, was alive only by divine authority: it was created by Hephaestus, the god of craftsmanship [17]. As a 1920 artistic illustration shows (Figure 2), Talos was envisioned as a robot.
Throughout classical antiquity, there is a strong connection
between the creation of “artificial beings” or automats through
mechanics and the divine. Statues resembling deities and ideals of human beauty in several cases featured complex mechanics. For example, the sculptor and bronze caster Canachos of Sikyon added a mechanically movable deer to his bronze statue of Apollo in Branchidai [23]. It also seems highly probable that the bronze statue
of Diana on one of the luxurious Nemi ships, built for the emperor
Caligula, was standing on a rotating platform [26].
Figure 2: Talos of Crete may well be the first “robot”,
although it was created by a god. Illustration from 1920 by
Sybil Tawse, Public Domain.
Moreover, a crane-like stage machinery was invented for Attic tragedy to allow “deities” to suddenly appear in the air on stage
[9]. Heron of Alexandria describes several other automatic
machines in the context of ancient religious life:
• two statues automatically giving libations, whenever there is
an incense offering [21]
• automatically opening doors of temples [21]
• a statue of Hercules shooting a snake statue on a tree with an
arrow [21], and many more.
All these automats were designed to amaze the audience and
illustrate divine power. They remained instruments of fascination rather than of innovation or technical progress.
This attitude hardly changed throughout the centuries. Even in the
late 18th century, a time when natural sciences already were well
established, society was surprisingly willing to accept automated
machinery with unlikely abilities. An excellent example of this
attitude is the success of the “Mechanical Turk”.
Figure 3: The fake “Mechanical Turk” fascinated Europeans
and Americans. Copper engraving from 1783, Public Domain.
The mechanical Turk or “automaton chess player” was fake: a
human chess master hid inside, operating the machine. However,
the audience did not know that, and the machine’s creator
Wolfgang von Kempelen toured with it throughout Europe and even the United States, showing it to nobility and political
leaders like Napoleon Bonaparte and Benjamin Franklin.
We think that even today, many people look at robots and automats
with a special kind of fascination and a willingness to attribute
physical and mental skills far beyond the machines’ actual capabilities.
In the Related Work section, we will discuss examples of this.
3. RELATED WORK
In ISO 8373:2012, “service robots” are defined as robots that
“perform useful tasks for humans, which aid in physical tasks such
as helping people to move around”. In contrast to service robots,
social robots are designed to communicate with persons. Chatbots
or avatars are also designed for that purpose, but a social robot is
physically embodied. It interacts and communicates with humans
or other autonomous physical agents by following social behaviors
and rules. In this section, we present examples of social robots as
well as a common model for studying technology acceptance.
3.1 Examples of Social Robots
A natural and intuitive communication between humans and machines occurs when social robots resemble animals. Accordingly, robotic pets have been popular social robots. In 1999, Sony introduced AIBO (ERS-110) as a touch-sensitive, interactive pet. It was designed
as a friend and partner for entertainment. Indeed, after more than a
decade, Sony has decided to resurrect the iconic robot pet. In a press
release from November 2017 [5], the new model is simply called
“aibo” (ERS-1000). While AIBO is definitely a toy, there are more
intricate solutions, blurring the border between serious and
entertainment applications.
The Nao Robot (Figure 1), developed in 2006 by Aldebaran
Robotics, looks like a toy. However, it can be programmed to show
complex behaviors and interactions, even mimicking human
behavior. One of its major applications was optimizing walk
engines [24]. Interestingly, as a programmable humanoid platform it proved to be so flexible that it has even been tested in human-robot interaction therapy with autistic children [22].
This borderline between programmable toy or pet and useful robotic machine is perfectly illustrated by another popular case. Since 2009, the social robot Paro (Figure 4) has supported therapy and care. As an artificial harp seal, it was designed for elderly people and patients in hospitals and nursing homes.
Figure 4: Paro is designed to interact with elderly patients of
hospitals and nursing homes. Courtesy of Aaron Biggs.
Paro responds to petting with body movements, by opening and
closing its eyes as well as with sounds. It even squeals if handled
too strongly. Studies show positive effects on older adults’ activity
[15, 19].
In nursing homes, not only communication and social interaction are needed: patients regularly require substantial physical support, e.g. help getting up or being repositioned in bed. The caregiver robot Robear supports patients and caregivers with physical strength. It has been given a cartoon look to avoid the “uncanny valley” effect [2] that causes humans to react badly to not-quite-convincing humanoids. The Japanese research institute Riken developed Robear in 2015, but it is not commercially used and has barely been the subject of published studies so far.
Figure 5: Robear holding a person. Courtesy of Riken.
In healthcare and elderly care, robots can be considered a new tool:
they improve on current instruments. A meta-study on “assistive social robots in elderly care” was provided in 2009 by Broekens et al. [1]. A recent overview of healthcare robotics is given by Riek [18].
However, what happens if social robots leave these secluded “professional” areas and enter the private homes of healthy people, offering domestic support? Surely, a “robotic butler” should comply with established rules of good service, like reliability, discretion and inconspicuous service. The Care-O-bot, developed
by Fraunhofer IPA [7], is an example of a mobile robot assistant,
which actively supports humans in domestic environments.
Figure 6: Fraunhofer designed the Care-O-bot to assist in
domestic environments.
Care-O-bot 4 can be equipped with one or two arms (or no arms at all, which surely limits its applications). With the display integrated into its head, it can show different emotions.
It would be interesting to combine the physical abilities of the Care-
O-bot with those of Pepper, a robot developed by Aldebaran
Robotics.
Figure 7: The Pepper Robot, developed in 2015 by Aldebaran Robotics. Xavier Caré / Wikimedia Commons.
Pepper (Figure 7) was designed to interpret emotions by analyzing
facial expressions and tone of voice. However, it does not use this ability to support persons at home, but to influence buyers’ readiness stages (the willingness to make a purchase) and to assist with additional information, appraisals, suggestions, and endorsements. Although Pepper is probably more a marketing tool than an emotion-sensing companion, it crosses a border: by interpreting humans’ implicit behavior, it no longer merely reacts to commands but tries to respond to moods and emotions.
This opens the stage for strange new robots like Sophia (Figure 8),
a female humanoid robot developed by Hong Kong-based Hanson
Robotics.
Figure 8: The Sophia Robot, first shown in 2015 by Hanson
Robotics. Courtesy of Hanson Robotics.
The company claims that the robot learns and adapts to human behavior using artificial intelligence. Indeed, there are multiple videos of Sophia showing authentic responses to questions. However, these responses are not generated at runtime but pre-programmed – the AI only selects the best-fitting one to create an illusion of understanding. Thus, the robot resembles common chatbots. The intriguing feature is the authentic facial expressions, which match the conversation.
Just like the mechanical Turk (section 2, Historical Background),
Sophia fascinates the audience more by a clever illusion than by actual ability. Accordingly, in a widely viewed CNBC interview with its creator David Hanson, the Sophia robot claims
that it “hopes to do things such as go to school, study, make art,
start a business, even have my own home and family” [10]. In this
light, it is not surprising that in October 2017 Sophia became the
first robot to receive citizenship of a country: both Saudi Arabia
and Hanson Robotics surely appreciated the echo created by this
media scoop.
These examples show that robotic platforms are already on offer and can be configured for a wide range of “social” applications. It is obvious that fantastic new possibilities will emerge – but at the same time, there will also be an increasing number of “weird” things, of ethically controversial developments. Scheutz & Arnold provokingly title their 2016 HRI paper [20]: Are We Ready for Sex Robots? It is a question of acceptance whether robots like “Synthea Amatus” will ever be considered appropriate. Thus, before we
present the experts’ perspective on social robots, we briefly
introduce the Technology Acceptance Model.
3.2 Acceptance of Social Robots
There are several models to assess the acceptance of new
technologies. A common one is the Technology Acceptance Model
(TAM). It was developed by Davis in 1989 [3] and posits that the
individual adoption and use of information technology are
determined by perceived usefulness and perceived ease of use.
Eleven years later, the model was extended by Venkatesh and Davis [25] (TAM2, Figure 9) in an attempt to further decompose acceptance into societal, cognitive and psychological factors.
Figure 9: The revision of the Technology Acceptance Model from 2000 (TAM2). Subjective norm, image, job relevance, output quality and result demonstrability (moderated by experience and voluntariness) feed into perceived usefulness (U) and perceived ease of use (E), which drive the attitude towards using (A), the behavioral intention to use (BI) and actual system use.
As Hornbæk & Hertzum explain in a recent review on
developments in technology acceptance and user experience [8],
TAM has long outgrown its origins in IT research and its key constructs have been refined for different disciplines. Additional constructs supplement perceived usefulness and perceived ease of use, like perceived enjoyment [11], adding experiential and hedonic aspects to TAM.
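To make the model’s structure concrete, the following minimal Python sketch (our illustration with synthetic ratings, not data or code from the TAM literature) estimates how strongly perceived usefulness (U) and perceived ease of use (E) predict the behavioral intention to use (BI) via ordinary least squares:

```python
# Minimal sketch: TAM's core claim, estimated as a linear regression
# over 7-point Likert ratings. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(42)
n = 200  # hypothetical respondents

U = rng.integers(1, 8, size=n)  # perceived usefulness, 1..7
E = rng.integers(1, 8, size=n)  # perceived ease of use, 1..7

# Simulate intention with U weighted more strongly than E (a pattern
# TAM studies typically report), plus noise; clip back to the scale.
BI = np.clip(0.6 * U + 0.3 * E + rng.normal(0, 0.8, size=n), 1, 7)

# Ordinary least squares: BI ~ b0 + b1*U + b2*E
X = np.column_stack([np.ones(n), U, E])
b, *_ = np.linalg.lstsq(X, BI, rcond=None)
print(f"intercept={b[0]:.2f}, weight(U)={b[1]:.2f}, weight(E)={b[2]:.2f}")
```

Real TAM studies estimate such weights with regression or structural equation models over validated questionnaire items; the sketch only illustrates the direction of the postulated relations.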
4. STUDY
The study was conducted during the Active and Assisted Living
(AAL) Forum 2016 in St. Gallen, Switzerland, within a workshop
called “From Recognizing Motion to Emotion Awareness.
Perspectives for Future AAL Solutions”. In the survey, we
incorporated elements of TAM and of Delphi studies [13], asking the experts to make time predictions about certain events.
4.1 Setup, Population and Data Gathering
The population consisted of the 20 experts in this workshop, aged 30 to 57 years (M = 40.3, SD = 9.7), of whom 16 were male and 4 female. 11 experts came from academia and 11 from the business domain (attribution to both domains was possible). The experts had backgrounds in information science, health, and gerontology.
For the expert survey, we used a questionnaire with just ten items. It includes six statements, which could be rated on a seven-point Likert scale, and four Delphi-style questions, where the experts were asked to predict future events. The experts answered the questions for themselves, not in the role of elderly people or people in general. For the analysis, we used t-tests and ANOVA (analyses of variance).
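As a minimal sketch of this pipeline (the ratings below are invented for illustration and are not the study’s raw data), seven-point Likert responses can be compared with the standard t-test and one-way ANOVA routines in SciPy:

```python
# Hypothetical sketch: comparing 7-point Likert ratings across items
# with a two-sample t-test and a one-way ANOVA. Invented data only.
from scipy import stats

ratings_a = [5, 6, 5, 4, 7, 6, 5, 6]  # e.g. acceptance of activity A
ratings_b = [3, 4, 2, 3, 4, 3, 2, 4]  # e.g. acceptance of activity B
ratings_c = [2, 1, 3, 2, 2, 1, 2, 3]  # e.g. acceptance of activity C

t_stat, p_t = stats.ttest_ind(ratings_a, ratings_b)
print(f"t-test A vs B: t={t_stat:.2f}, p={p_t:.4f}")

f_stat, p_f = stats.f_oneway(ratings_a, ratings_b, ratings_c)
print(f"ANOVA A/B/C: F={f_stat:.2f}, p={p_f:.4f}")
```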
4.2 Quantitative Data and Qualitative Findings
4.2.1 Devices for Tracking
In the first question, we wanted to check how familiar the experts
were with “giving away” parts of their personal data to devices:
Have you already used one or more devices for tracking activities?
Of the 20 experts, 17 (85%) had used such devices. The three devices mentioned most frequently were the Fitbit, the Apple Watch and the Jawbone. This shows a basic readiness to share
personal data with sensing devices.
4.2.2 Emotion Recognition in Health Care?
In this question, we focused on the domain of healthcare. This is
both an area of high demand for qualified work and an area where
delicate questions are handled: What is your general attitude
towards automated systems in health care? The question was divided into two sub-questions: the acceptance of automated systems in health care with / without emotion recognition.
Interestingly, the mean acceptance for systems without emotion
recognition (M = 5.2, SD = 1.3) hardly differs from the acceptance
of systems with emotion recognition (M = 5.1, SD = 1.6), although for the latter the standard deviation is higher. Several experts’
statements show that they see the benefits of emotion recognition:
“It’s important to know whether a person likes using a system and
if [it] makes them happy” (p1). However, there are also some who
point out problems: “I am concerned about the level of consent
people can give” (p14). A few experts (two selected “2” out of 7, two others “3”) see the potential of an ethical crisis:
“misinterpretations can occur – what about incapacitating persons”
(p19). Therefore, in spite of the seemingly similar results in
acceptance, systems with emotion recognition are far more
controversial. This is also reflected by the low Pearson correlation
between the two values (r = .07, p > .82 – there is no statistically
significant connection).
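Such a correlation check takes only a few lines; the following sketch uses invented ratings, not the study’s data:

```python
# Hypothetical sketch: Pearson correlation between the two acceptance
# ratings (without / with emotion recognition). Invented data only.
from scipy import stats

without_er = [5, 6, 4, 5, 7, 5, 6, 4, 5, 6]
with_er = [6, 3, 5, 7, 4, 6, 3, 5, 6, 4]

r, p = stats.pearsonr(without_er, with_er)
# An r near 0 with a high p-value means an expert's rating of one
# variant tells us little about their rating of the other.
print(f"r={r:.2f}, p={p:.2f}")
```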
4.2.3 Technological Level of Emotion Recognition
In the next question, we wanted to get an impression of the experts’ view on the maturity of emotion recognition technology. On a scale from
1 to 7 we asked them about their perceived level of development (1
= basic research; 7 = market ready) of the following seven methods
for emotion recognition: movements and gestures, electrodermal
activity, brainwaves, pulse, facial thermal regions, eye movements
and voice.
Figure 10: Perceived level of development of seven methods for emotion recognition, with standard deviations (means: movements and gestures 4.8, electrodermal activity 4.6, brainwaves 3.5, pulse 4.9, facial thermal regions 3.7, eye movements 4.8, voice 4.7).
The diagram shows that especially brainwaves and facial thermal
regions are considered future technologies. When grouping these
future technologies and comparing them with the five other
methods, a t-test shows a highly significant difference (p < .00).
Conversely, a counter-check with an ANOVA over the five remaining technologies shows no statistically significant differences in the experts’ perception (p > .96).
4.2.4 Tracking of Patients’ Emotions?
In the next question, we wanted to determine what problems
legitimize the use of emotion recognition with patients. We
explained: Some persons in nursing homes or hospitals can
articulate wishes and needs only partially. In which cases would
you consider it reasonable to assess such a person’s feelings
electronically? We provided four sub-questions focusing on different user groups:
• persons who can no longer communicate
(e.g. stroke, locked-in syndrome)
• persons who have cognitive impairments
(e.g. Alzheimer’s disease)
• persons who suffer from psychological illnesses
(e.g. depression)
• all patients, as a standard procedure
Figure 11: Acceptance of emotion recognition for four different groups of patients, with standard deviations (means: no communication 6.4, cognitive impairments 5.6, psychological illnesses 4.7, all patients 3.1).
The diagram shows that the level of acceptance decreases
continuously – and it decreases in statistically highly significant
steps, as an ANOVA reveals (p < .00). It is not surprising that for patients with no alternatives the acceptance is high. However, even
for patients with cognitive impairments or psychological illnesses
(e.g. the users of the Paro robot, see Related Work), the acceptance
is relatively high.
4.2.5 Social Robots Acting in Health and Care?
This question takes the ethical dilemma a step closer to the experts.
In a hypothetical scenario, we asked: Imagine you were in need of
care: which activities would you prefer to be conducted by humans
and which […] by automated systems / robots?
We provided six sub-questions focusing on different activities in
health care and nursing: being put into another bed (1), washing (2),
going to the toilet (3), taking medicine (4), blood draw (5), and
everyday talk (6).
Figure 12: Acceptance of six common health-related activities for social robots, with standard deviations (means: being put into another bed 3.5, washing 3.6, going to the toilet 4.7, taking medicine 4.3, blood draw 4.4, everyday talk 2.1).
Here the results surprised us. We expected that “being put into
another bed”, a common activity known to potentially cause back
problems for caregivers – and also washing – would be more
acceptable than intimate activities like going to the toilet or crucial
medical activities (taking medicine, blood draw). This is no
representative international study – but if these results were typical
for European patients, Riken’s Robear (see Related Work) would
have a hard time in the clinics.
A t-test shows no statistically significant difference between
activities 1 and 2 (p > .86) and an ANOVA shows no statistically
significant difference between the activities 3 to 5 (p > .81).
However, the aggregated groups 1 and 2 versus 3 to 5 versus 6 are,
as the diagram already indicates, statistically highly significantly
different (p < .00).
In summary, the experts seem to give the robots credit for health-
related tasks (3 to 5), whereas nursing tasks (1 and 2) are preferred
to be conducted by humans. We expected that this preference might
correlate with age, leading to the hypothesis: the older a person, the
less he or she wants to be nursed by a robot. The correlations of r = -0.16 (age and bedding) and r = -0.13 (age and washing) may not be large, but both are statistically highly significant (p < .00). In this case, the acceptance of technology decreases with age: younger persons more frequently state that they prefer to have robots do nursing activities.
The “everyday talk” is an activity we expected to be rejected.
Clearly, in spite of the fascination for robots with special abilities
(see the “mechanical Turk” in Historical Background and the robot
Sophia in Related Work), humans want to talk to fellow humans.
4.2.6 Other Potential Areas for Social Robots?
In the final Likert-scale question, we asked: In which areas
would you consider it reasonable to use social robots or other
systems with emotion recognition? We provided seven sub-questions focusing on different forms of social relationships:
replacement for pet (1), companion for lonely seniors (2),
companion for childless couples (3), sexual substitute (4),
emotional replacement for partner (5), court statements (6), anti-
terror measures (7). We are well aware that almost all of these sub-
questions would require further elaboration and discussion.
Figure 13: Acceptance of seven potential activities for social robots, with standard deviations (means: replacement for pet 4.3, companion for lonely seniors 4.6, companion for childless couples 2.9, sexual substitute 3.2, emotional replacement for partner 2.9, court statements 2.7, anti-terror measures 4.4).
The diagram indicates that there are two groups of activities. These
groups are relatively homogenous: two ANOVAs show no statistically significant difference within group 1 with activities 1, 2, and 7 (p > .72) and within group 2 with activities 3 to 6 (p > .66). However, an ANOVA between these groups shows that they are
highly significantly different (p < .00). While group 2 tends towards
rejection, the first group tends towards acceptance – maybe because
similar solutions already exist:
• The Paro robot (see Related Work) is both a replacement for a
pet and a companion for seniors
The established use of fighting drones by the military partially explains the relatively high acceptance in the area of “anti-terror
measures”. However, this question also resulted in the highest
standard deviation throughout the survey (SD = 2.4).
Although the ANOVA showed no difference within group 2, the data indicate that using social robots as an emotional replacement (activities 3 and 5) is considered less acceptable than using them “just” as a sexual substitute (activity 4). Finally, the experts consider social robots for investigating court statements least acceptable. Obviously, the realm of emotions, including deception, is the sacred area of humanity – just as the video “Hello World” indicates (see Introduction).
4.2.7 Delphi Predictions
In this sub-section, we present the experts’ predictions regarding
certain future events. If the experts indicated time ranges (e.g. 2030
to 2040), we used the mean (here: 2035). The term “widespread”
was defined as “comparable to the current use of GPS to track
disoriented seniors”. The experts answered the following Delphi
questions:
• In which year do you expect a widespread use of automated
systems, which support caregivers at physical work (“care
robots”)?
• In which year do you expect the market entry of a system (in
any area of society) that can estimate a person’s emotional
state (“social robots”)?
• In which year do you expect a widespread use of systems in
care that can estimate a person’s emotional state (“social
robots”)?
• In which year do you expect a widespread use of sensors
implanted into the human body?
Figure 14: Delphi predictions on four future technological
developments, with standard deviations. The smaller green
bar excludes an outlier in prediction 3.
For the first question, the answer tendency is clear and the standard deviation is low (M = 2024.3, SD = 5.1). Clearly, systems like the Care-O-bot or the Robear (see Related Work) show the potential. Just like care robots, emotion-sensing social robots are perceived as something coming relatively soon (M = 2024.5, SD = 7.8). With regard to emotion-sensing robots in care, it is mainly one outlier (prediction: year 2100) which creates the high variance (M = 2031.6, SD = 17.6). Without that outlier, the overall prediction goes down four years and the deviation is halved (M = 2027.6, SD = 6.1). Nevertheless, this example shows how controversial emotion sensing becomes once it is situated in a specific domain (as opposed to unspecified use).
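The following sketch illustrates the aggregation described above, reducing range answers to their midpoint and showing how a single extreme prediction inflates mean and standard deviation; the predictions are invented for illustration:

```python
# Hypothetical sketch of the Delphi aggregation: range answers are
# reduced to their midpoint, then mean and standard deviation are
# computed with and without an extreme prediction. Invented data.
import statistics

raw = ["2025", "2030 to 2040", "2028", "2022", "2100"]  # "2100" is the outlier

def to_year(answer: str) -> float:
    # "2030 to 2040" becomes 2035.0; single years pass through.
    years = [float(part) for part in answer.split(" to ")]
    return sum(years) / len(years)

years = [to_year(a) for a in raw]
print(f"all answers: M={statistics.mean(years):.1f}, SD={statistics.stdev(years):.1f}")

trimmed = [y for y in years if y < 2100]
print(f"no outlier: M={statistics.mean(trimmed):.1f}, SD={statistics.stdev(trimmed):.1f}")
```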
With the final prediction, we again wanted to see how the experts
react when things become “personal” (as in sub-section 4.2.5). An implanted sensor blurs the distinction between human and machine – a frequent topic in art, resulting in concepts like “Cyborgs” or “Replicants”. While the standard deviation is high (M = 2030.8, SD
= 11.9), the mean prognosis of 2030 is not even one generation
away from 2018.
5. PERSPECTIVES
Already the vast spectrum of social robots in the Related Work
section shows that we are on the edge of something new: artificial
intelligence and engineering allow robots with social behaviors to
become an everyday phenomenon. At the same time, the results of
the Study section indicate that most experts think that social robots
will eventually address even sensitive societal areas like emotional
companionship. If the experts’ optimistic time predictions are
checked against their skepticism regarding the application of social
robots in society, it becomes obvious that they perceive technology as fast moving – faster than the necessary ethical discussions (let alone legal adaptations) would require.
Society has to decide in which way technology should develop. Do
we just need robotic assistants, supporting activities like cleaning
or serving dinner? Beyond such assistance, there are personal
services like nursing or even companionship which contain
emotional elements. Their application should be guided by ethics
and regulated by law.
However, even if the ethical problems could be solved: if human society starts substituting service and knowledge work with social robots, could “artificial intelligence create an unemployment crisis”, as Ford [6] postulates? If a robot “starts a business”, as the Sophia robot claims it wants to: should the law allow humans to work for robots?
Should robots even be allowed to inherit money or property?
If we extrapolate the predicted development of service robots for
personal use, the substitution of humans can potentially spread to
intimate relationships. Relationships with social robots, which
simulate love and understanding authentically, even under the most
unlikely circumstances, will surely be much easier than
relationships with human partners. This can have a very damaging
societal impact. It is not surprising that the acceptance of social
robots seems to decrease with the level of face-to-face emotions
involved.
Finally, if social robots keep getting smarter and more social over the years: do we still have the right to treat them like commodities or slaves – or will there be robot rights, just as there are animal rights and human rights?
6. CONCLUSION
This paper aims to shed light on what future social robots should
be like. In the introduction, we illustrated the ambiguity of human
expectations regarding robots. In section 2, we provided a historical
background on robots and automated machinery from Talos to the
mechanical Turk. In the section Related Work, we inspected the
present, discussing current examples of social robots like Paro, the
Care-O-bot or Saudi Arabia’s new “citizen” Sophia.
In the section Study, we looked into the future with the help of 20
experts and presented findings on the desired appearance and
abilities of social robots. We found that the experts consider the basic technologies for emotion recognition to be fairly well developed (with the exception of brainwave analysis and facial thermal regions). When it comes to applying emotion sensing in health care,
the acceptance increases with the severity of the patients’ illness or
impairment. When social robots do not only sense but actually help,
the experts seem to approve health-related tasks (going to the toilet,
taking medicine, blood draw), whereas nursing tasks (putting
patients into another bed, washing) or small talk are highly
significantly preferred to be conducted by humans.
On a larger societal scale (beyond healthcare), we found that if
similar types of solutions are already in use, social robots are rated
significantly more acceptable. This applies to the three diverse
areas ‘replacement for pet’, ‘companion for lonely seniors’, and
‘anti-terror measures’. Acceptance decreases highly significantly in
unfamiliar areas: ‘companion for childless couples’, ‘sexual
substitute’, ‘emotional replacement for partner’, and ‘court
statements’. Additionally, acceptance seems to go down with the
level of face-to-face emotions involved.
In the introduction, we asked whether humans desire social robots capable of emotion sensing or even ethical reasoning. Looking at these findings, the experts perceive such capabilities as currently suited primarily for clinical and health-related situations.
In the study’s Delphi section, we presented the experts’ predictions
regarding future events. Already for 2024, they forecast a relatively widespread use of care robots as well as of emotion-sensing social robots. However, for emotion-sensing robots in health care,
the prognosis goes up by at least three years, so this area is
considered especially sensitive. Nevertheless, the experts think that
such sensitive areas will eventually be covered: even implanted
sensors are predicted to be fairly common by the year 2031.
If these optimistic time predictions are checked against the
skepticism regarding the application of social robots in society, it
becomes obvious that the experts perceive technology as moving
fast – faster than the necessary ethical discussions (let alone legal
adaptations) would require. This work helps to make this
discrepancy more transparent and hopefully feeds into a more
informed design of future social robots.
7. LIMITATIONS AND FUTURE WORK
We are well aware that this work is just a glimpse into the future of
social robots. Many of the questions require further elaboration and
discussion – e.g. the use of social robots as an emotional or sexual
substitute or the legal status of social robots and their actions.
We aim to further develop the survey and apply it to a wider audience, gathering data that are more representative of how different groups of society perceive social robots. For
example, we want to see how gender or cultural background
influence acceptance. Such findings will inform the future design
of social robots, their appearance and their abilities.
8. ACKNOWLEDGMENTS
This research is related to the project “Perspectives on Social
Robots” within the AGYA (Arab German Young Academy of
Sciences and Humanities). The first and the third author are AGYA
members. The AGYA is funded by the German Federal Ministry of
Education and Research.
9. REFERENCES
[1] Broekens, J. et al. 2009. Assistive social robots in elderly
care: a review. Gerontechnology. 8, 2 (Apr. 2009).
DOI:https://doi.org/10.4017/gt.2009.08.02.002.00.
[2] Davies, N. 2016. Can robots handle your healthcare?
Engineering Technology. 11, 9 (Oct. 2016), 58–61.
DOI:https://doi.org/10.1049/et.2016.0907.
[3] Davis, F.D. et al. 1989. User Acceptance of Computer
Technology: A Comparison of Two Theoretical Models.
Management Science. 35, 8 (Aug. 1989), 982–1003.
DOI:https://doi.org/10.1287/mnsc.35.8.982.
[4] Elprama, S.A. et al. 2017. Attitudes of Factory Workers
Towards Industrial and Collaborative Robots. Proceedings
of the Companion of the 2017 ACM/IEEE International
Conference on Human-Robot Interaction (New York, NY,
USA, 2017), 113–114.
[5] Entertainment Robot “aibo” Announced: 2017.
https://www.sony.net/SonyInfo/News/Press/201711/17-
105E/index.html. Accessed: 2017-11-17.
[6] Ford, M. 2013. Could artificial intelligence create an
unemployment crisis? Commun. ACM. 56, 7 (Jul. 2013), 37–
39. DOI:https://doi.org/10.1145/2483852.2483865.
[7] Graf, B. et al. 2004. Care-O-bot II—Development of a Next
Generation Robotic Home Assistant. Autonomous Robots.
16, 2 (Mar. 2004), 193–205.
DOI:https://doi.org/10.1023/B:AURO.0000016865.35796.
e9.
[8] Hornbæk, K. and Hertzum, M. 2017. Technology
Acceptance and User Experience: A Review of the
Experiential Component in HCI. ACM Trans. Comput.-
Hum. Interact. 24, 5 (Oct. 2017), 33:1–33:30.
DOI:https://doi.org/10.1145/3127358.
[9] Hose, M. 1997. Deus ex machina.
[10] Hot Robot At SXSW Says She Wants To Destroy Humans:
2016. https://www.youtube.com/watch?v=W0_DPi0PmF0.
Accessed: 2016-03-13.
[11] Liao, C.-H. et al. 2008. The Roles of Perceived Enjoyment
and Price Perception in Determining Acceptance of
Multimedia-on-Demand. International Journal of Business
and Information. 3, 1 (2008).
[12] de Lima Salge, C.A. and Berente, N. 2017. Is That Social
Bot Behaving Unethically? Commun. ACM. 60, 9 (Aug.
2017), 29–31. DOI:https://doi.org/10.1145/3126492.
[13] Linstone, H.A. and Turoff, M. eds. 1975. The Delphi
method: techniques and applications. Addison-Wesley Pub.
Co., Advanced Book Program.
[14] Lokhorst, G.-J.C. 2011. Computational Meta-Ethics:
Towards the Meta-Ethical Robot. Minds and Machines. 21,
2 (May 2011), 261–274.
DOI:https://doi.org/10.1007/s11023-011-9229-z.
[15] McGlynn, S. et al. 2014. Therapeutic Robots for Older
Adults: Investigating the Potential of Paro. Proceedings of
the 2014 ACM/IEEE International Conference on Human-
robot Interaction (New York, NY, USA, 2014), 246–247.
[16] Overton, K. 2015. “Hello World”: A Digital Quandary and
the Apotheosis of the Human. Proceedings of the 33rd
Annual ACM Conference Extended Abstracts on Human
Factors in Computing Systems (New York, NY, USA,
2015), 179–179.
[17] Papadopoulos, J. 1994. Talos. Lexicon Iconographicum
Mythologiae Classicae. Artemis & Winkler Verlag.
[18] Riek, L.D. 2017. Healthcare Robotics. Commun. ACM. 60,
11 (Oct. 2017), 68–78.
DOI:https://doi.org/10.1145/3127874.
[19] Šabanović, S. et al. 2013. PARO robot affects diverse
interaction modalities in group sensory therapy for older
adults with dementia. 2013 IEEE 13th International
Conference on Rehabilitation Robotics (ICORR) (Jun.
2013), 1–6.
[20] Scheutz, M. and Arnold, T. 2016. Are We Ready for Sex
Robots? The Eleventh ACM/IEEE International Conference
on Human Robot Interaction (Piscataway, NJ, USA, 2016),
351–358.
[21] Schmidt, W. 1899. Pneumatica et automata.
[22] Shamsuddin, S. et al. 2012. Initial response of autistic
children in human-robot interaction therapy with humanoid
robot NAO. 2012 IEEE 8th International Colloquium on
Signal Processing and its Applications (Mar. 2012), 188–
193.
[23] Strocka, V.M. 2002. Der Apollon des Kanachos in Didyma
und der Beginn des strengen Stils. Verlag Walter de Gruyter.
[24] Strom, J.H. et al. 2009. Omnidirectional Walking Using
ZMP and Preview Control for the NAO Humanoid Robot.
RoboCup. 5949, (2009), 378–389.
[25] Venkatesh, V. and Davis, F.D. 2000. A Theoretical
Extension of the Technology Acceptance Model: Four
Longitudinal Field Studies. Management Science. 46, 2
(Feb. 2000), 186–204.
DOI:https://doi.org/10.1287/mnsc.46.2.186.11926.
[26] Wolfmayr, S. 2009. Die Schiffe vom Lago di Nemi. Diplom.
(2009), 37–39.