Perspectives on Social Robots. From the Historic
Background to an Experts’ View on Future Developments
Oliver Korn
Offenburg University
Badstr. 24, 77652 Offenburg, Germany
Gerald Bieber
Fraunhofer IGD
Joachim-Jungius-Str. 11, 18059 Rostock, Germany
Christian Fron
University of Heidelberg
Marstallhof 4, 69117 Heidelberg, Germany
Abstract
Social robots are robots interacting with humans not only in
collaborative settings, but also in personal settings like domestic
services and healthcare. Some social robots simulate feelings
(companions) while others just help lifting (assistants). However,
they often incite both fascination and fear: what abilities should
social robots have and what should remain exclusive to humans?
We provide a historical background on the development of robots
and related machines (1), discuss examples of social robots (2) and
present an expert study on their desired future abilities and
applications (3) conducted within the Forum of the European
Active and Assisted Living Programme (AAL).
The findings indicate that most technologies required for the social
robots’ emotion sensing are considered ready. For care robots, the
experts approve health-related tasks like drawing blood while they
prefer humans to do nursing tasks like washing. On a larger societal
scale, the acceptance of social robots increases highly significantly
with familiarity, making health robots and even military drones
more acceptable than sex robots or child companion robots for
childless couples. Accordingly, the acceptance of social robots
seems to decrease with the level of face-to-face emotions involved.
CCS Concepts
- Human-centered computing~Empirical studies in HCI
- Human-centered computing~Collaborative and social computing devices
- Human-centered computing~User studies
- Human-centered computing~Empirical studies in interaction design
- Human-centered computing~Accessibility theory, concepts and paradigms
- Human-centered computing~Accessibility systems and tools
- Social and professional topics~History of hardware
- Social and professional topics~Codes of ethics
- Social and professional topics~Assistive technologies
- Computing methodologies~Cognitive robotics
- Computing methodologies~Robotic planning
- Applied computing~Consumer health
Keywords
Social Robots; Robotics; Assistive Robotics; Companion Robot;
Artificial Intelligence; Technology Acceptance
1. Introduction
The term “robot” was probably first used in Karel Čapek’s play
R.U.R. (Rossum’s Universal Robots) from the 1920s. The term is
derived from the Czech word “robota”, which means forced labor.
Already in this play, the robots revolt against their creators.
The concept of automated human-like machines has always incited
both fear and fascination, even in the antique roots described in
the Background section. This cocktail of fear and fascination
becomes more intense the closer humans and robots come. And
robots have long left the cage of industrial settings: they work
together with humans collaboratively. As Elprama & El Makrini
show, this is not always appreciated by factory workers [4].
However, today robots come even closer, following humans into
personal settings like home and healthcare.
Figure 1: The idea of robots and humans working hand in
hand (collaboratively) or interacting in personal settings is
not new but dates back to antiquity.
In such intimate settings, “social robots” are required. While
assistive social robots focus on service functions (e.g. helping to lift
elderly persons), companion robots focus more on the emotional
aspects of interaction [1]. In any case, such robots have to look
harmless and friendly (Figure 1) to reduce fear. They should be able
to respond adequately to human behavior and ideally also to human
emotions: a depressive patient needs to be addressed differently
than an athlete recovering from a leg fracture.
However, should social robots also simulate emotions (like
smiling) to ease nonverbal communication? As they do not feel
emotions as humans do (lacking their biological substrate of a brain
made of neurons and a nervous system spreading throughout a
body), showing or simulating feelings could be considered lying.
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or
distributed for profit or commercial advantage and that copies bear this notice and
the full citation on the first page. Copyrights for components of this work owned
by others than the author(s) must be honored. Abstracting with credit is permitted.
To copy otherwise, or republish, to post on servers or to redistribute to lists,
requires prior specific permission and/or a fee. Request permissions from
permissions@acm.org.
PETRA '18, June 26–29, 2018, Corfu, Greece
© 2018 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-6390-7/18/06…$15.00
In his paper and the corresponding video “Hello World”, first shown
at the CHI 2015 conference in Seoul [16], Kyle Overton reflects on
this ambiguity of human expectations: while machines are expected
to learn about humans and augment them in every possible way,
they “understand nothing” because they “can’t feel the loss of a
loved one”. In the authors’ opinion, it is questionable whether such
“true emotions” – even if they could eventually be created – should
ever be within the design space of computer scientists and engineers.
Several scientists have made the claim that “ethically correct
robots” should be able to reason about right and wrong. De Lima
recommends to create conversational rules [12] and Lokhorst even
presents a framework for “Computational Meta-Ethics” [14].
We think that ethical decisions are almost impossible to implement
without an emotional substrate corresponding to that of a human.
So again the question remains: even if we could design and build
emotion-sensing robots capable of ethical reasoning – do they
reflect a human desire, do we want them?
To get a better understanding of what future social robots should be
like, we first take a deep look into the past, providing a historical
perspective on robots and automated machinery in section 2
(Background). We then look at the present and discuss examples of
social robots in section 3 (Related Work). Finally, in section 4
(Study) we look into the future and present findings on the desired
appearance and abilities of social robots, gained in a study with 20
experts at the Forum of the European Active and Assisted Living
Programme. We sum up the findings in section 5 (Conclusion).
2. Historical Background
In the broadest understanding of the term, “social robots” represent
the universal longing to create a model of humans to match personal
desires and necessities. The ancient author Ovid (43 BC - 17 AD)
describes an early example in his book Metamorphoses
(Transformations). Right after the story about the transformation of
Propoetus’ ruthless daughters into stone as part of a divine penalty,
Ovid tells the story of Pygmalion of Cyprus: in his opposition
towards the imperfection of women, he focused all his passion and
time into creating a statue of his female ideal. He falls in love with
his creation, treats it like a real female companion, offers gifts and
shares his bed with it – comparable to today’s users of sex robots
like “Synthea Amatus” (see section 3.1, Related Work).
It is only thanks to the help and mercy of the Cyprian goddess
Venus that the statue becomes alive. In contrast to the later versions
of the story, which add elements of tragedy, the original version has
a “happy ending”: the birth of their child Paphos. Nevertheless, the
story’s moral is quite clear: human creativity and spirit of invention
is limited to the idealized figural imitation of life while the creation
of life itself depends on divine impetus. In the same way, the giant
bronze automaton Talos, a mythological guardian protecting the
island of Crete and Zeus’ girlfriend Europa, was alive only by
divine authority: created by Hephaestus, the god of handcraft [17].
As a 1920 artistic illustration shows, Talos was envisioned as a
giant bronze man (Figure 2).
Throughout classical antiquity, there is a strong connection
between the creation of “artificial beings” or automats through
mechanics and the divine. Statues resembling deities and ideals of
human beauty in several cases featured complex mechanics. For
example, the sculptor and iron caster Canachos of Sikyon added a
mechanically movable deer to his bronze statue of Apollo in
Branchidai [23]. It also seems highly probable that the bronze statue
of Diana on one of the luxurious Nemi ships, built for the emperor
Caligula, was standing on a rotating platform [26].
Figure 2: Talos of Crete may well be the first “robot”,
although it was created by a god. Illustration from 1920 by
Sybil Tawse, Public Domain.
Moreover, crane-like stage machinery was invented for Attic
tragedy to allow “deities” to suddenly appear in the air on stage
[9]. Heron of Alexandria describes several other automatic
machines in the context of ancient religious life:
- two statues automatically giving libations whenever there is
  an incense offering [21]
- automatically opening doors of temples [21]
- a statue of Hercules shooting a snake statue on a tree with an
  arrow [21], and many more.
All these automats were designed to amaze the audience and
illustrate divine power. They kept being instruments of fascination
rather than innovation or technical progress.
This attitude hardly changed throughout the centuries. Even in the
late 18th century, a time when natural sciences already were well
established, society was surprisingly willing to accept automated
machinery with unlikely abilities. An excellent example of this
attitude is the success of the “Mechanical Turk”.
Figure 3: The fake “Mechanical Turk” fascinated Europeans
and Americans. Copper engraving from 1783, Public Domain.
The mechanical Turk or “automaton chess player” was a fake: a
human chess master hid inside, operating the machine. However,
the audience did not know that, and the machine’s creator
Wolfgang von Kempelen toured with it throughout Europe and
even through the United States, showing it to nobility and political
leaders like Napoleon Bonaparte and Benjamin Franklin.
We think that even today, many people look at robots and automats
with a special kind of fascination and a willingness to attribute
physical and mental skills far beyond the machines’ actual capabilities.
In the Related Work section, we will discuss examples of this.
3. Related Work
In ISO 8373:2012, “service robots” are defined as robots that
“perform useful tasks for humans, which aid in physical tasks such
as helping people to move around”. In contrast to service robots,
social robots are designed to communicate with persons. Chatbots
or avatars are also designed for that purpose, but a social robot is
physically embodied. It interacts and communicates with humans
or other autonomous physical agents by following social behaviors
and rules. In this section, we present examples of social robots as
well as a common model for studying technology acceptance.
3.1 Examples of Social Robots
A natural and intuitive communication between man and machine
occurs when social robots resemble animals. Accordingly, pets
have been popular social robots. In 1999, Sony introduced AIBO
(ERS 110) as a touch sensitive and interacting pet. It was designed
as a friend and partner for entertainment. Indeed, after more than a
decade, Sony has decided to resurrect the iconic robot pet. In a press
release from November 2017 [5], the new model is simply called
“aibo” (ERS-1000). While AIBO is definitely a toy, there are more
intricate solutions, blurring the border between serious and
entertainment applications.
The Nao Robot (Figure 1), developed in 2006 by Aldebaran
Robotics, looks like a toy. However, it can be programmed to show
complex behaviors and interactions, even mimicking human
behavior. One of its major applications was optimizing walk
engines [24]. Interestingly, as a programmable humanoid platform
it proved to be so flexible that it has even been tested in
human-robot interaction therapy with autistic children [22].
This borderline between programmable toy or pet and useful
robotic machine is perfectly exemplified by another popular
example. Since 2009, the social robot Paro (Figure 4) supports
therapy and care. As an artificial harp seal, it was designed for
elderly people and patients of hospitals and nursing homes.
Figure 4: Paro is designed to interact with elderly patients of
hospitals and nursing homes. Courtesy of Aaron Biggs.
Paro responds to petting with body movements, by opening and
closing its eyes as well as with sounds. It even squeals if handled
too strongly. Studies show positive effects on older adults’ activity
[15, 19].
In the field of nursing homes, not only communication and social
interaction are needed: patients regularly require massive physical
support, e.g. help to get up or to be repositioned in bed. The
caregiver robot Robear supports patients and caregivers with
physical strength. It has been given a cartoon look to avoid the
“uncanny valley” effect [2] that causes humans to react badly to
not-quite convincing humanoids. The Japanese research institute
Riken developed Robear in 2015, but it is not commercially used
and barely the subject of published studies so far.
Figure 5: Robear holding a person. Courtesy of Riken.
In healthcare and elderly care, robots can be considered a new tool:
they improve current instruments. A meta-study on “assistive social
robots in elderly care” was provided in 2009 by Broekens et al.
[1]. A recent overview of healthcare robotics is given by Riek [18].
However, what happens, if social robots leave these secluded
“professional” areas and enter the private homes of healthy people,
offering domestic support? Surely, a robotic butler should
comply with established rules of good service, like reliability,
discretion and unobtrusive service. The Care-O-bot, developed
by Fraunhofer IPA [7], is an example of a mobile robot assistant,
which actively supports humans in domestic environments.
Figure 6: Fraunhofer designed the Care-O-bot to assist in
domestic environments.
Care-O-bot 4 can be equipped with one or two arms (or no arms at
all, which surely limits applications). With the display integrated in
its head, it can show different emotions.
It would be interesting to combine the physical abilities of the Care-
O-bot with those of Pepper, a robot developed by Aldebaran Robotics.
Figure 7: The Pepper Robot, developed 2015 by Aldebaran
Robotics. Xavier Caré / Wikimedia Commons.
Pepper (Figure 7) was designed to interpret emotions by analyzing
facial expressions and tone of voice. However, it is not using this
ability to support persons at home, but to influence buyers’
readiness stages (the willingness to make a purchase) and to assist
with additional information or appraisals, suggestions, and
endorsements. Although Pepper is probably more a marketing tool
than an emotion-sensing companion, it crosses a border: by
interpreting humans’ implicit behaviour, it no longer reacts to
commands but tries to correspond with moods and emotions.
This opens the stage for strange new robots like Sophia (Figure 8),
a female humanoid robot developed by Hong Kong-based Hanson Robotics.
Figure 8: The Sophia Robot, first shown in 2015 by Hanson
Robotics. Courtesy of Hanson Robotics.
The company claims that the robot learns and adapts to human
behavior using artificial intelligence. Indeed, there are multiple
videos of Sophia showing authentic responses to questions.
However, these responses are not generated at runtime but pre-
programmed – the AI only selects the best fitting one to create an
illusion of understanding. Thus, the robot resembles common
chatbots. The intriguing feature is the authentic facial expressions,
which match the conversation.
Just like the mechanical Turk (section 2, Historical Background),
Sophia fascinates the audience more by a clever illusion than by an
actual ability. Accordingly, in a widely viewed CNBC
interview with its creator David Hanson, the Sophia robot claims
that it “hopes to do things such as go to school, study, make art,
start a business, even have my own home and family” [10]. In this
light, it is not surprising that in October 2017 Sophia became the
first robot to receive citizenship of a country: both Saudi Arabia
and Hanson Robotics surely appreciated the echo created by this
media scoop.
These examples show that robotic platforms are already offered and
can be configured for a wide range of “social” applications. It is
obvious that fantastic new possibilities will emerge – but at the
same time, there will also be an increasing number of “weird”
things, of ethically controversial developments. Scheutz & Arnold
provokingly title their 2016 IEEE paper [20]: “Are We Ready for
Sex Robots?” It is a question of acceptance if robots like “Synthea
Amatus” will ever be considered appropriate. Thus, before we
present the experts’ perspective on social robots, we briefly
introduce the Technology Acceptance Model.
3.2 Acceptance of Social Robots
There are several models to assess the acceptance of new
technologies. A common one is the Technology Acceptance Model
(TAM). It was developed by Davis in 1989 [3] and posits that the
individual adoption and use of information technology are
determined by perceived usefulness and perceived ease of use.
Eleven years later, the model was extended by Venkatesh and Davis
[25] (TAM2, Figure 9) in an attempt to further decompose
acceptance into societal, cognitive and psychological factors.
Figure 9: The revision of the Technology Acceptance Model
from 2000 (TAM2).
As Hornbæk & Hertzum explain in a recent review on
developments in technology acceptance and user experience [8],
TAM has long grown out of research in IT and its key constructs
have been refined for different disciplines. There are additional
constructs to supplement perceived usefulness and perceived ease
of use like perceived enjoyment [11], adding experiential and
hedonic aspects to TAM.
4. Study
The study was conducted during the Active and Assisted Living
(AAL) Forum 2016 in St. Gallen, Switzerland, within a workshop
called “From Recognizing Motion to Emotion Awareness.
Perspectives for Future AAL Solutions”. In the survey, we
incorporated elements of TAM and of Delphi studies [13], when
we asked the experts to make time predictions about certain events.
4.1 Setup, Population and Data Gathering
The population consisted of 20 experts in this workshop, aged 30
to 57 years (M = 40.3, SD = 9.7) of which 16 were male and 4
female. 11 experts came from academia, 11 from the business
domain (experts could belong to both domains). The experts
had backgrounds in information science, health, and gerontology.
For the expert survey, we used a questionnaire with just ten items.
It includes six statements, to be rated on a seven-point
Likert scale, and four Delphi-style questions, where the experts
were asked to predict future events. The experts answered the questions
for themselves, not in the role of older adults or people in general. For
the analysis, we used t-tests and ANOVA (analyses of variance).
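The kind of comparisons reported below (t-tests and one-way ANOVA on Likert ratings) can be sketched in plain Python. The ratings in this sketch are hypothetical and only illustrate the computation, not the study's actual data:

```python
from statistics import mean

def welch_t(a, b):
    """Welch's two-sample t statistic (robust to unequal variances)."""
    ma, mb = mean(a), mean(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / (va / len(a) + vb / len(b)) ** 0.5

def anova_f(*groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance."""
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical 7-point Likert ratings for two sub-questions
t = welch_t([4, 5, 6, 5, 5], [4, 5, 6, 5, 5])  # identical groups -> t = 0
f = anova_f([1, 2, 3], [1, 2, 3], [7, 8, 9])   # one deviating group -> large F
```

A full analysis would additionally convert these statistics into p-values (e.g. with a statistics library), which is omitted in this sketch.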
4.2 Quantitative Data and Qualitative Findings
4.2.1 Devices for Tracking
In the first question, we wanted to check how familiar the experts
were with “giving away” parts of their personal data to devices:
Have you already used one or more devices for tracking activities?
Of the 20 experts, 17 (85%) have used such devices. The three
devices mentioned most frequently were the Fitbit, the Apple
Watch and the Jawbone. This shows a basic readiness to share
personal data with sensing devices.
4.2.2 Emotion Recognition in Health Care?
In this question, we focused on the domain of healthcare. This is
both an area of high demand for qualified work and an area where
delicate questions are handled: What is your general attitude
towards automated systems in health care? The question was
divided into two sub-questions: the acceptance of automated systems
in health care with / without emotion recognition.
Interestingly, the mean acceptance for systems without emotion
recognition (M = 5.2, SD = 1.3) hardly differs from the acceptance
of systems with emotion recognition (M = 5.1, SD = 1.6), although
for the latter the standard deviation is higher. Several experts’
statements show that they see the benefits of emotion recognition:
“It’s important to know whether a person likes using a system and
if [it] makes them happy” (p1). However, there are also some who
point out problems: “I am concerned about the level of consent
people can give” (p14). A few experts (two selected “2” out of 7,
two others “3”) see the potential of an ethical crisis:
“misinterpretations can occur – what about incapacitating persons”
(p19). Therefore, in spite of the seemingly similar results in
acceptance, systems with emotion recognition are far more
controversial. This is also reflected by the low Pearson correlation
between the two values (r = .07, p > .82; there is no statistically
significant connection).
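The Pearson correlation used in this comparison can be computed directly from the paired ratings. A minimal sketch with made-up pairs (not the survey data):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Perfectly linear ratings correlate fully; reversed ratings negatively
r_pos = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # close to 1.0
r_neg = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])  # close to -1.0
```

An r near zero, as in the survey, means knowing an expert's rating on one sub-question tells us little about their rating on the other.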
4.2.3 Technological Level of Emotion Recognition
In the next question, we wanted to get an impression of the experts’
angle on technology in emotion recognition. Based on a scale from
1 to 7 we asked them about their perceived level of development (1
= basic research; 7 = market ready) of the following seven methods
for emotion recognition: movements and gestures, electrodermal
activity, brainwaves, pulse, facial thermal regions, eye movements
and voice.
Figure 10: Perceived level of development of seven methods
for emotion recognition, with standard deviations.
The diagram shows that especially brainwaves and facial thermal
regions are considered future technologies. When grouping these
future technologies and comparing them with the five other
methods, a t-test shows a highly significant difference (p < .00).
Consequently, a counter-check with an ANOVA over the five
technologies shows no statistically significant differences in the
experts’ perception among them (p > .96).
4.2.4 Tracking of Patients’ Emotions?
In the next question, we wanted to determine what problems
legitimize the use of emotion recognition with patients. We
explained: Some persons in nursing homes or hospitals can
articulate wishes and needs only partially. In which cases would
you consider it reasonable to assess such a person’s feelings
electronically? We provided four sub-questions focusing on
different user groups:
- persons who can no longer communicate
  (e.g. stroke, locked-in syndrome)
- persons who have cognitive impairments
  (e.g. Alzheimer’s disease)
- persons who suffer from psychological illnesses
  (e.g. depression)
- all patients, as a standard procedure
Figure 11: Acceptance of emotion recognition for four
different groups of patients, with standard deviations.
The diagram shows that the level of acceptance decreases
continuously and it decreases in statistically highly significant
steps, as an ANOVA reveals (p < 0.00). It is not surprising that for
patients with no alternatives the acceptance is high. However, even
for patients with cognitive impairments or psychological illnesses
(e.g. the users of the Paro robot, see Related Work), the acceptance
is relatively high.
4.2.5 Social Robots Acting in Health and Care?
This question takes the ethical dilemma a step closer to the experts.
In a hypothetical scenario, we asked: Imagine you were in need of
care: which activities would you prefer to be conducted by humans
and which […] by automated systems / robots?
We provided six sub-questions focusing on different activities in
health care and nursing: being put into another bed (1), washing (2),
going to the toilet (3), taking medicine (4), blood draw (5), and
everyday talk (6).
Figure 12: Acceptance of six common health-related activities
for social robots, with standard deviations.
Here the results surprised us. We expected that “being put into
another bed”, a common activity known to potentially cause back
problems for caregivers, and also washing would be more
acceptable than intimate activities like going to the toilet or crucial
medical activities (taking medicine, blood draw). This is no
representative international study – but if these results were typical
for European patients, Riken’s Robear (see Related Work) would
have a hard time in the clinics.
A t-test shows no statistically significant difference between
activities 1 and 2 (p > .86) and an ANOVA shows no statistically
significant difference between the activities 3 to 5 (p > .81).
However, the aggregated groups (1 and 2, versus 3 to 5, versus 6) are,
as the diagram already indicates, statistically highly significantly
different (p < .00).
In summary, the experts seem to give the robots credit for health-
related tasks (3 to 5), whereas nursing tasks (1 and 2) are preferred
to be conducted by humans. We expected that this preference might
correlate with age, leading to the hypothesis: the older a person, the
less he or she wants to be nursed by a robot. The correlations of r =
-0.16 (age and bedding) and r = -0.13 (age and washing) may not
be large, but both are statistically highly significant (p <
.00). In this case, the acceptance of technology decreases with age.
Younger persons frequently state that they prefer to have robots do
nursing activities.
Everyday talk (6) is an activity we expected to be rejected.
Clearly, in spite of the fascination for robots with special abilities
(see the “mechanical Turk” in Historical Background and the robot
Sophia in Related Work), humans want to talk to fellow humans.
4.2.6 Other Potential Areas for Social Robots?
In the final question with a Likert-scale we asked: In which areas
would you consider it reasonable to use social robots or other
systems with emotion recognition? We provided seven sub-
questions focusing on different forms of social relationships:
replacement for pet (1), companion for lonely seniors (2),
companion for childless couples (3), sexual substitute (4),
emotional replacement for partner (5), court statements (6), anti-
terror measures (7). We are well aware that almost all of these sub-
questions would require further elaboration and discussion.
Figure 13: Acceptance of seven potential activities for social
robots, with standard deviations.
The diagram indicates that there are two groups of activities. These
groups are relatively homogenous: two ANOVAs show no
statistically significant difference within group 1 with activities 1,
2, and 7 (p > .72) and within group 2 with activities 3 to 6 (p > .66).
However, an ANOVA between these groups shows that they are
highly significantly different (p < .00). While group 2 tends towards
rejection, the first group tends towards acceptance maybe because
similar solutions already exist:
- The Paro robot (see Related Work) is both a replacement for a
  pet and a companion for seniors.
- The established use of fighting drones by the military partially
  explains the relatively high acceptance in the area “anti-terror
  measures”. However, this question also resulted in the highest
  standard deviation throughout the survey (SD = 2.4).
Although the ANOVA showed no difference within group 2, the
data indicate that using social robots as emotional replacement
(activities 3 and 5) is considered less acceptable than using them
“just” as a sexual substitute (activity 4). Finally, the experts
consider social robots for investigating court statements least
acceptable. Obviously, the room for emotions – including deception
– is the sacred area of humanity, just as the video “Hello World”
indicates (see Introduction).
4.2.7 Delphi Predictions
In this sub-section, we present the experts’ predictions regarding
certain future events. If the experts indicated time ranges (e.g. 2030
to 2040), we used the mean (here: 2035). The term “widespread”
was defined as “comparable to the current use of GPS to track
disoriented seniors”. The experts answered the following Delphi questions:
- In which year do you expect a widespread use of automated
  systems, which support caregivers at physical work (“care
  robots”)?
- In which year do you expect the market entry of a system (in
  any area of society) that can estimate a person’s emotional
  state (“social robots”)?
- In which year do you expect a widespread use of systems in
  care that can estimate a person’s emotional state (“social
  robots”)?
- In which year do you expect a widespread use of sensors
  implanted into the human body?
Figure 14: Delphi predictions on four future technological
developments, with standard deviations. The smaller green
bar excludes an outlier in prediction 3.
For the first question, the answer tendency is clear, the standard
deviation is low (M = 2024.3, SD = 5.1). Clearly, systems like the
Care-O-bot or the Robear (see Related Work) show the potential.
Just like care robots, emotion-sensing social robots are perceived as
something coming relatively soon (M = 2024.5, SD = 7.8). With
regard to emotion-sensing robots in care, it is mainly one outlier
(prediction: year 2100) which creates the high variance (M =
2031.6, SD = 17.6). Without that outlier, the overall prediction goes
down four years and deviation is halved (M = 2027.6, SD = 6.1).
Nevertheless, this example shows how controversial emotion
sensing becomes once situated in a specific domain (as opposed to
unspecified use).
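The outlier handling described above (excluding one extreme prediction before recomputing mean and standard deviation) could be sketched as follows. The paper does not state how the outlier was identified; a simple z-score rule, applied here to hypothetical year predictions rather than the experts' actual answers, is just one plausible approach:

```python
from statistics import mean, stdev

def drop_outliers(values, z=1.5):
    """Drop values farther than z sample standard deviations from the mean."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) <= z * s]

# Hypothetical Delphi answers with one extreme prediction (year 2100)
predictions = [2024, 2026, 2028, 2025, 2027, 2100]
cleaned = drop_outliers(predictions)  # the 2100 outlier is removed
```

As in the paper, removing a single extreme value pulls the mean prediction down by several years and roughly halves the standard deviation.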
With the final prediction, we again wanted to see how the experts
react when things become “personal” (like in sub-section 4.2.5). An
implanted sensor blurs the distinction between human and machine
– a frequent topic in art, resulting in concepts like “Cyborgs” or
“Replicants”. While the standard deviation is high (M = 2030.8, SD
= 11.9), the mean prognosis of 2030 is not even one generation
away from 2018.
5. Conclusion
Already the vast spectrum of social robots in the Related Work
section shows that we are on the edge of something new: artificial
intelligence and engineering allow robots with social behaviors to
become an everyday phenomenon. At the same time, the results of
the Study section indicate that most experts think that social robots
will eventually address even sensitive societal areas like emotional
companionship. If the experts’ optimistic time predictions are
checked against their skepticism regarding the application of social
robots in society, it becomes obvious that they perceive technology
as fast moving, faster than the necessary ethical discussions (let
alone legal adaptations) can keep pace with.
Society has to decide in which way technology should develop. Do
we just need robotic assistants, supporting activities like cleaning
or serving dinner? Beyond such assistance, there are personal
services like nursing or even companionship which contain
emotional elements. Their application should be guided by ethics
and regulated by law.
However, even if the ethical problems could be solved: if human
society starts substituting service and knowledge work by social
robots, could “artificial intelligence create an unemployment crisis”,
as Ford [6] postulates? If a robot “starts a business”, like the Sophia
robot claims: should law allow that humans work for robots?
Should robots even be allowed to inherit money or property?
If we extrapolate the predicted development of service robots for
personal use, the substitution of humans can potentially spread to
intimate relationships. Relationships with social robots which
authentically simulate love and understanding, even under the most
unlikely circumstances, will surely be much easier than
relationships with human partners. This can have a very damaging
societal impact. It is not surprising that the acceptance of social
robots seems to decrease with the level of face-to-face emotions involved.
Finally, if social robots are getting smarter and more social over
the years: do we still have the right to treat them like
commodities or slaves, or will there be robot rights, analogous to
animal rights and human rights?
This paper aims to shed light on what future social robots should
be like. In the introduction, we illustrated the ambiguity of human
expectations regarding robots. In section 2, we provided a historical
background on robots and automated machinery from Talos to the
mechanical Turk. In the section Related Work, we inspected the
present, discussing current examples of social robots like Paro, the
Care-O-bot or Saudi Arabia’s new “citizen” Sophia.
In the section Study, we looked into the future with the help of 20
experts and presented findings on the desired appearance and
abilities of social robots. We found that the experts consider the
basic technologies for emotion recognition as fairly well developed
(with the exception of brain computer interaction and facial thermal
regions). When it comes to applying emotion sensing in health care,
the acceptance increases with the severity of the patients’ illness or
impairment. When social robots do not only sense but actually help,
the experts seem to approve of health-related tasks (going to the toilet,
taking medicine, drawing blood), whereas they prefer, with high
statistical significance, that nursing tasks (putting patients into
another bed, washing) and small talk be carried out by humans.
On a larger societal scale (beyond healthcare), we found that if
similar types of solutions are already in use, social robots are rated
significantly more acceptable. This applies to the three diverse
areas ‘replacement for pet’, ‘companion for lonely seniors’, and
‘anti-terror measures’. Acceptance decreases highly significantly in
unfamiliar areas: ‘companion for childless couples’, ‘sexual
substitute’, ‘emotional replacement for partner’, and ‘court
statements’. Additionally, acceptance seems to go down with the
level of face-to-face emotions involved.
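The paper reports these acceptance differences as (highly) significant without detailing the underlying test here. As a hedged illustration of how such group differences can be checked without distributional assumptions, the following permutation test uses invented 5-point Likert ratings (all values hypothetical, not the study’s data):

```python
import random
import statistics

random.seed(42)

# Hypothetical 5-point Likert acceptance ratings from 20 experts
# (invented for illustration): a "familiar" application area
# versus an "unfamiliar" one.
familiar   = [5, 4, 4, 5, 3, 4, 5, 4, 3, 4, 5, 4, 4, 3, 5, 4, 4, 5, 3, 4]
unfamiliar = [2, 1, 3, 2, 2, 1, 3, 2, 2, 1, 2, 3, 1, 2, 2, 3, 1, 2, 2, 1]

observed = statistics.mean(familiar) - statistics.mean(unfamiliar)

# Permutation test: how often does a random relabeling of the
# pooled ratings produce a difference at least as large?
pooled = familiar + unfamiliar
n, extreme, trials = len(familiar), 0, 10000
for _ in range(trials):
    random.shuffle(pooled)
    if statistics.mean(pooled[:n]) - statistics.mean(pooled[n:]) >= observed:
        extreme += 1
p = extreme / trials

print(f"mean difference = {observed:.2f}, permutation p ~ {p:.4f}")
```

With ratings this cleanly separated, virtually no random relabeling reaches the observed difference, so the permutation p-value is near zero; real survey data would of course overlap far more.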
In the introduction, we asked whether humans desire social robots
capable of emotion sensing or even ethical reasoning. Looking at
these findings, the experts perceive such capabilities as currently
suited primarily for clinical and health-related situations.
[Figure 14: Delphi predictions on future technology. Care robots: 2024.3; social robots sensing emotions: 2024.5; social robots sensing emotions (in care): 2031.6; implanted sensors: 2030.8]
In the study’s Delphi section, we presented the experts’ predictions
regarding future events. Already for 2024, they forecasted a
relatively widespread use of care robots, as well as emotion-sensing
social robots. However, for emotion-sensing robots in health care,
the prognosis goes up by at least three years, so this area is
considered especially sensitive. Nevertheless, the experts think that
such sensitive areas will eventually be covered: even implanted
sensors are predicted to be fairly common by the year 2031.
If these optimistic time predictions are checked against the
skepticism regarding the application of social robots in society, it
becomes obvious that the experts perceive technology as moving
fast, faster than the necessary ethical discussions (let alone legal
adaptations) can keep pace with. This work helps to make this
discrepancy more transparent and hopefully feeds into a more
informed design of future social robots.
We are well aware that this work is just a glimpse into the future of
social robots. Many of the questions require further elaboration and
discussion – e.g. the use of social robots as an emotional or sexual
substitute or the legal status of social robots and their actions.
We aim to further develop the survey and apply it to a wider
audience, gathering data that are more representative of how
different groups of society perceive social robots. For
example, we want to see how gender or cultural background
influence acceptance. Such findings will inform the future design
of social robots, their appearance and their abilities.
This research is related to the project “Perspectives on Social
Robots” within the AGYA (Arab German Young Academy of
Sciences and Humanities). The first and the third author are AGYA
members. The AGYA is funded by the German Federal Ministry of
Education and Research.
[1] Broekens, J. et al. 2009. Assistive social robots in elderly
care: a review. Gerontechnology. 8, 2 (Apr. 2009).
[2] Davies, N. 2016. Can robots handle your healthcare?
Engineering Technology. 11, 9 (Oct. 2016), 58–61.
[3] Davis, F.D. et al. 1989. User Acceptance of Computer
Technology: A Comparison of Two Theoretical Models.
Management Science. 35, 8 (Aug. 1989), 982–1003.
[4] Elprama, S.A. et al. 2017. Attitudes of Factory Workers
Towards Industrial and Collaborative Robots. Proceedings
of the Companion of the 2017 ACM/IEEE International
Conference on Human-Robot Interaction (New York, NY,
USA, 2017), 113–114.
[5] Entertainment Robot “aibo” Announced: 2017.
105E/index.html. Accessed: 2017-11-17.
[6] Ford, M. 2013. Could artificial intelligence create an
unemployment crisis? Commun. ACM. 56, 7 (Jul. 2013), 37–39.
[7] Graf, B. et al. 2004. Care-O-bot II—Development of a Next
Generation Robotic Home Assistant. Autonomous Robots.
16, 2 (Mar. 2004), 193–205.
[8] Hornbæk, K. and Hertzum, M. 2017. Technology
Acceptance and User Experience: A Review of the
Experiential Component in HCI. ACM Trans. Comput.-
Hum. Interact. 24, 5 (Oct. 2017), 33:1–33:30.
[9] Hose, M. 1997. Deus ex machina.
[10] Hot Robot At SXSW Says She Wants To Destroy Humans:
Accessed: 2016-03-13.
[11] Liao, C.-H. et al. 2008. The Roles of Perceived Enjoyment
and Price Perception in Determining Acceptance of
Multimedia-on-Demand. International Journal of Business
and Information. 3, 1 (2008).
[12] de Lima Salge, C.A. and Berente, N. 2017. Is That Social
Bot Behaving Unethically? Commun. ACM. 60, 9 (Aug.
2017), 29–31.
[13] Linstone, H.A. and Turoff, M. eds. 1975. The Delphi
method: techniques and applications. Addison-Wesley Pub.
Co., Advanced Book Program.
[14] Lokhorst, G.-J.C. 2011. Computational Meta-Ethics:
Towards the Meta-Ethical Robot. Minds and Machines. 21,
2 (May 2011), 261–274.
[15] McGlynn, S. et al. 2014. Therapeutic Robots for Older
Adults: Investigating the Potential of Paro. Proceedings of
the 2014 ACM/IEEE International Conference on Human-
robot Interaction (New York, NY, USA, 2014), 246–247.
[16] Overton, K. 2015. “Hello World”: A Digital Quandary and
the Apotheosis of the Human. Proceedings of the 33rd
Annual ACM Conference Extended Abstracts on Human
Factors in Computing Systems (New York, NY, USA,
2015), 179–179.
[17] Papadopoulos, J. 1994. Talos. Lexicon Iconographicum
Mythologiae Classicae. Artemis & Winkler Verlag.
[18] Riek, L.D. 2017. Healthcare Robotics. Commun. ACM. 60,
11 (Oct. 2017), 68–78.
[19] Šabanović, S. et al. 2013. PARO robot affects diverse
interaction modalities in group sensory therapy for older
adults with dementia. 2013 IEEE 13th International
Conference on Rehabilitation Robotics (ICORR) (Jun.
2013), 1–6.
[20] Scheutz, M. and Arnold, T. 2016. Are We Ready for Sex
Robots? The Eleventh ACM/IEEE International Conference
on Human Robot Interaction (Piscataway, NJ, USA, 2016),
[21] Schmidt, W. 1899. Pneumatica et automata.
[22] Shamsuddin, S. et al. 2012. Initial response of autistic
children in human-robot interaction therapy with humanoid
robot NAO. 2012 IEEE 8th International Colloquium on
Signal Processing and its Applications (Mar. 2012), 188–
[23] Strocka, V.M. 2002. Der Apollon des Kanachos in Didyma
und der Beginn des strengen Stils. Verlag Walter de Gruyter.
[24] Strom, J.H. et al. 2009. Omnidirectional Walking Using
ZMP and Preview Control for the NAO Humanoid Robot.
RoboCup. 5949, (2009), 378–389.
[25] Venkatesh, V. and Davis, F.D. 2000. A Theoretical
Extension of the Technology Acceptance Model: Four
Longitudinal Field Studies. Management Science. 46, 2
(Feb. 2000), 186–204.
[26] Wolfmayr, S. 2009. Die Schiffe vom Lago di Nemi. Diplom.
(2009), 37–39.