Care-O-bot 3 — Rationale for human-robot interaction design
Christopher Parlitz, Martin Hägele, Peter Klein, Jan Seifert and Kerstin Dautenhahn
Fraunhofer IPA
Nobelstr. 12, D-70569 Stuttgart, Germany
Email: {parlitz, haegele}@ipa.fraunhofer.de
User Interface Design GmbH, Teinacher Str. 38, D-71634 Ludwigsburg, Germany
Email: {pklein, jseifert}@uidesign.de
Adaptive Systems Research Group, School of Computer Science, University of Hertfordshire, Hatfield,
Hertfordshire AL10 9AB, UK
Email: K.Dautenhahn@herts.ac.uk
Abstract—This paper presents the design rationale for a new service robot, Care-O-bot 3, which is meant to operate as a companion robot in people's homes. Note that the emphasis in the paper is on the design of a companion robot as a product, not as a research prototype. The design is motivated from a multi-disciplinary viewpoint and compared to other approaches in the field, which often focus on a humanoid appearance. In this paper we put forward an abstract design with iconic features. We argue that such a more 'technomorphic' design may be appealing to a potentially large user group. Potential target user groups of the robot are identified. The realization of such a design, including details on the robot's mobile base and torso, manipulator, sensors, and remote interfaces, is presented in the paper.
I. INTRODUCTION
It makes a great difference whether one pursues a vision or aims to claim a market niche. Both aims are equally admirable, yet not surprisingly they require totally different strategies. We set aside the notion of a humanoid robot in the short run and claim: the humanoid robot as a product belongs to the more distant future. Based on this, we give an account of our strategy for designing a household robot of the nearer future, using established remote user interfaces and creating a unique external design for intuitive human-robot interaction.
II. PHYSICAL LOOK & FEEL: ANTHROPOMORPHISM OR
TECHNOMORPHISM
Humans talk! They talk to other humans whom they suppose to be listening. But they also talk to perfectly unsuited 'fellows': they talk to a cranky car, to a headstrong computer, to their pushy alarm clock, even to a stubborn ketchup bottle. [2] reports that 47% of drivers identify their vehicle by gender and nearly 26% have given it a name; [13] reports similar results for computers. The phenomenon that humans attribute human-like characteristics, motives, or behaviour to inanimate objects is called anthropomorphism.

(This work was partly funded as part of the research project DESIRE by the German Federal Ministry of Education and Research (BMBF) under grant no. 01IME01A, and partially conducted within the Integrated Project COGNIRON ('The Cognitive Robot Companion' - www.cogniron.org), funded by the European Commission Division FP6-IST Future and Emerging Technologies under Contract FP6-002020.)
Anthropomorphism is a constant pattern in human cognition [4], [8], [19], [29]. The interaction of a human with a robot (or any kind of machine) cannot completely elude it. However, we may want to keep such attributions to an absolute minimum. The engineer needs to decide whether to promote an anthropomorphic perception of the robot or to minimise it. According to [16], the uncanny valley suggests either staying in the domain of very non-human, toy-like robots, or creating a robot that appears almost perfectly human-like, because a robot in between may elicit rather fearful responses. Unfortunately, at present the uncanny valley is not a good starting point for robot engineering and lacks a solid empirical foundation [14].
Furthermore, the findings in the literature disagree. The matching hypothesis [9] predicts the most successful human-robot interaction if the robot's appearance matches its role in the interaction. In highly interactive social or playful tasks, participants in a study preferred a human-like robot; in serious, less emotional tasks, however, they preferred a machine-like robot [9]. Similar findings from our own studies are described below. We must be aware that the appearance of the robot communicates its strengths and competences to the user.
Further arguments against a human-like robot can be derived from basic usability principles. Firstly, we need to establish a stable channel of communication. The interaction with a machine shall minimise tendencies towards 'misunderstandings'. The more complex interaction techniques based on gesture or language are still at an early stage of development. Despite the achievements of these technologies, they are still error-prone and cannot provide a stable basis for human-machine communication in real-world applications.
Secondly, successful interaction is not merely a question of establishing a reliable channel. Since we do not want to create an individual personality but a tool to support us in our lives, the interaction with a machine shall satisfy the users' expectations [1]. This requires that the machine interprets its input correctly.
Human-like appearance is likely to trigger expectations that go beyond the actions of a machine. But being humanoid in appearance hardly suffices to meet the expectation of human-like reactions. To achieve this, the robot needs to interpret situations correctly in order to adapt its behavior. This requires elaborate models of cognition and emotion. Even though research makes progress in these matters, e.g. within the Cogniron project, this is not suitable every-day technology yet. Instead, findings suggest that if a machine triggers high expectations concerning its capabilities, the user adapts accordingly and tends to overchallenge the machine [23] while becoming frustrated.
Furthermore, the relation between human and robot gets even more complicated if we expand the focus from the capabilities of the robot to the characteristics of the interaction. Research on human-machine interaction is well established. The interaction between a human-like robot and a human, however, goes far beyond a traditional human-machine relation; in this context, patterns of social behavior become more important [18], [22]. Thus, robot designers also need to be familiar with issues regarding social interaction. At present, however, findings are still too preliminary to serve as design guidelines for a socially acceptable humanoid service robot.
Based on these arguments, we decided against a human-like robot and investigated measures to avoid anthropomorphic attributions and instead support technomorphic perceptions. Nass et al. repeatedly noted measures that support anthropomorphic attributions: use of natural language, display of a face, demonstration of emotion, interactivity, and role acquisition [17], [18].
As summarised and supported by [21], anthropomorphic attributions may be enhanced if the robot permits a clear attribution to a gender. A genderless appearance should improve technomorphic perceptions. It follows that the engineer needs to reflect carefully on the robot's shape and especially the display of a face. Furthermore, language output needs to be carefully designed to avoid gender attributions (provided that this is possible).
Since a face enhances the impression of a human-like interaction 'partner' [17], [27], which we try to avoid, the use of faces is not recommended in our particular approach. Altogether, we conclude that a non-humanoid appearance is called for.
We may now take a closer look at behavioral variables.
In the long run, language and gesture communication will
inevitably replace buttons, keyboards, or touch pads. But,
as noted above, such techniques may amplify interaction
problems. While including some of these techniques on
the robot, we are aiming for an interface guaranteeing an
unambiguous communication channel between human and
robot.
Many interaction problems may be seen as a consequence of anthropomorphic attributions, because such attributions favor the perception of autonomous action [4], [13]. In any situation the robot shall communicate to the user what it is doing and that its actions are derived from the commands of the user. The user's perception of control should receive more attention than usual. The usability literature provides a range of measures, especially a task-oriented interaction design, supplemented by a feedback strategy using informative messages and a continuous status display.
In our approach we avoid emotion-like robot behavior or any display of a distinct, explicit personality. For example, the robot should not describe itself as being 'hungry' when the battery is empty. Status indication shall avoid explicit metaphors taken from living beings. The robot shall not refer to itself by name or as 'I', because this implies the notion of a self and the robot's awareness of its individuality [19]. The same holds for the use of voices [19], [20]. Note that, as we will discuss in more detail below, anthropomorphic projections and interpretations of a robot cannot be avoided completely, due to the inherent tendency of human beings to perceive the world in terms of intentional and motivated entities. From a design stance, however, there is a choice: either to exploit and build on such tendencies, or to avoid any explicit reference to anthropo- or zoomorphic designs. Our choice is the latter.
Despite all the constraints described above, we still want to build a pleasurable device. The robot itself may not be the cause of amusement, but it is meant to be enjoyed among friends. It is a modern lifestyle product: a tool that may make life easier and more comfortable. The design shall be appealing, not too mechanical; we aim for an organic shape and a smooth touch. These considerations provide the basis for appearance and functionality, as well as interaction and interface design.
III. VIEWS OF POTENTIAL USERS OF INTELLIGENT
SERVICE ROBOTS
What are people’s views on the role of an intelligent
service robot in the home? Different studies have investigated
people’s attitudes towards domestic robots. Khan [28] carried
out a survey in order to examine adults’ attitudes towards
an intelligent service robot. Participants were 21-60 years
old, and the majority belonged to the age group 21-30.
Results show that most participants were positive towards the idea of an intelligent service robot and viewed it as a domestic machine or smart, intelligent equipment that can be 'controlled', but is intelligent enough to perform typical household tasks. Participants also preferred a robot that is neutral towards gender and age. Scopelliti et al. [26] investigated
people’s representation of domestic robots across three dif-
ferent generations and found that while young people tend
to have positive feelings towards domestic robots, elderly
people were more frightened of the prospect of a robot in
the home. Studies within the European project Cogniron
assessed people’s attitudes towards robots via questionnaires
following live human-robot interaction trials [7]. Responses
from 28 adults (the majority in the age range 26-45) indicated
that a large proportion of participants were in favour of
a robot companion, but would prefer it to have a role
of an assistant (79%), machine/appliance (71%) or servant
(46%). Few wanted a robot companion to be a ‘friend’. The
majority of the participants wanted the robot to be able to do
household tasks. Also, participants preferred a robot that is
predictable, controllable, considerate and polite. Human-like
communication was desired for a robot companion, however,
human-like behaviour and appearance were less important.
These three studies, conducted in different European coun-
tries, agreed with respect to the desired role of a service
robot in the home: an assistant able to carry out useful tasks,
and not necessarily a ’friend’ with human-like appearance.
Such findings are consistent with the definition of a robot companion, which must be a) able to perform a range of useful tasks or functions, and b) carry out these tasks or functions in a manner that is socially acceptable and comfortable for the people it shares the environment with and/or interacts with [28]. This approach, which we put forward in this paper, complements other approaches that view a robot as a 'pet' or even a 'child substitute', relying on people, as 'caregivers', to bond with and 'care' about the robot; see a discussion of different paradigms in [5].
IV. ROBOT DESIGN
A. Approach
Considering the above, the goal was to create a unique and iconic design for a service robot, conveying an innovative product perception that departs from the humanoid approach. The design intends to convey a future product vision that is very different from existing humanoid robots, and that will create fascination and acceptance for service robots.
To extract necessary functionality, first of all the roles
(Butler, Info-Terminal, Attraction, ...) and typical tasks (Lay
a table, Serve drinks, Fetch and Carry tasks) of the robot
were defined. Simultaneously, available state-of-the-art robot
technology was evaluated. Constraints concerning size and
weight set by a typical household environment had to be
considered. Finally, the experiences made with former robot
developments [10], [11] delivered valuable input. The basic
concept developed was to define two sides of the robot.
One side is called the ‘working side’ and is located at the
back of the robot away from the user. This is where all
technical devices like manipulators and sensors which can
not be hidden and need direct access to the environment are
mounted. The other side is called the ‘serving side’ and is
intended to reduce possible users’ fears of mechanical parts
by having smooth surfaces and a likable appearance. This
is the side where all physical human-robot interaction will
take place. One of the first design sketches can be seen in fig. 1; after several steps of design-technology convergence, a simplified rendering can be seen in fig. 2. Based on these images, the underlying technology was integrated into this shape.
The robot can be divided into the following components:
Robot mobility and base, torso, manipulator, tray and sensor
carrier with sensors.
The robot is driven by four wheels. Each wheel's orientation and rotational speed can be set individually. This gives the robot an omnidirectional drive, enabling advanced movements and simplifying control of the complete kinematic chain (platform-manipulator-gripper). A wheeled drive was chosen over a legged drive for safety (no risk of falling) and stability during manipulation. The base also includes the Li-Ion battery pack (50 V, 60 Ah), laser scanners, and one PC for navigation tasks. The size of the base is mainly determined by the required battery space; the maximal footprint of the robot is approx. 600 mm and the height of the base is approx. 340 mm.

Fig. 1. First design sketch

Fig. 2. First technical rendering
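The inverse kinematics of such a four-wheel omnidirectional drive can be sketched briefly: each wheel must roll along the local velocity of its mounting point, which fixes both its steering angle and its speed. The following is a minimal illustrative sketch; the wheel coordinates are assumptions, not Care-O-bot 3's actual geometry.

```python
import math

# Wheel mounting positions relative to the platform centre (metres).
# These coordinates are illustrative, not the robot's actual layout.
WHEELS = {
    "front_left":  (0.25,  0.25),
    "front_right": (0.25, -0.25),
    "rear_left":   (-0.25,  0.25),
    "rear_right":  (-0.25, -0.25),
}

def wheel_commands(vx, vy, omega):
    """For a desired platform twist (vx, vy in m/s, omega in rad/s),
    return each wheel's (steering angle in rad, speed in m/s).
    Each wheel rolls along the velocity of its mounting point:
    v_wheel = v_platform + omega x r_wheel."""
    commands = {}
    for name, (x, y) in WHEELS.items():
        wx = vx - omega * y   # rotation contributes -omega*y in x
        wy = vy + omega * x   # and +omega*x in y
        commands[name] = (math.atan2(wy, wx), math.hypot(wx, wy))
    return commands
```

For pure translation all wheels align and turn at the same speed; for pure rotation they tangent a circle around the platform centre, which is what makes the drive omnidirectional.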
The torso sits on the base and supports the sensor carrier,
manipulator and tray. It contains most of the electronics and
PCs necessary for robot control. The base and torso together
have a height of 770 mm.
The manipulator used is based on the Schunk LWA3, a 7-DOF light-weight arm. It has been extended by 120 mm to increase the work area, so that the gripper can reach both the floor and a kitchen cupboard. It carries a 6-DOF force-torque sensor and a slim quick-change system between the manipulator and the 7-DOF Schunk Dexterous Hand. The force-torque sensor is used for force-controlled movements such as opening drawers and doors, but also for teaching the robot new tasks through physical interaction with the human. The quick-change system allows the use of other grippers and robotic hands, such as the Schunk Anthropomorphic Hand. The robot hand has tactile sensors in its fingers, making advanced gripping possible. Special attention was paid to the mounting of the arm on the robot torso. The result is based on simulations for finding the ideal work space covering the robot's tray, the floor and the area directly behind the robot, following the 'two sides' concept developed. Since the manipulator has a hollow shaft, no external cables are needed.
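The idea behind force-controlled motion of this kind can be illustrated with a minimal admittance-control sketch: the measured end-effector force is mapped to a Cartesian velocity command, so the arm yields in the direction it is pushed (as when guiding it through a drawer-opening motion or teaching by demonstration). The gain, deadband, and function interface below are illustrative assumptions, not the actual Care-O-bot 3 control software.

```python
# Minimal admittance-control sketch for force-guided motion.
# Gains and interfaces are illustrative placeholders.
ADMITTANCE_GAIN = 0.002   # (m/s) per N: how compliantly the arm yields
DEADBAND_N = 2.0          # ignore forces below the sensor noise level

def admittance_velocity(force_xyz):
    """Map a measured end-effector force (N, per axis) to a Cartesian
    velocity command (m/s). The arm moves in the direction of the push."""
    vel = []
    for f in force_xyz:
        if abs(f) < DEADBAND_N:
            vel.append(0.0)          # inside deadband: hold position
        else:
            vel.append(ADMITTANCE_GAIN * f)
    return vel
```

The deadband keeps the arm stationary under sensor noise, while any deliberate push produces a proportional compliant motion.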
The tray is the main human-robot interface attached to the robot. Experience with former robots showed that passing objects directly from human to robot via a robot's gripper was not satisfactory. The very close interaction necessary for such a task is not simple: the crucial timing of when the object can be released cannot be easily detected by the robot. Similarly, the user is not used to explicitly engaging in a 'passing mode' when handing an object to another person, which would be necessary for a robot; between humans this is done very unconsciously and automatically. We have therefore developed the tray concept as an interface between robot and human for the passing of objects, also integrating a touchscreen for traditional human-computer interaction. If the tray is not used, it can be retracted so that the robot is as compact as possible in stand-by. If the robot needs to hand anything to a human, the object is placed onto the tray and then offered to the human, who can take it whenever convenient. Similarly, a human can place an object onto the robot's tray at any time, without needing to wait for the robot to extend and open its gripper.
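The handover logic above can be sketched as a small state machine: the robot never times a direct gripper-to-hand transfer, it offers the object on the tray and lets the user take it whenever convenient. The states, method names, and the weight-sensor hook are assumptions made for illustration, not the actual control software.

```python
# Sketch of the tray-based handover logic: offer, wait, retract.
RETRACTED, EXTENDED, LOADED = "retracted", "extended", "loaded"

class Tray:
    def __init__(self):
        self.state = RETRACTED

    def offer(self):
        """Robot places an object on the tray and extends it."""
        self.state = LOADED

    def object_taken(self):
        """Hypothetical weight sensor reports the user took the object."""
        if self.state == LOADED:
            self.state = EXTENDED

    def standby(self):
        """Retract for a compact stand-by footprint, but never while
        an object is still waiting on the tray."""
        if self.state != LOADED:
            self.state = RETRACTED
```

The key design choice, mirroring the text, is that the transition out of the loaded state is driven by the user, not by the robot's timing.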
The robot has a sensor carrier carrying high-resolution Firewire stereo-vision cameras and 3D time-of-flight cameras, enabling the robot to identify, locate and track objects and people in 3D. These sensors are mounted on a 4-DOF positioning unit allowing the robot to direct its sensors at any area of interest. In our concept it is very important not to create a face with these sensors, which is very difficult to avoid.

The convergence of the original design idea and the underlying technology can be seen in fig. 3, showing the robot's final appearance.
Fig. 3. Final Design
V. WHY THE ROBOT LOOKS AS IT DOES
Interestingly, while the careful consideration of appear-
ances of service robots has only relatively recently attracted
attention in the robotics community, comic designer Scott
McCloud presents an interesting framework for the design of
cartoon faces, namely a triangular design space along the di-
mensions of realistic/objective, iconic/subjective and abstract
[15]. Applied to robots, androids clearly fall into the realistic/objective corner of the triangle, where researchers attempt to faithfully imitate human appearance (and behaviour). The
iCat (Philips) or Papero (NEC) robots are situated in the
iconic corner which is more oriented towards inviting playful,
entertaining, and more socially-oriented interactions. Numer-
ous examples of robot faces (and bodies) along the realistic-
iconic spectrum exist, however, the abstract dimension is far
less populated. Here, we may find robots that are neither iconic, nor do they closely mimic human or animal shapes (anthropo- or zoomorphic); they are 'something else', comparable in art to, e.g., the work of Picasso and others.
For abstract designs the focus of attention moves from the
meaning of the representation to the representation itself.
Applied to a robot it means that abstract robotic designs are
more likely to be considered as a ‘piece of art’, or a luxury
item, which is the concept we are pursuing. Note, anything
that moves and operates in the physical world, due to its
embodiment, will invite to some extent comparisons with
living beings, as it has already been shown by Heider and
Simmel more than 60 years ago: even abstract geometrical
shapes moving on a computer screen invite anthropomorphic
interpretations [12], [6]. However, there is a clear difference between robot designs that explicitly invite such anthropomorphic projections, and our approach, which makes no such direct attempts (unless the anthropomorphic features are part of the robot's functionality, i.e. possessing an arm, which is a functional necessity). As discussed in [3], [6], designs that are very realistic invite people to consider the robots as 'individuals': e.g. an Android robot will be considered to be of a particular gender, age, personality, background etc., based on its appearance. An iconic design, on the other hand, is far more open to subjective interpretation. For example, a person who prefers a male robot might recognize one in the design, while a person who prefers a female robot might perceive one in the very same design. Thus, iconic designs may appeal to much larger user groups than realistic designs, as they may evoke very different subjective interpretations and psychological projections in their users.
In our view, this work supports a synthesis of abstract design with some iconic features. Correspondingly, Care-O-bot 3 possesses a few iconic human-like features (e.g. an arm), which is important so that people can relate to it and are able to interpret the robot's behaviour, but it possesses an overall very abstract design that focuses on the representation itself, which is very suitable for an expensive high-tech domestic robot.
Based on this design rationale, the next section identifies
the target user groups and presents remote interfaces for the
robot.
VI. TARGET USERS AND DESIGN OF A REMOTE
INTERFACE
With this project we target the area of household helper robots. There exists a smorgasbord of different stakeholders for this scenario (e.g. 'soccer moms', 'techies', etc.). We define the minimum common ground for all users as:
- being open to new technologies
- being experienced with electronic devices (such as PIMs, digital cameras or MP3 players)
We used the scenario based design method [25] to produce
our first interface concept. Each scenario is based on a single
persona [24].
Central to most scenario based designs is a textual descrip-
tion or narrative of a use episode. This description is called a
scenario. The scenario is described from the user’s (persona)
point of view and may include social background, resources,
constraints and background information. The scenario may
describe a currently occurring use, or a potential use that is
being designed.
Based on these premises we explored different scenarios for a touch-pad based interaction concept. The personas used in this phase ranged from millionaires needing an electronic butler, to retired engineers wishing for a technical companion, to diabetic programmers needing a dependable nurse.
Because of the diversity of the personas used, we came up with different hardware solutions, ranging from small-form PDAs to full-size tablet PCs. As diverse as the hardware were the results for the actual user interfaces. The UI represents the traditional gateway to the Care-O-bot 3 hardware; its abilities can be accessed through all designed UI variants.

As an example we will show two versions in less detail and a third variant in more detail. The first version is based on a persona called 'Hartmut von Geiss', a young manager of an IT business. He uses the robot at his home to support his daily housework. Occasionally his robot helps him in multitasking situations: a video phone call from his boss during his dinner while a parcel service is ringing. Figure 4 shows the first design of a UI for this scenario.
Fig. 4. UI for scenario 1
This is a very straightforward design using a small tablet PC with a decent segmentation of the available screen. All designs that are based on a touch screen (designs 1 and 3) take the usual touch-screen norms (e.g. VDI/VDE 3850) into account. Design 2 is based on a PDA and uses the guidelines appropriate for stylus-based input devices. The story behind this design features a persona called 'Fabian Krasse', a diabetic programmer who wants a reliable nurse that fits his technophile lifestyle. The interface (see figure 5) of this scenario is based on a PDA that fits Fabian's way of life and work.
Fig. 5. UI for scenario 2
The last concept presented is based on a persona called 'Patricia van der Dellen' and represents the group of so-called 'soccer moms' - meaning they have the technical equipment but not necessarily the knowledge of the underlying technology. This is a more challenging group of users and leads to an interesting UI concept. The hardware consists of a tablet PC with finger-touch capabilities. The interaction concept is based on various 'genies' that represent the different characteristics and services that the Care-O-bot 3 can offer (see figure 6).
Fig. 6. UI for scenario 3
The different genies cover the following areas: household, entertainment, medical, education, cooking and personal secretary. These clusters are also colour-coded in the UI. To support this user group in an optimal way, we decided to use a more system-guided interaction model, in which all functions are, more or less, presented in a guided way. First impressions of the genie metaphor seem promising. The next step is to evaluate these different concepts in a usability test as soon as the Care-O-bot 3 prototype is available.
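The colour-coded genie clusters can be pictured as a simple routing table from service areas to genies. The six genie names come from the text above; the colour values, function name, and keyword-matching dispatch are purely illustrative assumptions, not the actual UI implementation.

```python
# Illustrative sketch of the colour-coded 'genie' clusters of concept 3.
# Genie names follow the text; colours are arbitrary placeholders.
GENIES = {
    "household":          "#4e79a7",
    "entertainment":      "#f28e2b",
    "medical":            "#e15759",
    "education":          "#76b7b2",
    "cooking":            "#59a14f",
    "personal secretary": "#edc948",
}

def genie_for(service):
    """Route a requested service to its genie by keyword match,
    or None if no genie covers it (hypothetical dispatch logic)."""
    for genie in GENIES:
        if genie in service.lower():
            return genie
    return None
```

Such a mapping would let every screen of the UI reuse one consistent colour per cluster, supporting the system-guided interaction model described above.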
VII. CONCLUSIONS
These days, many laboratories around the world contribute to the body of knowledge in robotics. Most of them focus on isolated aspects of a robot's capabilities, such as manipulation, navigation etc.

But creating an appealing product is not solely a question of bringing individual functions to perfection and assembling them afterwards. So far, few teams ever reach the point where they can even begin to reflect on a fully-fledged product.
When constructing a holistic product for a service robot application, an engineering team faces totally different challenges. Beyond technological expertise, it requires a common vision and an interdisciplinary team with each member committed to it. Hardware engineers, designers, information technologists, psychologists, mathematicians and sociologists need to develop a common understanding of the humans living in this world, of their ideas and tasks. Also, the design team needs a clear idea of who the potential users might be: people who might be interested in changing their lives by acquiring a robot. Knowing the target user group also requires studying and understanding the desires, motives and attitudes of that group. Such a challenging endeavour requires an open-minded viewpoint and an interdisciplinary design team.
This paper highlighted a few challenging issues in the design of a service robot product, i.e. a robot meant to fulfill a role as a useful and socially acceptable companion in people's homes. Placing the robot in domestic environments and testing it with real users will be a future challenge.
REFERENCES
[1] “ISO 9241-10, Ergonomic requirements for office work with visual
display terminals (VDTs) - Part 10: Dialogue principles,” 1996.
[2] J. A. Benfield, W. J. Szlemko, and P. A. Bell, “Driver personality and anthropomorphic attributions of vehicle personality relate to reported aggressive driving tendencies,” Personality and Individual Differences, vol. 42, pp. 247–258, 2007.
[3] M. P. Blow, K. Dautenhahn, A. Appleby, C. L. Nehaniv, and D. Lee,
“Perception of robot smiles and dimensions for human-robot inter-
action design,” in The 15th IEEE International Symposium on Robot
and Human Interactive Communication (RO-MAN06). Hatfield, UK:
IEEE Press, 6-8 September 2006, pp. 469–474.
[4] L. R. Caporael, “Anthropomorphism and mechanomorphism: two faces
of the human machine,” Computers in Human Behaviour, vol. 2, pp.
215–234, 1986.
[5] K. Dautenhahn, “Socially intelligent robots: dimensions of human-
robot interaction,” Philosophical Transactions for the Royal Society
B: Biological Sciences, vol. 362(1480), pp. 679–704, 2007.
[6] K. Dautenhahn and I. Werry, “Towards interactive robots in autism therapy: Background, motivation and challenges,” Pragmatics and Cognition, vol. 12(1), pp. 1–35, 2002.
[7] K. Dautenhahn, S. N. Woods, C. Kaouri, M. L. Walters, K. L. Koay,
and I. Werry, “What is a robot companion - friend, assistant or butler?”
in IROS 2005, IEEE IRS/RSJ International Conference on Intelligent
Robots and Systems, Edmonton, Alberta, Canada, August 2-6 2005,
pp. 1488–1493.
[8] T. J. Eddy, G. G. Gallup, and D. J. Povinelli, “Attribution of cognitive states to animals: Anthropomorphism in comparative perspective,” Journal of Social Sciences, vol. 49, pp. 87–101, 1993.
[9] J. Goetz, S. Kiesler, and A. Powers, “Matching robot appearance and behaviour to task to improve human-robot cooperation,” in Proceedings of the 12th IEEE Workshop on Robot and Human Interactive Communication, Oct. 31–Nov. 2 2003.
[10] B. Graf and O. Barth, “Entertainment robotics: Examples, key tech-
nologies and perspectives,” in IROS-Workshop ”Robots in Exhibi-
tions”, 2002.
[11] M. Hans and B. Graf, “Robotic home assistant Care-O-bot II,” in
Advances in Human-Robot Interaction, E. P. et al., Ed. Heidelberg,
Germany: Springer Berlin / Heidelberg, 2004, pp. 371–384.
[12] F. Heider and M. Simmel, “An experimental study of apparent behaviour,” American Journal of Psychology, vol. 57, pp. 243–259, 1944.
[13] H. Luczak, M. Roetting, and L. Schmidt, “Let’s talk: anthropomorphism as means to cope with stress of interacting with technical devices,” Ergonomics, vol. 46, no. 13/14, pp. 1361–1374, 2003.
[14] K. F. MacDorman, “Androids as an experimental apparatus,” in
Proceedings of CogSci-2005 Workshop: Toward Social Mechanisms
of Android Science, Stresa, Italy, 2005, pp. 106–118.
[15] S. McCloud, Understanding Comics: The Invisible Art. Harper
Collins Publishers, 1993.
[16] M. Mori, “Bukimi no tani [the uncanny valley],” Energy, vol. 7, pp. 33–35, 1970, translated by Karl F. MacDorman and Takashi Minato.
[17] C. Nass, J. S. Steuer, L. Henriksen, and C. Dryer, “Machines and social attributions: Performance assessments of computers subsequent to ‘self-’ or ‘other-’ evaluations,” International Journal of Human-Computer Studies, vol. 40, pp. 543–559, 1994.
[18] C. Nass, “Etiquette equality: exhibitions and expectations of computer
politeness,” Communications of the ACM, vol. 47, no. 4, pp. 35–37,
2004.
[19] C. Nass, J. Steuer, E. Tauber, and H. Reeder, “Anthropomorphism,
agency, and ethopoeia: computers as social actors,” in CHI ’93:
INTERACT ’93 and CHI ’93 conference companion on Human factors
in computing systems. New York, NY, USA: ACM Press, 1993, pp.
111–112.
[20] C. Nass, J. Steuer, and E. R. Tauber, “Computers are social actors,” in CHI ’94: Proceedings of the SIGCHI conference on Human factors in computing systems. New York, NY, USA: ACM Press, 1994, pp. 72–78.
[21] K. L. Nowak and C. Rauh, “The influence of the avatar on online per-
ceptions of anthropomorphism, androgyny, credibility, homophily, and
attraction,” Journal of Computer-Mediated Communication, vol. 11,
no. 1, p. article 8, 2005.
[22] S. Parise, S. Kiesler, L. Sproull, and K. Waters, “Cooperating
with life-like interface agents,” Computers in Human Behavior,
vol. 15, no. 2, pp. 123–142, March 1999. [Online]. Available:
http://dx.doi.org/10.1016/S0747-5632(98)00035-1
[23] J. Pearson, J. Hu, H. P. Branigan, M. J. Pickering, and C. I. Nass,
“Adaptive language behavior in HCI: how expectations and beliefs
about a system affect users’ word choice,” in CHI ’06: Proceedings
of the SIGCHI conference on Human Factors in computing systems.
New York, NY, USA: ACM Press, 2006, pp. 1177–1180.
[24] J. Pruitt and J. Grudin, “Personas: practice and theory,” in DUX ’03:
Proceedings of the 2003 conference on Designing for user experiences.
New York, NY, USA: ACM Press, 2003, pp. 1–15.
[25] M. B. Rosson and J. M. Carroll, Usability engineering: scenario-based
development of human-computer interaction. San Francisco, CA,
USA: Morgan Kaufmann Publishers Inc., 2002.
[26] M. Scopelliti, M. V. Giuliani, A. M. D’Amico, and F. Fornara, “If I
had a robot at home. People’s representation of domestic robots,” in
Designing a more inclusive world, S. Keates, J. Clarkson, P. Langdon,
and P. Robinson, Eds. Springer, 2004, pp. 257–266.
[27] L. Sproull, M. Subramani, S. Kiesler, J. Walker, and K. Waters, “When
the interface is a face,” in Human values and the design of computer
technology, B. Friedman, Ed. Stanford, CA, USA: Center for the
Study of Language and Information, 1996, pp. 163–190.
[28] D. S. Syrdal, K. Dautenhahn, S. N. Woods, M. L. Walters, and K. L.
Koay, “‘Doing the right thing wrong’ – personality and tolerance to
uncomfortable robot approaches,” in The 15th IEEE International
Symposium on Robot and Human Interactive Communication (RO-
MAN06). Hatfield, UK: IEEE Press, 6-8 September 2006, pp. 183–
188.
[29] S. N. K. Watt, “Seeing things as people,” Ph.D. thesis, Knowledge
Media Institute and Department of Psychology, Open University,
Walton Hall, Milton Keynes, UK, 1997.