Care-O-bot 3 — Rationale for human-robot interaction design
Christopher Parlitz∗, Martin Hägele∗, Peter Klein†, Jan Seifert† and Kerstin Dautenhahn‡
∗Fraunhofer IPA
Nobelstr. 12, D-70569 Stuttgart, Germany
Email: {parlitz, haegele}@ipa.fraunhofer.de
†User Interface Design GmbH, Teinacher Str. 38, D-71634 Ludwigsburg, Germany
Email: {pklein, jseifert}@uidesign.de
‡Adaptive Systems Research Group, School of Computer Science, University of Hertfordshire, Hatfield,
Hertfordshire AL10 9AB, UK
Email: K.Dautenhahn@herts.ac.uk
Abstract—This paper presents the design rationale for a
new service robot, Care-O-bot 3, which is meant to operate as
a companion robot in people's homes. Note that the emphasis in
the paper is on the design of a companion robot as a product,
not as a research prototype. The design is motivated from a
multi-disciplinary viewpoint and compared to other approaches
in the field which often focus on a humanoid appearance. In this
paper we put forward an abstract design with iconic features.
We argue that such a more ‘technomorphic’ design may be
appealing to a potentially large user group. Potential target
user groups of the robot are identified. The realization of such
a design, including details on the robot's mobile base and torso,
manipulator, sensors, and remote interfaces, is presented in
the paper.
I. INTRODUCTION
It makes a great difference whether you pursue a vision or whether you want to claim a market niche. Both aims are equally admirable, but not surprisingly they require entirely different strategies. We set aside the notion of the humanoid robot in the short run and claim: the humanoid robot as a product is a matter for the more distant future. Based on this, we would like to give an account of our strategy for designing a household robot of the near future, using established remote user interfaces and creating a unique external design for intuitive human-robot interaction.
II. PHYSICAL LOOK & FEEL: ANTHROPOMORPHISM OR
TECHNOMORPHISM
Humans talk! They talk to other humans whom they assume to be listening. But they also talk to perfectly unsuited 'fellows': to a cranky car, a headstrong computer, a pushy alarm clock, even a stubborn ketchup bottle. According to [2], 47% of drivers identify their vehicle by a gender and nearly 26% have given it a name; [13] reports similar results for computers. The phenomenon that humans attribute human-like characteristics, motives, or behaviour to inanimate objects is called anthropomorphism.

(This work was partly funded as part of the research project DESIRE by the German Federal Ministry of Education and Research (BMBF) under grant no. 01IME01A and partially conducted within the Integrated Project COGNIRON ('The Cognitive Robot Companion', www.cogniron.org), funded by the European Commission Division FP6-IST Future and Emerging Technologies under Contract FP6-002020.)
Anthropomorphism is a constant pattern in human cognition [4], [8], [19], [29]. The interaction of a human with a robot (or any kind of machine) cannot completely elude it. However, we may want to keep such attributions to an absolute minimum. The engineer needs to decide whether to promote an anthropomorphically perceived robot or to minimise such perceptions. According to [16], the uncanny valley would suggest either staying in the domain of very non-human, toy-like robots, or creating a robot that appears almost perfectly human-like, because a robot in between may elicit rather fearful responses. Unfortunately, at present the uncanny valley is not a good starting point for robot engineering, as it lacks a solid empirical foundation [14].
Furthermore, competing views exist. The matching hypothesis [9] predicts the most successful human-robot interaction when the robot's appearance matches its role in the interaction. In highly interactive social or playful tasks, participants in a study preferred a human-like robot; in serious, less emotional tasks, however, they preferred a machine-like robot [9]. Similar findings from our own studies are described below. We must be aware that the appearance of the robot communicates its strengths and competences to the user.
Further arguments against a human-like robot can be derived from basic usability principles. Firstly, we need to establish a stable channel of communication: the interaction with a machine should minimise the potential for 'misunderstandings'. The more complex interaction techniques based on gesture or language are still at an early stage of development. Despite the achievements of these technologies, they remain error-prone and cannot yet provide a stable basis for human-machine communication in real-world applications. Secondly, successful interaction is not merely a question of establishing a reliable channel. Since we do not want to create an individual personality but a tool that supports us in our lives, the interaction with a machine should satisfy the users' expectations [1]. This requires that the machine interprets its input correctly.
A human-like appearance is likely to trigger expectations that go beyond the actions of a machine. But a humanoid appearance hardly suffices to meet the expectation of human-like reactions. To achieve this, the robot needs to interpret situations correctly and adapt its behaviour, which requires elaborate models of cognition and emotion. Even though research is making progress in these matters, e.g. within the Cogniron project, this is not yet suitable every-day technology. Instead, findings suggest that if a machine triggers high expectations concerning its capabilities, the user adapts accordingly and tends to overchallenge the machine [23] while becoming frustrated.
Furthermore, the relation between human and robot gets even more complicated if we expand the focus from the capabilities of the robot to the characteristics of the interaction. Research on human-machine interaction is well established. The interaction between a human-like robot and a human, however, goes far beyond a traditional human-machine relation; in this context, patterns of social behaviour become more important [18], [22]. Thus, robot designers also need to be familiar with issues regarding social interaction. At present, however, findings are still too preliminary to serve as design guidelines for a socially acceptable humanoid service robot.
Based on these arguments, we decided against a human-like robot and investigated measures to avoid anthropomorphic attributions and instead support technomorphic perceptions. Nass et al. repeatedly noted measures that support anthropomorphic attributions: use of natural language, display of a face, demonstration of emotion, interactivity, and role acquisition [17], [18].
As summarised and supported by [21], anthropomorphic attributions may be enhanced if the robot permits a clear attribution of gender; a genderless appearance should therefore strengthen technomorphic perceptions. It follows that the engineer needs to reflect carefully on the robot's shape and especially on the display of a face. Furthermore, language output needs to be carefully designed to avoid gender attributions (provided that this is possible). Since a face enhances the impression of a human-like interaction 'partner' [17], [27], which we try to avoid, the use of faces is not recommended in our particular approach. Altogether, we arrive at the requirement of a non-humanoid appearance.
We may now take a closer look at behavioral variables.
In the long run, language and gesture communication will
inevitably replace buttons, keyboards, or touch pads. But,
as noted above, such techniques may amplify interaction
problems. While including some of these techniques on
the robot, we are aiming for an interface guaranteeing an
unambiguous communication channel between human and
robot.
Many interaction problems may be seen as a consequence of anthropomorphic attributions, because such attributions favour the perception of autonomous actions [4], [13]. In any situation the robot shall communicate to the user what it is doing and that its actions derive from the user's commands. The user's perception of control should therefore receive more attention than usual. The usability literature provides a range of measures, in particular a task-oriented interaction design supplemented by a feedback strategy using informative messages and a continuous status display.
In our approach we avoid emotion-like robot behaviour or any display of a distinct, explicit personality. For example, the robot should not describe itself as being 'hungry' when the battery is empty; status indication shall avoid explicit metaphors taken from living beings. The robot shall not refer to itself as 'I', because this implies the notion of a self and the robot's awareness of its individuality [19]. The same holds for the use of voices [19], [20]. Note, as we will discuss in more detail below, anthropomorphic projections and interpretations of a robot cannot be avoided completely, due to the inherent tendency of human beings to perceive the world in terms of intentional and motivated entities. From a design stance, however, there is a choice: either to exploit and build on such tendencies, or to avoid any explicit reference to anthropo- or zoomorphic designs. Our choice is the latter alternative.
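To illustrate the feedback strategy and the wording rules above, the following is a minimal sketch of how status messages could be generated without a first-person voice or metaphors borrowed from living beings. All names (e.g. format_status, the status fields) are hypothetical and only illustrate the design rule; they are not part of the Care-O-bot 3 software.

```python
# Hypothetical sketch of a status-reporting scheme that follows the design
# rules above: no first-person voice ("I"), no metaphors borrowed from
# living beings (e.g. "hungry"), and every message names the user command
# the current action is derived from.

from dataclasses import dataclass


@dataclass
class RobotStatus:
    """Snapshot of the robot's state as shown on the continuous status display."""
    battery_percent: float
    current_task: str          # e.g. "Serve drink", issued by the user
    progress_percent: float


def format_status(status: RobotStatus) -> str:
    """Return a neutral, task-oriented status line."""
    lines = [
        f"Task '{status.current_task}' (user command): "
        f"{status.progress_percent:.0f} % completed.",
        f"Battery charge: {status.battery_percent:.0f} %.",
    ]
    # Plain technical wording instead of "the robot is hungry".
    if status.battery_percent < 15:
        lines.append("Battery charge low. Charging required soon.")
    return "\n".join(lines)


if __name__ == "__main__":
    print(format_status(RobotStatus(battery_percent=12,
                                    current_task="Serve drink",
                                    progress_percent=60)))
```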
Despite all the constraints described above, we still want to build a pleasurable device. The robot itself need not be a source of amusement, but it should fit into a life enjoyed with friends. It is a modern lifestyle product, a tool that may make life easier and more comfortable. The design shall be appealing and not too mechanical; we aim for an organic shape and a smooth touch. These considerations provide the basis for appearance and functionality, as well as interaction and interface design.
III. VIEWS OF POTENTIAL USERS OF INTELLIGENT
SERVICE ROBOTS
What are people’s views on the role of an intelligent
service robot in the home? Different studies have investigated
people’s attitudes towards domestic robots. Khan [28] carried
out a survey in order to examine adults’ attitudes towards
an intelligent service robot. Participants were 21-60 years
old, and the majority belonged to the age group 21-30.
Results show that most participants were positive towards the idea of an intelligent service robot and viewed it as a domestic machine or a piece of smart equipment that can be 'controlled', yet is intelligent enough to perform typical household tasks. Participants also preferred a robot to be neutral with respect to gender and age. Scopelliti et al. [26] investigated
people’s representation of domestic robots across three dif-
ferent generations and found that while young people tend
to have positive feelings towards domestic robots, elderly
people were more frightened of the prospect of a robot in
the home. Studies within the European project Cogniron
assessed people’s attitudes towards robots via questionnaires
following live human-robot interaction trials [7]. Responses
from 28 adults (the majority in the age range 26-45) indicated
that a large proportion of participants were in favour of
a robot companion, but would prefer it to have a role
of an assistant (79%), machine/appliance (71%) or servant
(46%). Few wanted a robot companion to be a ‘friend’. The
majority of the participants wanted the robot to be able to do
household tasks. Also, participants preferred a robot that is
predictable, controllable, considerate and polite. Human-like communication was desired for a robot companion; human-like behaviour and appearance, however, were less important.
These three studies, conducted in different European coun-
tries, agreed with respect to the desired role of a service
robot in the home: an assistant able to carry out useful tasks,
and not necessarily a ’friend’ with human-like appearance.
Such findings are consistent with the definition of a robot companion, which must a) be able to perform a range of useful tasks or functions, and b) carry out these tasks or functions in a manner that is socially acceptable and comfortable for the people it shares the environment with and/or interacts with [28]. This approach, which we put forward in this paper, complements other approaches that view a robot as a 'pet' or even a 'child substitute', relying on people, as 'caregivers', to bond with and 'care' about the robot; see a discussion of different paradigms in [5].
IV. ROBOT DESIGN
A. Approach
Considering the above, the goal was to create a unique and iconic design for a service robot, conveying an innovative product perception that moves away from a humanoid approach. The design intends to convey a future product vision that is very different from existing humanoid robots and that will create fascination and acceptance for service robots.
To extract the necessary functionality, the roles (Butler, Info-Terminal, Attraction, ...) and typical tasks (lay a table, serve drinks, fetch-and-carry tasks) of the robot were defined first. Simultaneously, available state-of-the-art robot technology was evaluated, and constraints concerning size and weight set by a typical household environment had to be considered. Finally, the experience gained from former robot developments [10], [11] delivered valuable input. The basic concept developed was to define two sides of the robot. One side is called the 'working side' and is located at the back of the robot, away from the user. This is where all technical devices such as manipulators and sensors, which cannot be hidden and need direct access to the environment, are mounted. The other side is called the 'serving side' and is intended to reduce possible users' fears of mechanical parts by having smooth surfaces and a likable appearance. This is the side where all physical human-robot interaction will take place. One of the first design sketches can be seen in fig. 1; after several steps of design-technology convergence, a simplified rendering can be seen in fig. 2. Based on these images the underlying technology was integrated into this shape.
The robot can be divided into the following components:
Robot mobility and base, torso, manipulator, tray and sensor
carrier with sensors.
Fig. 1. First design sketch

Fig. 2. First technical rendering

The robot is driven by four wheels whose orientation and rotational speed can each be set individually. This gives the robot an omnidirectional drive, enabling advanced movements and simplifying control of the complete kinematic chain (platform-manipulator-gripper). A wheeled drive was chosen over legs for safety (no risk of falling) and for stability during manipulation. The base also includes the Li-Ion battery pack (50 V, 60 Ah), the laser scanners and one PC for navigation tasks. The size of the base is mainly defined by the required battery space; nevertheless, the maximum footprint of the robot is approx. 600 mm and the height of the base is approx. 340 mm.
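As a minimal sketch of how such an omnidirectional base can be commanded, the following code computes a steering angle and wheel speed for each of four independently steered wheels from a desired platform velocity (vx, vy, omega). The wheel positions, wheel radius and function names are assumptions for illustration, not the actual Care-O-bot 3 control software.

```python
# Sketch of the inverse kinematics of a four-wheel omnidirectional base:
# each wheel's steering angle and rotational speed follow from the desired
# platform twist (vx, vy, omega). Geometry values are illustrative only.

import math

WHEEL_RADIUS = 0.075  # m, assumed
# Wheel mounting positions relative to the platform centre (x, y) in metres.
WHEEL_POSITIONS = [(0.25, 0.25), (0.25, -0.25), (-0.25, 0.25), (-0.25, -0.25)]


def wheel_commands(vx: float, vy: float, omega: float):
    """Return (steering angle [rad], wheel speed [rad/s]) for every wheel."""
    commands = []
    for (x, y) in WHEEL_POSITIONS:
        # Velocity of the wheel contact point = platform velocity + omega x r.
        wx = vx - omega * y
        wy = vy + omega * x
        angle = math.atan2(wy, wx)
        speed = math.hypot(wx, wy) / WHEEL_RADIUS
        commands.append((angle, speed))
    return commands


if __name__ == "__main__":
    # Drive diagonally while slowly turning on the spot.
    for i, (a, s) in enumerate(wheel_commands(0.3, 0.1, 0.2)):
        print(f"wheel {i}: steer {math.degrees(a):6.1f} deg, {s:5.2f} rad/s")
```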
The torso sits on the base and supports the sensor carrier,
manipulator and tray. It contains most of the electronics and
PCs necessary for robot control. The base and torso together
have a height of 770 mm.
The manipulator is based on the Schunk LWA3, a 7-DOF light-weight arm. It has been extended by 120 mm to increase the workspace so that the gripper can reach both the floor and a kitchen cupboard. It carries a 6-axis force-torque sensor and a slim quick-change system between the manipulator and the 7-DOF Schunk Dexterous Hand. The force-torque sensor is used for force-controlled movements such as opening drawers and doors, but also for teaching the robot new tasks through physical interaction with the human. The quick-change system allows the use of other grippers and robotic hands such as the Schunk Anthropomorphic Hand. The robot hand has tactile sensors in its fingers, making advanced gripping possible. Special attention was paid to the mounting of the arm on the robot torso; the result is based on simulations to find the ideal workspace covering the robot's tray, the floor and the area directly behind the robot, following the 'two sides' concept. Since the manipulator has a hollow shaft, no external cables are needed.
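The use of the force-torque sensor for compliant motions (e.g. opening drawers) or for kinesthetic teaching can be pictured as a simple admittance loop in which measured forces are mapped to small velocity commands. The gains, deadbands and function names below are illustrative assumptions, not the Care-O-bot 3 implementation.

```python
# Minimal admittance-control sketch: the measured 6-DOF wrench at the wrist
# is turned into a Cartesian velocity command, so the arm "gives way" to
# forces applied by the environment or by a person teaching a new task.
# All gains and interfaces are illustrative assumptions.

from typing import List, Sequence

ADMITTANCE_GAIN = [0.002] * 3 + [0.01] * 3   # (m/s)/N and (rad/s)/Nm
DEADBAND = [2.0] * 3 + [0.5] * 3             # ignore sensor noise below this


def admittance_velocity(wrench: Sequence[float]) -> List[float]:
    """Map a wrench [Fx, Fy, Fz, Tx, Ty, Tz] to a Cartesian velocity command."""
    command = []
    for value, gain, dead in zip(wrench, ADMITTANCE_GAIN, DEADBAND):
        if abs(value) < dead:
            command.append(0.0)
        else:
            command.append(gain * value)
    return command


if __name__ == "__main__":
    # A person pulls the gripper with 20 N along x and twists it about z.
    print(admittance_velocity([20.0, 0.0, 1.0, 0.0, 0.0, 0.8]))
```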
The tray is the main human-robot interface attached to the robot. Experience with former robots showed that passing objects directly from human to robot via the robot's gripper was not satisfactory. The very close interaction necessary for such a task is not simple: the crucial moment when the object can be released cannot easily be detected by the robot. Similarly, the user is not used to explicitly engaging in a 'passing mode' when handing an object to another person, which is necessary for a robot; between humans this is done quite unconsciously and automatically. We have therefore developed the tray concept as an interface between robot and human for passing objects, and also integrated a touchscreen into it for traditional human-computer interaction. If the tray is not used, it can be retracted so that the robot is as compact as possible in stand-by. If the robot needs to hand anything to a human, the object is placed onto the tray and then offered to the human, who can take it whenever it suits them. Similarly, a human can place an object onto the robot's tray at any time, without needing to wait for the robot to extend and open its gripper.
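The tray-based hand-over can be described as a small state machine: the tray is retracted in stand-by, extended to offer or receive an object, and retracted again once the exchange is complete. The states, events and transitions below are a hypothetical sketch of this interaction, not the actual Care-O-bot 3 implementation.

```python
# Hypothetical state machine for the tray-based object hand-over:
# the robot places an object on the tray and offers it, or the user places
# an object on the tray; no explicit "passing mode" via the gripper is needed.

from enum import Enum, auto


class TrayState(Enum):
    RETRACTED = auto()   # stand-by, robot as compact as possible
    OFFERING = auto()    # tray extended with an object for the user
    RECEIVING = auto()   # tray extended, waiting for the user to place an object


def next_state(state: TrayState, event: str) -> TrayState:
    """Advance the tray state machine on a single event."""
    transitions = {
        (TrayState.RETRACTED, "offer_object"): TrayState.OFFERING,
        (TrayState.RETRACTED, "request_object"): TrayState.RECEIVING,
        (TrayState.OFFERING, "object_taken"): TrayState.RETRACTED,
        (TrayState.RECEIVING, "object_placed"): TrayState.RETRACTED,
    }
    return transitions.get((state, event), state)


if __name__ == "__main__":
    s = TrayState.RETRACTED
    for e in ["offer_object", "object_taken", "request_object", "object_placed"]:
        s = next_state(s, e)
        print(e, "->", s.name)
```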
The robot has a sensor carrier with high-resolution Firewire stereo-vision cameras and 3D time-of-flight cameras, enabling the robot to identify, locate and track objects and people in 3D. These sensors are mounted on a 4-DOF positioning unit that allows the robot to direct its sensors towards any area of interest. In our concept it is very important not to create a face with these sensors, although this is difficult to avoid.
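Directing the sensor carrier towards an area of interest essentially amounts to computing pan and tilt angles from the target position relative to the sensor head. The sketch below assumes a simple pan-tilt pair out of the four DOF; the frame convention and function name are illustrative assumptions.

```python
# Sketch: compute pan and tilt angles so that the sensor carrier looks at a
# 3D point of interest given in the sensor-head base frame. Only two of the
# four DOF are used here; the geometry is an illustrative simplification.

import math
from typing import Tuple


def look_at(target_x: float, target_y: float, target_z: float) -> Tuple[float, float]:
    """Return (pan, tilt) in radians for a target point in the head frame."""
    pan = math.atan2(target_y, target_x)
    horizontal_dist = math.hypot(target_x, target_y)
    tilt = math.atan2(target_z, horizontal_dist)
    return pan, tilt


if __name__ == "__main__":
    # Object detected 1.5 m ahead, 0.4 m to the left, 0.3 m below eye level.
    pan, tilt = look_at(1.5, 0.4, -0.3)
    print(f"pan {math.degrees(pan):.1f} deg, tilt {math.degrees(tilt):.1f} deg")
```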
The convergence of the original design idea and the underlying technology can be seen in fig. 3, which shows the robot's final appearance.
Fig. 3. Final Design
V. WHY THE ROBOT LOOKS AS IT DOES
Interestingly, while the careful consideration of the appearance of service robots has only relatively recently attracted attention in the robotics community, comics artist Scott McCloud presents an interesting framework for the design of cartoon faces, namely a triangular design space along the dimensions realistic/objective, iconic/subjective and abstract [15]. Applied to robots, androids clearly fall in the realistic (objective) corner of the triangle, where researchers attempt to faithfully imitate human appearance (and behaviour). The iCat (Philips) or Papero (NEC) robots are situated in the iconic corner, which is more oriented towards inviting playful, entertaining, and more socially-oriented interactions. Numerous examples of robot faces (and bodies) along the realistic-iconic spectrum exist; the abstract dimension, however, is far less populated. Here we find robots that are neither iconic nor closely mimic human or animal shapes (anthropo- or zoomorphic); they are 'something else', comparable in art, e.g., to the work of Picasso and others.
For abstract designs the focus of attention moves from the meaning of the representation to the representation itself. Applied to a robot, this means that abstract robotic designs are more likely to be considered a 'piece of art' or a luxury item, which is the concept we are pursuing. Note that anything that moves and operates in the physical world will, due to its embodiment, invite comparisons with living beings to some extent, as Heider and Simmel showed more than 60 years ago: even abstract geometrical shapes moving on a screen invite anthropomorphic interpretations [12], [6]. However, there is a clear difference between robot designs that explicitly invite such anthropomorphic projections and our approach, which does not make any such direct attempts (unless the anthropomorphic features are part of the robot's functionality, i.e. possessing an arm, which is a functional necessity). As discussed in [3], [6], very realistic designs invite people to consider the robots as 'individuals'; e.g., an android robot will be considered to be of a particular gender, age, personality, background etc., based on its appearance. An iconic design, on the other hand, is far more open to subjective interpretation. For example, a person who prefers a male robot might recognize it in the design, while a person who prefers a female robot might perceive that just as well in the same design. Thus, iconic designs may appeal to much larger user groups than realistic designs, as they may evoke very different subjective interpretations and psychological projections by their users. In our view, this supports a synthesis of an abstract design with some iconic features. Correspondingly, Care-O-bot 3 possesses a few iconic human-like features (e.g. an arm), which is important so that people can relate to it and are able to interpret the robot's behaviour, but it possesses an overall very abstract design that focuses on the representation, which is very suitable for an expensive high-tech domestic robot.
Based on this design rationale, the next section identifies
the target user groups and presents remote interfaces for the
robot.
VI. TARGET USERS AND DESIGN OF A REMOTE
INTERFACE
With this project we target the area of household helper robots. There exists a smorgasbord of different stakeholders for this scenario (e.g. 'soccer moms', 'techies', etc.). We define the minimum common ground for all users as:
• being open to new technologies
• being experienced with electronic devices (such as PIMs, digital cameras or MP3 players)
We used the scenario-based design method [25] to produce our first interface concept. Each scenario is based on a single persona [24]. Central to most scenario-based designs is a textual description or narrative of a use episode, called a scenario. The scenario is described from the persona's point of view and may include social background, resources, constraints and further background information. The scenario may describe a currently occurring use or a potential use that is being designed.
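To make these design artefacts concrete, a persona and a scenario can be captured in simple records such as the following sketch; the fields and example values are hypothetical and only illustrate the kind of information each scenario carries.

```python
# Hypothetical records for scenario-based design: each scenario is written
# from the point of view of exactly one persona and lists the context
# (background, resources, constraints) relevant to the use episode.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Persona:
    name: str
    background: str
    devices: List[str] = field(default_factory=list)   # e.g. PDA, tablet PC


@dataclass
class Scenario:
    persona: Persona
    narrative: str            # textual description of the use episode
    constraints: List[str] = field(default_factory=list)


if __name__ == "__main__":
    hartmut = Persona("Hartmut von Geiss",
                      "young IT manager, uses the robot for daily housework",
                      devices=["small tablet PC"])
    s = Scenario(hartmut,
                 "Video call from his boss during dinner while the parcel "
                 "service rings at the door; the robot assists in parallel.",
                 constraints=["touch-screen interaction"])
    print(s.persona.name, "-", s.narrative[:40] + "...")
```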
Based on these premises we explored different scenarios for a touch-pad based interaction concept. The personas used in this phase ranged from millionaires needing an electronic butler, and retired engineers wishing for a technical companion, to diabetic programmers needing a dependable nurse. Because of the diversity of the personas, we came up with different hardware solutions, ranging from small-form PDAs to full-size tablet PCs. As diverse as the hardware were the resulting user interfaces. The UI represents the traditional gateway to the Care-O-bot 3 hardware; its abilities can be accessed through all designed UI variants.
As an example we will show two versions in less detail and a third variant in more detail. The first version is based on a persona called 'Hartmut von Geiss', a young manager of an IT business who uses the robot at home to support his daily housework. His robot also helps him in multitasking situations: a video phone call from his boss during dinner while the parcel service is ringing at the door. Figure 4 shows the first design of a UI for this scenario.
Fig. 4. UI for scenario 1
This is a very straightforward design using a small tablet PC with a clean segmentation of the available screen. All designs that are based on a touch screen (designs 1 and 3) take the usual touch-screen norms (e.g. VDI/VDE 3850) into account. Design 2 is based on a PDA and uses the guidelines that are appropriate for stylus-based input devices. The story behind this design features a persona called 'Fabian Krasse', a diabetic programmer who wants a reliable nurse that fits his technophile lifestyle. The interface of this scenario (see figure 5) is based on a PDA that fits Fabian's way of life and work.
Fig. 5. UI for scenario 2
The last concept presented is based on a persona called 'Patricia van der Dellen' and represents the group of so-called 'soccer moms', meaning they own the technical equipment but do not necessarily know the underlying technology. This is a more challenging group of users and leads to an interesting UI concept. The hardware consists of a tablet PC with finger-touch capabilities. The interaction concept is based on various 'genies' that represent the different characteristics and services that the Care-O-bot 3 can offer (see figure 6).
Fig. 6. UI for scenario 3
The different genies cover the following areas: household, entertainment, medical, education, cooking and personal secretary. These clusters are also colour-coded in the UI, sketched after this paragraph. To support this user group in an optimal way we decided to use a more system-guided interaction model; all functions are, more or less, offered in a guided way. First impressions of the genie metaphor seem promising. The next step is to evaluate these different concepts in a usability test as soon as the Care-O-bot prototype is available.
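The genie metaphor can be sketched as a simple mapping from the colour-coded service clusters to genies and their offered services; the class, colour values and service names below are invented for illustration and do not reflect the actual Care-O-bot 3 interface.

```python
# Hypothetical configuration for the genie-based UI: each genie represents
# one colour-coded service cluster of the robot (household, entertainment,
# medical, education, cooking, personal secretary).

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Genie:
    cluster: str
    colour: str          # colour code used in the UI (illustrative values)
    services: List[str]


GENIES: Dict[str, Genie] = {
    "household": Genie("household", "#4caf50", ["lay the table", "fetch and carry"]),
    "cooking": Genie("cooking", "#ff9800", ["suggest recipe", "serve drinks"]),
    "medical": Genie("medical", "#f44336", ["medication reminder"]),
}


def services_for(cluster: str) -> List[str]:
    """Look up the services a genie offers; empty list if the cluster is unknown."""
    genie = GENIES.get(cluster)
    return genie.services if genie else []


if __name__ == "__main__":
    print(services_for("household"))
```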
VII. CONCLUSIONS
These days, many laboratories around the world contribute to knowledge in robotics. Most of them focus on isolated aspects of a robot's capabilities, such as manipulation, navigation, etc. But creating an appealing product is not solely a question of bringing individual functions to perfection and assembling them afterwards. So far, few teams ever reach the point where they can even start to think about a fully-fledged product.
When constructing a holistic product for a service robot application, an engineering team faces quite different challenges. Beyond technological expertise, it requires a common vision and an interdisciplinary team with every member committed to it. Hardware engineers, designers, information technologists, psychologists, mathematicians and sociologists need to develop a common understanding about the humans living in this world, about their ideas and tasks. Also, the design team needs a clear idea of who the potential users might be: people who might be interested in changing their lives by acquiring a robot. Knowing the target user group also requires studying and understanding the desires, motives and attitudes of that group. Such a challenging endeavour requires an open-minded viewpoint and an interdisciplinary design team.
This paper has highlighted a few challenging issues in the design of a service robot product, i.e. a robot meant to fulfil a role as a useful and socially acceptable companion in people's homes. Placing the robot in domestic environments and testing it with real users will be a future challenge.
REFERENCES
[1] “ISO 9241-10, Ergonomic requirements for office work with visual
display terminals (VDTs) - Part 10: Dialogue principles,” 1996.
[2] J. A. Benfield, W. J. Szlemko, and P. A. Bell, “Driver personality and
anthropomorphic attributions of vehicle personality relate to reported
aggressive driving tendencies,” Personality and individual Differences,
vol. 42, pp. 247–258, 2007.
[3] M. P. Blow, K. Dautenhahn, A. Appleby, C. L. Nehaniv, and D. Lee,
“Perception of robot smiles and dimensions for human-robot inter-
action design,” in The 15th IEEE International Symposium on Robot
and Human Interactive Communication (RO-MAN06). Hatfield, UK:
IEEE Press, 6-8 September 2006, pp. 469–474.
[4] L. R. Caporael, “Anthropomorphism and mechanomorphism: two faces
of the human machine,” Computers in Human Behaviour, vol. 2, pp.
215–234, 1986.
[5] K. Dautenhahn, “Socially intelligent robots: dimensions of human-
robot interaction,” Philosophical Transactions for the Royal Society
B: Biological Sciences, vol. 362(1480), pp. 679–704, 2007.
[6] K. Dautenhahn and I. Werry, “Towards interactive robots in autism
therapy: Background, motivation and challenges,” Pragmatics and
Cognition, vol. 12(1), pp. 1–35, 2002.
[7] K. Dautenhahn, S. N. Woods, C. Kaouri, M. L. Walters, K. L. Koay,
and I. Werry, “What is a robot companion - friend, assistant or butler?”
in IROS 2005, IEEE IRS/RSJ International Conference on Intelligent
Robots and Systems, Edmonton, Alberta, Canada, August 2-6 2005,
pp. 1488–1493.
[8] T. J. Eddy, G. G. Gallup, and D. J. Povinelli, “Attribution of cognitive
states to animals: Anthropomorphism in comparative perspective,”
Journal of Social Sciences, vol. 49, pp. 87–101, 1993.
[9] J. Goetz, S. Kiesler, and A. Powers, “Matching robot appearance and
behaviour to task to improve human-robot cooperation,” in Proceed-
ings of the 12th IEEE Workshop on Robot and Human Interactive
Communication, vol. IXX, Oct. 31–Nov. 2 2003.
[10] B. Graf and O. Barth, “Entertainment robotics: Examples, key tech-
nologies and perspectives,” in IROS-Workshop ”Robots in Exhibi-
tions”, 2002.
[11] M. Hans and B. Graf, “Robotic home assistant Care-O-bot II,” in
Advances in Human-Robot Interaction, E. P. et al., Ed. Heidelberg,
Germany: Springer Berlin / Heidelberg, 2004, pp. 371–384.
[12] F. Heider and M. Simmel, “An experimental study of apparent be-
haviour,” American Journal of Psychology, vol. 57, pp. 243–259, 1944.
[13] H. Luczak, M. Roetting, and L. Schmidt, “Let’s talk: anthropomor-
phism as means to cope with stress of interacting with technical
devices,” Ergonomics, vol. 46, no. 13/14, pp. 1361–1374, 2003.
[14] K. F. MacDorman, “Androids as an experimental apparatus,” in
Proceedings of CogSci-2005 Workshop: Toward Social Mechanisms
of Android Science, Stresa, Italy, 2005, pp. 106–118.
[15] S. McCloud, Understanding Comics: The Invisible Art. Harper
Collins Publishers, 1993.
[16] M. Mori, "Bukimi no tani [The uncanny valley]," Energy, vol. 7, pp. 33–35, 1970, translated by Karl F. MacDorman and Takashi Minato.
[17] C. Nass, J. S. Steuer, L. Henriksen, and C. Dryer, “Machines and
social attributions: Performance assessments of computers subsequent
to ‘self-’ or ‘other-’ evaluations,” International Journal of Human-
Computer Studies, vol. 40, pp. 543–559, 1994.
[18] C. Nass, “Etiquette equality: exhibitions and expectations of computer
politeness,” Communications of the ACM, vol. 47, no. 4, pp. 35–37,
2004.
[19] C. Nass, J. Steuer, E. Tauber, and H. Reeder, “Anthropomorphism,
agency, and ethopoeia: computers as social actors,” in CHI ’93:
INTERACT ’93 and CHI ’93 conference companion on Human factors
in computing systems. New York, NY, USA: ACM Press, 1993, pp.
111–112.
[20] C. Nass, J. Steuer, and E. R. Tauber, “Computers are social actors,”
in CHI ’94: Proceedings of the SIGCHI conference on Human factors
in computing systems. New York, NY, USA: ACM Press, 1994, pp.
72–78.
[21] K. L. Nowak and C. Rauh, “The influence of the avatar on online per-
ceptions of anthropomorphism, androgyny, credibility, homophily, and
attraction,” Journal of Computer-Mediated Communication, vol. 11,
no. 1, p. article 8, 2005.
[22] S. Parise, S. Kiesler, L. Sproull, and K. Waters, “Cooperating
with life-like interface agents,” Computers in Human Behavior,
vol. 15, no. 2, pp. 123–142, March 1999. [Online]. Available:
http://dx.doi.org/10.1016/S0747-5632(98)00035-1
[23] J. Pearson, J. Hu, H. P. Branigan, M. J. Pickering, and C. I. Nass,
“Adaptive language behavior in hci: how expectations and beliefs
about a system affect users’ word choice,” in CHI ’06: Proceedings
of the SIGCHI conference on Human Factors in computing systems.
New York, NY, USA: ACM Press, 2006, pp. 1177–1180.
[24] J. Pruitt and J. Grudin, “Personas: practice and theory,” in DUX ’03:
Proceedings of the 2003 conference on Designing for user experiences.
New York, NY, USA: ACM Press, 2003, pp. 1–15.
[25] M. B. Rosson and J. M. Carroll, Usability engineering: scenario-based
development of human-computer interaction. San Francisco, CA,
USA: Morgan Kaufmann Publishers Inc., 2002.
[26] M. Scopelliti, M. V. Giuliani, A. M. D'Amico, and F. Fornara, "If I had a robot at home: people's representation of domestic robots," in
Designing a more inclusive world, S. Keate, J. Clarkson, P. Langdon,
and P. Robinson, Eds. Springer, 2004, pp. 257–266.
[27] L. Sproull, M. Subramani, S. Kiesler, J. Walker, and K. Waters, “When
the interface is a face,” in Human values and the design of computer
technology, B. Friedman, Ed. Stanford, CA, USA: Center for the
Study of Language and Information, 1996, pp. 163–190.
[28] D. S. Syrdal, K. Dautenhahn, S. N. Woods, M. L. Walters, and K. L.
Koay, “’doing the right thing wrong’ - personality and tolerance to
uncomfortable robot approaches,” in The 15th IEEE International
Symposium on Robot and Human Interactive Communication (RO-
MAN06). Hatfield, UK: IEEE Press, 6-8 September 2006, pp. 183–
188.
[29] S. N. K. Watt, “Seeing things as people,” Ph.D. Thesis, Knowledge
Media Institute and Department of Psychology, Open University
Walton Hall Milton Keynes, UK, 1997.