Conference Paper

Projecting Life Onto Robots: The Effects of Cultural Factors and Design Type on Multi-Level Evaluations of Robot Anthropomorphism

Authors: Haodan Tan, Dakuo Wang, Selma Sabanovic
Abstract—Existing research has shown that people often
attribute human-like characteristics to robots, a phenomenon
generally known as anthropomorphism. We use the
notion of “multi-dimensional anthropomorphism” to perform a
more fine-grained analysis of anthropomorphism in relation to
robots along several dimensions (e.g., uniquely and
typically human, being alive or not, having emotions or not).
Additionally, we expand on existing work, which has mostly
focused on organism-based robot designs, by including
object-based robot designs in our study of robot
anthropomorphism. The results of an online survey study with
775 U.S. (393) and Chinese (382) participants show how people’s
personal characteristics (e.g., nationality) affect their
perceptions of the anthropomorphism of robots, and how such
perceptions differ between organism- and object-based robot
designs. These effects on people’s multi-dimensional
anthropomorphism perceptions suggest new design
implications for robotic technologies.
I. INTRODUCTION
Anthropomorphism refers to the phenomenon whereby people
tend to ascribe human-like traits to non-living things [6]. The
Human-Robot Interaction (HRI) community has devoted
significant effort to understanding the causes, impacts, and design
implications of anthropomorphism. For example, researchers
have found that human-like design features of the robot (e.g., a
human-like head [5]), or the personal traits of users (e.g.,
cultural background [20]) may impact the anthropomorphism
ascribed by the user to the robot, and influence how they
interact. These findings have guided the design of various
robots, particularly those meant to be deployed in social
scenarios (e.g., elderly care) [10].
The existing literature on anthropomorphism has primarily
focused on users’ interactions with human-like or animal-like
robots. Recently, however, object-like robots have become
increasingly popular. Kwak et al. [23] categorized robot designs into two
categories: organism-based robot design and object-based
robot design. Organism-based robot design conceptualizes a
robot to resemble a living form. Human-like or animal-like
robot designs belong to this category. The Pleo dinosaur toy is
an example of this design approach. On the other hand,
object-based robot design augments ordinary objects with
robotic technologies to make them intelligent and interactive.
Haodan Tan is with the School of Informatics, Computing, and
Engineering, Indiana University, Bloomington, IN 47401 USA.
(corresponding author; phone: 408-839-0436; e-mail:
haodtan@indiana.edu).
Dakuo Wang is a Research Scientist with IBM T.J. Watson Research,
Yorktown Heights, NY 10598 USA (e-mail: dakuo.wang@ibm.com).
Selma Sabanovic is with the School of Informatics, Computing, and
Engineering, Indiana University, Bloomington, IN 47401 USA. (e-mail:
selmas@indiana.edu).
iRobot’s Roomba is one such example, incorporating
intelligence and agency into a vacuum cleaner.
To the best of our knowledge, anthropomorphism in
object-based robots has not been sufficiently studied, as
most prior research concerns organism-based robots. Existing
research on the anthropomorphism of organism-based robots tells
us that perceptions of anthropomorphism rely on the
human-like or animal-like features (e.g., a head or talking) of
organism-based robots [8]. Such design features are very
different, if not absent, in object-based robots. Thus, we
may expect to see a different understanding of how people
perceive such robots. In this paper, we aim to understand how
people’s perceptions of anthropomorphism differ when they
experience the two different types of robot designs.
Perception is a subjective experience; it therefore varies
among individuals. In this paper, we also
explore the personal factors that may affect a person’s
anthropomorphic perception, in addition to the two robot
design types. Existing research on organism-based robots sheds
light on candidate personal factors: a
person’s cultural background [1, 25, 33], religious beliefs [26],
and prior experience with technology [9]. We examine
whether these factors affect anthropomorphic perceptions in
the context of object-based robots.
While early research used a “yes or no” model to describe
people’s tendency to anthropomorphize, recent work proposes
cognitive models with multi-dimensional scales for
anthropomorphic perception (e.g., [9] and [14]). For example,
Haslam et al. [14] propose that the perception of
anthropomorphism should distinguish typically-human
characteristics from uniquely-human characteristics, as people
may perceive them differently. Thus, the measurement of
users’ perceptions should be scored along these scales. In this
paper, we examine the impact of design and personal factors
on such multiple dimensions of anthropomorphic perception.
To provide a systematic examination of the interactions of
robot design type, personal factors, and multi-layered
anthropomorphic perception, we conducted a large-scale
survey study to explore how people from different cultural
backgrounds (Chinese and American) ascribe
anthropomorphic characteristics to two types of robots
(object-based robot and organism-based robot). We also
measured the personal factors of gender, age, religion, and
prior experience with technology. This study aims to extend
our understanding of anthropomorphic perception to
incorporate multiple dimensions for more nuanced analysis,
and to suggest implications for future robot designs.
II. RELATED WORK
In this section, we will review HRI studies in three
sub-domains: people’s perception of anthropomorphism, the
effect of robot design, and the effect of personal factors on
anthropomorphism.
A. The Perception of Anthropomorphism
Anthropomorphism affects how people experience and
interact with robots. Anthropomorphizing a robot might
lead people to cherish it [11], trust it more [42], take more
responsibility for maintaining it [19], or tolerate its
mistakes more [16]. These findings suggest that the
perception of anthropomorphism can facilitate human-robot
interaction. Therefore, it is crucial to further study how to
elicit anthropomorphic perception and measure it for a variety
of robot designs.
Previous studies have used several questionnaires to
measure anthropomorphism. The Godspeed questionnaire,
designed by Bartneck et al. [2], uses five items to gauge
people’s feelings towards an artifact (e.g., whether the
artifact is fake or natural), then averages the five items
into a single anthropomorphism score. Heerink et al. [15]
created a questionnaire based on the Technology Acceptance
Model (TAM) and the Unified Theory of Acceptance and Use
of Technology (UTAUT). In the questionnaire, perceived
sociability refers to the human-like social cues of a robot.
Ruijten et al. [32] proposed a questionnaire to measure the
perception of anthropomorphism by categorizing it into
typically human characteristics and uniquely human
characteristics. Waytz et al. [43] created the Individual
Differences on Anthropomorphism Questionnaire (IDAQ) to
understand differences in people’s propensity to
anthropomorphize more generally.
In many cases, anthropomorphism was measured as one
aggregated score, such as in the Godspeed questionnaire.
Many studies using this measurement found a linear
relationship between the humanlike-ness of robots and
people’s anthropomorphism ratings (e.g., [5, 16, 22]).
In recent years, several researchers proposed more
complex views of anthropomorphism. Haslam et al. [14]
conceptualize the perception of anthropomorphism as a
continuum that ranges from typically human characteristics to
uniquely human characteristics. In this model,
typically-human characteristics are general lifelike
characteristics, while uniquely-human characteristics refer to
traits that are unique to the human species. Furthermore, some
researchers argue that the phenomenon of anthropomorphism
is multi-layered. Persson et al. [29] argue that
“anthropomorphism should mean different things on different
levels.” By this, they mean that people ascribe lifelikeness at
different levels, from primitive categorization (merely alive or
not alive), to attributing emotion or intention, to assigning social
roles. Fink [9] similarly proposed lower and higher levels of
anthropomorphism, echoing Haslam et al.’s theory [14].
If anthropomorphism is multi-layered, the
anthropomorphic perception of different kinds of robots could
vary on each dimension, considering that different kinds of
robots present different features and behaviors. However, few
studies have explored the anthropomorphization of different
design types of robots using a multi-layered framework.
Object-based robots have the appearance of ordinary objects
and exhibit abstract lifelike behaviors, so the
perception of such robots may be more strongly associated with
certain levels of anthropomorphic perception, such as
primitive attributions, whereas organism-based robots may
be more strongly associated with other levels, such as attributions
of emotion, even though both are anthropomorphized in some way.
B. How Robot Design Affects Perception of
Anthropomorphism
Most studies of anthropomorphism have used technologies in
human- or animal-like forms as stimuli (e.g., [28, 31]). As a
result, many of these findings only reflect the effects of human
or animal-like forms on anthropomorphism. The aim of
creating these robots is to resemble living organisms to
develop machines as artificial organisms [23]. This research
paradigm makes it difficult to understand the influence of
anthropomorphism on human-computer interaction
independently from artifacts with living organism forms.
While evidence has shown that more human-like forms
could induce stronger perceptions of anthropomorphism [13,
16], human-like forms are not appropriate for HRI in all
circumstances. Human-like forms could arouse uncanny
valley effects, create embarrassment, or induce high
expectations that do not match the technology’s capabilities
(See [44] for review). In these cases, it would be ideal to be
able to reap the benefits of anthropomorphism induced by
interactive systems with abstract and everyday object forms,
avoiding problems associated with a human-like form.
Besides the popularity of designing humanoids, there has
been a surge in the design of a different kind of robot, which
focuses on imbuing everyday objects and common tools with
intelligence and prioritizes functionality. Kwak, San Kim and
Choi [23] refer to this as object-based robot design. A similar
concept is that of the robject, proposed by Rey et al. [30] to
define robotic everyday objects. Many other HRI studies have
designed robotic everyday life objects without defining them
as such. For example, the emotive robotic drawer designed by
Mok et al. [27] can open and close at various speeds to present
emotional states and an animated character.
The lack of an organism-like form could affect the
anthropomorphic perception of object-based robots, yet few
studies have investigated their anthropomorphic perception. In
studies that investigated people’s interactions with a robotic
vacuum, researchers found that people talked with, applied
social etiquette to, and named the machine ([11], [38]). Fraune
et al. [12] found that people preferred a social trashcan robot
that presents social behaviors to those that do not. These
studies imply that people anthropomorphize robots in abstract
and everyday object forms as well, but do not investigate the
different dimensions of anthropomorphism that might operate
in scenarios that use such robots.
In addition, those studies measured people’s interaction
with only a single robot, instead of comparing several kinds of
robots to understand the impact of the type of design on
anthropomorphic perception. Therefore, a quantitative
measurement of anthropomorphism and comparison among
several robots at different levels of human-likeness would be
beneficial for deepening our understanding of
anthropomorphism as it relates to robot design type.
C. The Effect of Personal Factors on Anthropomorphism
Along with robot design type, individual differences
among users can also be a factor influencing
anthropomorphism. Relevant individual differences may
include cultural background, religious belief, demographic
background, and experience with technology and robots.
The phenomenon of ascribing humanlike-ness to objects
seems to be culturally variable. People in different social
contexts, with different beliefs and experiences, may have
different “cultural models” [33, 36], which allow them to
interpret varying characteristics as having a semblance of life.
Kaplan [20] proposed that Japanese people in general have a
higher tendency to accept robots compared to Westerners, due
to their traditional philosophies and animism.
Studies also found people’s perceptions of robots are
influenced by their religious and spiritual beliefs. MacDorman
et al. [26] found that Japanese participants had a stronger
tendency to associate robots with humans than their
U.S. counterparts, and explained the phenomenon by citing
historical and religious differences. Wagner [41] argued
against essentialist religious and cultural reasons, and pointed
out that government policies in Japan drive the vision of
Japanese users as friendly toward robots.
In general, East-Asian countries share cultural beliefs
rooted in Buddhism and Confucianism [26]. Cross-cultural
comparisons between the responses of East Asian and U.S.
citizens to robots have been made in Japan and South Korea
(e.g., [1, 25]). In this paper, we want to expand on this
literature by exploring anthropomorphic perceptions in China
and the U.S., as robotic technology is a rapidly growing topic
in China and its culture shares some of the philosophical and
religious backgrounds of previously studied East Asian
populations.
In addition to cultural and religious background, people
may also differ in their level of anthropomorphizing robots
because of other individual factors, such as age, experience
with, and attitudes towards intelligent systems and robots (See
[8] for review).
Individual differences may also interact with robot
type/approach to affect people’s perceptions of
anthropomorphism. For example, Asian and Western people
have differences in acceptance of different kinds of robots [1,
25]. Lee and Sabanovic’s cross-cultural survey study [25]
found that Koreans had a stronger preference for humanlike
robots compared to U.S. participants. The same study revealed
that nationality affects people’s evaluations of several robots:
Korean participants gave Aibo a higher rating on smartness
than U.S. participants did, and rated the Geminoid (a
humanoid) as mean to a significantly greater extent
than U.S. and Turkish participants. In another
study, Lee and Sabanovic [24] conducted interviews in South
Korea and the U.S. and found that Americans expected a more
abstract and mechanical robot for in-home use, whereas Koreans
expected a robot in a human-like form. We are interested
in whether these personal factors have an interaction effect
with design type in their influence on the abovementioned
dimensions of anthropomorphism.
The tendency to attribute human-like characteristics to
robots may be different for organism-based and object-based
robot designs. We are interested in people’s perceptions of
anthropomorphism across multiple robot types, as well as the
impact of personal factors, such as gender, nationality,
technology use, and cultural factors, on such perception.
Specifically, we seek to understand:
RQ1. How does design type affect people’s perception of
robots on different layers of anthropomorphism?
RQ2. How do personal factors impact the different levels
of anthropomorphism between the two kinds of robots?
To answer these exploratory questions, we conducted two
large-scale anonymous online surveys in the United States and
China. The survey deployed in China was in Chinese, and the
one in the US was in English.
III. METHODS
A. Procedure
We recruited participants through online platforms (Zhubajie
and Douban in China and Amazon Mechanical Turk in the
U.S.). After giving consent, participants watched a video clip
of each of six intelligent systems; after each video, they
answered six customized questions from the Individual
Differences on Anthropomorphism Questionnaire (IDAQ) to
evaluate their perceptions of the robot they had just seen. Each
participant watched and evaluated all six videos, and the order
of presentation was randomized. Finally, participants answered
the rest of the questionnaire, which collected the original
IDAQ, Prior Experience of Technology and Robot, and
demographic information. Remuneration in each location
matched the local average rate for such labor (5 RMB in China
and 2 USD in the U.S.).
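The randomized video presentation described above can be sketched as follows; this is a minimal illustration using Python's standard `random` module (the study itself ran as an online survey, not this script):

```python
import random

robots = ["iRobot", "Ottoman", "AIBO", "Keepon", "ASIMO", "Erica"]

def presentation_order(seed=None):
    """Return a randomized order of the six videos for one participant."""
    rng = random.Random(seed)
    order = robots[:]  # copy so the master list stays untouched
    rng.shuffle(order)
    return order

# Each participant sees all six videos, in an independent random order.
print(presentation_order())
```

Seeding the generator (e.g., per participant ID) makes a given participant's order reproducible while keeping orders independent across participants.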
B. Materials
To compare people’s anthropomorphic perceptions of robotic
systems with different design types, we presented iRobot, Ottoman,
AIBO, Keepon, ASIMO, and Erica (see Fig. 1).
iRobot and Ottoman are object-based robots. AIBO and
Keepon are organism-based robots without human cues.
ASIMO and Erica are humanoids, used to validate the
anthropomorphic ratings by comparison with the other two
types of robots (see Section IV.B).
Object-based robots:
iRobot is designed for cleaning and does not involve
much interaction. The video presented how it works in a
home;
Ottoman is a robotic stool that can automatically
approach a person to let her/him rest their feet on it. It can also
respond to a person’s reaction by stopping its approach or
moving tentatively. The video therefore presents its movement
pattern and its reaction to a person;
Organism-based robots without humanlike features:
AIBO is a robotic dog designed for entertainment. The
video presents its behaviors of wagging its tail, standing up, and
approaching a person;
Keepon is a small robot for entertainment. It can “dance”
to the rhythm of music by bouncing its body and moving its
head in four degrees of freedom. The video therefore presents
its dance to music;
Organism-based robots with humanlike features:
ASIMO is a humanoid with socializing as its primary
function, so the video captured a use case with humans. In the
video, ASIMO acts as a guide in a building, providing
directions to a person and inviting them to sit on a sofa;
Erica is a humanoid designed to imitate a female human
being, with some capacity for facial expressions. In the video, she
answers a reporter’s questions at a press conference.
For each of the six technologies, we crafted a 20-second
video clip that showed how they could interact with people
and their environment in a typical use scenario. Our
operationalization of design type, therefore, includes the
robots’ appearance, behaviors, and interactions with people
and the environment. The original videos were collected from
YouTube.
C. Questionnaire
In this study, we used Waytz’s IDAQ [43] to measure the
perception of anthropomorphism. The IDAQ allowed us to
measure the differences between populations in two different
countries, China and the U.S. We customized1 the IDAQ to
1 For example, the original question asks: “To what extent does a car have
free will?” The customized version asks: “To what extent does the iRobot you
just saw have free will?”
investigate people’s perceptions of anthropomorphism for
each of the six robots. This customized version investigates
how design type is related to the different human-like traits
perceived in the robots. The questionnaire also includes
demographic information and the original IDAQ. In the
following, we explain the details of the survey’s
sub-sections and the variables we measured.
Modified IDAQ Items for Six Technologies and Reliability.
The IDAQ evaluates people’s perceptions of six traits: the
extent to which an artifact is active, has intentions, has free
will, has consciousness, has a mind of its own, and can
experience emotions. Each of these customized IDAQ
questions uses a 5-point Likert scale (from not at all to very
much). We conducted a reliability analysis on the modified
IDAQ items for each robot; the lowest Cronbach’s Alpha
was .879.
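The reliability analysis above can be sketched as follows; a minimal pure-Python computation of Cronbach's alpha, run on made-up item scores rather than the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of item-score columns.

    items: list of k lists, each holding one item's scores
           across the same n respondents.
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 5-point ratings for three IDAQ items, five respondents.
items = [
    [1, 2, 4, 5, 3],
    [1, 3, 4, 5, 2],
    [2, 2, 5, 4, 3],
]
print(round(cronbach_alpha(items), 3))  # → 0.932
```

An alpha close to 1 indicates that the items vary together across respondents, i.e., they measure a single underlying construct, consistent with the .879 floor reported above.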
IDAQ. We also adopted the original IDAQ as a baseline.
Prior Experience of Technology and Robot. In the survey,
we asked about participants’ prior experience with technologies
ranging from automated teller machines (ATMs) to microwave
ovens, and their prior experience interacting with robots (e.g.,
factory robots or robot toys).
Demographic Information. Participants’ gender, age,
residency, native language, educational background, and
religious preference were collected (see Table 1
for demographic information).
D. Participants
Any person over 18 years of age residing in China or the
United States met our criteria and was included in the study. In
total, we collected 804 responses, 397 in the U.S. and 407 in
China. We excluded 20 participants who completed the
questionnaire too quickly (< 3 mins; the average time was 10
mins), 8 participants who repeatedly chose the same option
in at least one section (e.g., always the first option for the IDAQ),
and 1 participant who was not a resident of the
U.S. or China. After cleaning, we had 775 responses in total:
393 in the U.S. and 382 in China.
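The exclusion logic described above can be sketched in plain Python. The field names (`completion_time_min`, `residency`, `idaq_answers`) and the records are our illustrative stand-ins, not the study's actual variables:

```python
# Hypothetical respondent records; field names are illustrative.
responses = [
    {"id": 1, "completion_time_min": 12, "residency": "US",
     "idaq_answers": [2, 3, 1, 4, 2, 3]},
    {"id": 2, "completion_time_min": 2, "residency": "US",      # too fast
     "idaq_answers": [2, 3, 1, 4, 2, 3]},
    {"id": 3, "completion_time_min": 9, "residency": "China",   # straight-lining
     "idaq_answers": [1, 1, 1, 1, 1, 1]},
    {"id": 4, "completion_time_min": 8, "residency": "Canada",  # out of scope
     "idaq_answers": [2, 4, 3, 5, 1, 2]},
]

def keep(r):
    """Apply the three exclusion rules: speed, straight-lining, residency."""
    too_fast = r["completion_time_min"] < 3
    straight_lined = len(set(r["idaq_answers"])) == 1
    out_of_scope = r["residency"] not in ("US", "China")
    return not (too_fast or straight_lined or out_of_scope)

cleaned = [r for r in responses if keep(r)]
print([r["id"] for r in cleaned])  # → [1]
```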
Figure 1: Screenshots of the videos of the intelligent systems and robots
presented in the questionnaire.
A) Object-based robots;
B) Organism-based robots without humanlike features;
C) Organism-based robots with humanlike features (humanoids).
From left to right and top to bottom: iRobot, Ottoman,
AIBO, Keepon, ASIMO, and Erica.
TABLE 1. DEMOGRAPHIC INFORMATION ON PARTICIPANTS

                                   U.S. (N = 393)   China (N = 382)
Age [M(sd)]*                       3.2 (1.3)        —
Gender           Female            194              —
                 Male              198              —
                 Other             1                —
Education        High School       47               —
                 College           309              —
                 Graduate          37               —
Religious Belief Catholic          95               —
                 Buddhism          7                102
                 None              190              —
                 Others            101              61
Experience With Tech [M(sd)]       3.2 (.41)        —
Experience With Robot [M(sd)]      2.1 (.62)        2.4 (.75)

*Age was measured in categories: 1: 18-25; 2: 26-30; 3: 31-40; 4: 41-50; 5: 51-60; 6: above 60 yrs
old. For experience with technology and robots, 0: no experience, 4: a lot of experience.
IV. RESULTS
Data were analyzed in SPSS, and the significance level for
all performed tests was set to alpha = .05.
A. The Impact of Personal Factors on Anthropomorphism
Due to the unbalanced nature of our sample (e.g., the average
age of U.S. participants differs from that of Chinese
participants), we fit a mixed linear model to
investigate the effects of specific personal factors on
participants’ perceptions of robot anthropomorphism. As
fixed effects, we entered nationality, age, gender, religious
belief, average IDAQ score, experience with technology, and
experience with robots into the model. We hypothesized that
participants, even those from the same culture or with
similar levels of technology use, may still differ individually
in their anthropomorphism ratings; therefore, we included
random intercepts for subjects. To fit the mixed linear model,
we transformed our data into long format based on the
design-type variable, and the design type of robots (object-based
vs. organism-based) was entered as a repeated measure. Visual
inspection of residual plots did not reveal any obvious
deviations from homoscedasticity or normality.
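The wide-to-long transformation described above can be sketched in plain Python (the record fields and ratings are hypothetical stand-ins for the study's variables; the actual analysis was run in SPSS):

```python
# One wide record per participant, with a rating per design type.
wide = [
    {"subject": "p1", "nationality": "US",
     "object_based": 1.6, "organism_based": 1.5},
    {"subject": "p2", "nationality": "China",
     "object_based": 2.8, "organism_based": 2.9},
]

# Long format: one row per (subject, design type) observation,
# so design type can enter the model as a repeated measure.
long = [
    {"subject": r["subject"], "nationality": r["nationality"],
     "design_type": dt, "rating": r[dt]}
    for r in wide
    for dt in ("object_based", "organism_based")
]

print(len(long))  # → 4 (2 subjects x 2 design types)
```

Each subject contributes two rows in long format, which is what lets the model pair a per-subject random intercept with the repeated design-type factor.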
Nationality. The average rating of anthropomorphism
differs based on nationality (F (1, 596) = 46.911, p < 0.001).
Chinese participants (M = 2.49) had a significantly higher
customized IDAQ rating than U.S. participants (M = 1.99).
Age and Gender. We did not find significant effects on the
perception of anthropomorphism for age or gender (F (5, 596)
= 1.313, p = 0.256, F (5, 596) = 0.024, p = 0.876,
respectively).
Religious Belief. The result shows people’s perceptions of
anthropomorphism differ according to their religious beliefs
(F (2, 596) = 10.130, p < 0.001). A Sidak post hoc analysis
showed that people with Buddhist belief (M = 2.38) rated the
anthropomorphism of robots significantly higher than people
with no religious belief (M = 2.12, p < 0.001). There is no
significant difference between the Catholic and Buddhist
groups (p = 0.194), and between the Catholic group and the
non-religious group (p = 0.232).
Experience with Technology and Robot. People’s
experiences with robots had a significant effect on their
perceptions of anthropomorphism (F (1, 596) = 4.257, p =
0.04). A Pearson correlation showed that robot experience and
anthropomorphism ratings are positively correlated (r = 0.252). No significant effect on
anthropomorphism rating was found for technology
experience (F (1, 596) = 1.703, p = 0.192).
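The Pearson correlation reported above can be computed as in the following minimal pure-Python sketch; the experience scores and ratings are made up for illustration, not the study's data:

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up robot-experience scores vs. anthropomorphism ratings.
experience = [0, 1, 1, 2, 3, 4]
rating     = [1.5, 1.8, 2.2, 2.1, 2.6, 2.4]
print(round(pearson_r(experience, rating), 3))  # → 0.849
```

A positive r, as in the study's r = 0.252, means higher robot experience tends to co-occur with higher anthropomorphism ratings, without implying causation.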
B. Comparison on Anthropomorphism Rating of
Object-Based and Organism-Based Robots
Our data from the two countries have non-homogeneous
variances and nationality is a categorical variable. To avoid
the interference of nationality, we report results from two
countries separately in this section (U.S.: N = 393; China: N =
382). Before we compared the two sets of anthropomorphic
ratings, we performed a Shapiro-Wilk Test of Normality to
test the normality of the data. The result shows all the
anthropomorphic ratings we analyze below deviate from a
normal distribution (p < 0.001). Therefore, we used Wilcoxon
signed-rank tests to compare the ratings (see Table 2).
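The Wilcoxon signed-rank statistic underlying these comparisons can be sketched in pure Python. The paired ratings below are made up, and real analyses (like ours, run in SPSS) additionally derive a Z approximation and p-value from this statistic:

```python
def wilcoxon_w(a, b):
    """Wilcoxon signed-rank statistic: the smaller of the positive-
    and negative-rank sums for paired samples a and b."""
    diffs = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    # Assign 1-based ranks to |diff|, averaging ranks over ties.
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[ranked[k]] = mean_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Made-up paired Likert ratings (object-based vs. organism-based), six raters.
object_based   = [3, 4, 2, 5, 3, 4]
organism_based = [2, 4, 3, 3, 3, 3]
print(wilcoxon_w(object_based, organism_based))  # → 2.0 (smaller rank sum)
```

Because the test ranks paired differences rather than averaging raw scores, it does not require the normality that the Shapiro-Wilk test rejected here.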
1) Comparison of Overall Ratings of Anthropomorphism
First, we compared the average anthropomorphic
ratings of the object-based robots (iRobot and Ottoman)
and the organism-based robots (AIBO and Keepon).
In the U.S. dataset, surprisingly, a Wilcoxon signed-rank
test showed that the average anthropomorphism rating for the
organism-based robots (M = 1.58) is lower than the rating (M
= 1.63) for the object-based robots (Z = 4.763, p < 0.001). The
comparisons between humanoids (Erica and ASIMO) with the
two categories of robots show that the anthropomorphism
rating for humanoids is significantly higher than the other two
kinds (with organism-based robot: Z = 12.015, p < 0.001; with
object-based robot: Z = 9.358, p < 0.001).
In contrast, in the Chinese sample, the average
anthropomorphic rating for the two organism-based robots (M
= 2.89) is higher than the rating (M = 2.79) for the two
object-based robots (Z = 4.37, p < 0.001). The comparison
between humanoids and the two categories of robots shows
that the rating for humanoids is significantly higher than the
other two kinds (with organism-based robot: Z = 5.234, p <
0.001; with object-based robot: Z = 7.134, p < 0.001).
We are interested in how object-based robots differ
from organism-based robots on the various levels of
anthropomorphism, based on the multi-layer framework.
Therefore, we analyzed each item of the
customized IDAQ to compare the two kinds of robots.
2) Comparison on Activeness
In the U.S. data, the object-based robot’s rating (M = 3.35)
on activeness is significantly higher than the organism-based
robot’s rating (M = 3.19) (Z = 4.786, p < 0.001). On the
contrary, in China, the object-based robot’s rating (M = 3.31)
on activeness is significantly lower than the organism-based
robot’s rating (M = 3.55) (Z = 5.87, p < 0.001).
TABLE 2. AVERAGE RATINGS (STANDARD DEVIATIONS) OF ANTHROPOMORPHISM AND EACH ANTHROPOMORPHIC LEVEL: A MIND OF ITS OWN, INTENTION, FREE
WILL, EMOTIONS, CONSCIOUSNESS. RATINGS OF ANTHROPOMORPHIC TRAITS RANGE FROM 1 (NOT AT ALL) TO 5 (VERY MUCH)

                        Nation  N    Activeness  Emotion     Free will   Intention   Consciousness  A mind of its own  Anthropomorphism
Object-based robots     U.S.    393  3.35(0.82)  1.09(0.37)  1.27(0.61)  1.64(0.93)  1.13(0.41)     1.32(0.66)         1.63(0.35)
                        China   382  3.31(0.95)  2.67(1.02)  2.62(1.02)  2.87(1.00)  2.65(1.06)     2.62(1.03)         2.79(0.36)
Organism-based robots   U.S.    393  3.19(0.85)  1.20(0.50)  1.21(0.49)  1.40(0.69)  1.17(0.46)     1.29(0.57)         1.58(0.34)
                        China   382  3.55(0.88)  2.98(0.96)  2.64(1.03)  2.75(0.98)  2.69(1.00)     2.74(0.98)         2.89(0.34)
Humanoids               U.S.    393  3.42(0.87)  1.31(0.70)  1.44(0.79)  1.86(1.06)  1.33(0.73)     1.65(0.94)         1.84(0.69)
                        China   382  3.55(0.93)  3.16(1.02)  2.89(1.09)  3.12(1.03)  3.01(1.06)     3.07(1.05)         3.13(0.88)
3) Comparison on Emotion
In contrast to the ratings of activeness, U.S. participants rated
emotion for object-based robots (M = 1.09) significantly lower
than for organism-based robots (M = 1.20) (Z = 5.354,
p < 0.001). Similarly, in China, emotion was rated significantly
lower for object-based robots (M = 2.67) than for
organism-based robots (M = 2.98) (Z = 7.17, p <
0.001).
4) Comparison on Free will
In the U.S., the rating of free will for object-based robots (M
= 1.27) is significantly higher than for organism-based robots
(M = 1.21) (Z = 2.635, p = 0.008). However, in China, there is
no significant difference between the two types of robots (Z =
0.495, p = 0.620).
5) Comparison on Intention
Our participants from the U.S. rated intention for
object-based robots (M = 1.64) significantly higher than for
organism-based robots (M = 1.40) (Z = 7.509, p < 0.001). Our
participants from China rated Intention for object-based robots
(M = 2.87) slightly higher than for organism-based robots (M
= 2.75) (Z = 2.49, p = 0.013).
6) Comparison on Consciousness
Similar to the rating of emotion, our participants from the
U.S. rated consciousness for object-based robots (M = 1.13)
significantly lower than for organism-based robots (M
= 1.17) (Z = 2.702, p = 0.007). However, no significant
difference between the two types of robots was found in the
Chinese data (Z = 1.257, p = 0.209).
7) Comparison on A Mind of its Own
No significant differences were found in the ratings of a
mind of its own between the two kinds of robots in the U.S. data
(Z = 1.198, p = 0.231). However, in China, the object-based
robots’ rating (M = 2.62) on this scale is lower than that
of the organism-based robots (M = 2.74) (Z = 3.120, p = 0.002).
We also compared the ratings of anthropomorphic traits
between the humanoids and the other two types of robots. The
results show that humanoids got significantly higher ratings in
all six questions (all p < 0.001).
In summary, the comparisons of the different facets of
anthropomorphism are not all consistent with the comparison
of the overall modified-IDAQ rating. Specifically, our
participants perceived more activeness, free will, and intention
in the object-based robots than in the organism-based robots,
while they were more inclined to perceive emotion and
consciousness in organism-based robots than in object-based
robots.
C. The Impact of Personal Factors on the Multiple levels of
Anthropomorphism
Our next question is whether personal factors, such as
nationality and religious belief, influence how people perceive
anthropomorphism in different kinds of robots. To explore this
question, we conducted another Mixed Linear Model analysis.
Since age, gender, and experience with technology
had no significant effect on the anthropomorphism ratings,
these variables were excluded from the model. Visual
inspection of residual plots did not reveal any obvious
deviations from homoscedasticity or normality.
1) Nationality and the Robot Type
A mixed linear model analysis showed that the interaction
of nationality and the type of robot has a significant effect on
the rating of anthropomorphism (F (1, 605) = 29.232, p <
0.001). A Sidak post hoc analysis showed that Chinese
participants gave a higher anthropomorphism rating to
organism-based robots (M = 2.56), compared to the rating of
the object-based designed robots (M = 2.41, p < 0.001); while
Americans were more inclined to anthropomorphize
object-based designed robots (M = 2.08) than the
organism-based designed robots (M = 1.99, p = 0.002).
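The Sidak post hoc procedure controls the family-wise error rate by shrinking the per-comparison significance threshold. A minimal sketch of the adjustment (the alpha level and number of comparisons below are illustrative, not taken from the paper):

```python
def sidak_alpha(alpha_family: float, m: int) -> float:
    """Per-comparison threshold that keeps the family-wise
    error rate at alpha_family across m comparisons."""
    return 1.0 - (1.0 - alpha_family) ** (1.0 / m)

# e.g., two contrasts (one per nationality) at a family-wise alpha of .05:
print(round(sidak_alpha(0.05, 2), 4))  # 0.0253
```

A reported p-value is then called significant only if it falls below this adjusted threshold.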
2) Religious Belief and the Type of Robot
No significant interaction effect was found between
religious belief and robot type on the anthropomorphism rating
(F(2, 605) = 1.019, p = 0.362).
We also fit mixed linear models for each anthropomorphic
trait separately and, again, found no significant interaction
between religious belief and robot type for any trait.
3) The Interaction Effect of Nationality and the Type of
Robot on the Anthropomorphic Traits
The results indicate that the interaction of nationality and
robot type has a significant effect on the rating of
activeness (F(1, 611) = 40.204, p < 0.001). A Sidak post hoc
analysis shows that Chinese participants rated activeness lower
for both object-based robots (M = 3.00) and organism-based
robots (M = 3.225) than American participants did (object-based:
M = 3.76, p < 0.001; organism-based: M = 3.62, p < 0.001).
The interaction of nationality and robot type also has a
significant effect on the rating of emotion (F(1, 611) =
22.954, p < 0.001). In contrast to activeness, a Sidak post hoc
analysis shows that Chinese participants rated emotion higher
for both object-based robots (M = 2.36) and organism-based
robots (M = 2.71) than American participants (object-based:
M = 1.50, p < 0.001; organism-based: M = 1.59, p < 0.001).
No significant effect was found for the interaction of
nationality and robot type on the rating of free will (F(1,
611) = 2.642, p = 0.105).
The interaction of nationality and robot type has a
significant effect on the rating of intention (F(1, 611) = 4.523,
p = 0.034). A Sidak post hoc analysis shows a trend similar to
that for emotion: Chinese participants rated intention higher
for both object-based robots (M = 2.52) and organism-based
robots (M = 2.39) than American participants (object-based:
M = 2.10, p < 0.001; organism-based: M = 1.84, p < 0.001).
No significant effect was found for the interaction of
nationality and robot type on the rating of consciousness
(F(1, 611) = 0.657, p = 0.418).
The interaction of nationality and robot type has a
significant effect on the rating of having a mind of its own
(F(1, 611) = 13.229, p < 0.001). A Sidak post hoc analysis
showed that Chinese participants rated this trait higher for
both object-based robots (M = 2.25) and organism-based robots
(M = 2.39) than American participants (object-based: M = 1.79,
p < 0.001; organism-based: M = 1.74, p < 0.001).
V. DISCUSSION AND CONCLUSION
This paper investigated how personal factors, together with
the robots' design types, influence people's evaluations of
anthropomorphism on multiple levels. Our results showed that
the anthropomorphism of robots varies among participants
with different nationalities, religious beliefs, and prior
experience with robotic technologies. In addition, our results
illustrated that people's anthropomorphism evaluations differ
across types of robots. Finally, the analysis suggests that
one personal factor (a participant's country of residence) has
an interaction effect with the design types of robots on the
overall anthropomorphism perception (i.e., the total score of
the customized IDAQ) and on some of the dimensions of
anthropomorphism (i.e., the scores of some items of the
customized IDAQ). In this section, we discuss the implications
of these results for designing robotic systems, particularly
focusing on the emerging category of object-based robots.
A. Measuring Anthropomorphism: A Multi-Level Model
This study quantitatively measured people's perceptions of
anthropomorphism based on Persson et al. [29]'s argument
that "anthropomorphism should be decomposed into multiple
layers." Our results supported this argument: participants'
perceptions of a robot's anthropomorphism vary across
dimensions (e.g., emotion, activeness, free will). A participant
may give higher scores on the activeness, free will, and
intention dimensions to an object-based robot than to an
organism-based robot, while giving the object-based robot lower
scores on the emotion and consciousness dimensions. Treating
the perception of anthropomorphism as a single score neglects
such nuanced differences. Thus, the future design of robotic
systems should consider users' perceptions of anthropomorphism
as a collection of levels, since the particular combination of
these levels may affect how a robot is perceived.
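To make the contrast concrete, here is a toy sketch of why a per-dimension profile can disagree with a single averaged anthropomorphism score. All scores are made up for illustration, not the study's data.

```python
# Hypothetical per-dimension ratings for two robot designs (not study data).
object_robot = {"activeness": 3.8, "free_will": 2.6, "intention": 2.5,
                "emotion": 1.5, "consciousness": 1.7, "mind_of_its_own": 1.7}
organism_robot = {"activeness": 3.6, "free_will": 2.4, "intention": 1.8,
                  "emotion": 1.6, "consciousness": 1.9, "mind_of_its_own": 1.8}

def overall(profile):
    """Collapse a profile into a single averaged score."""
    return round(sum(profile.values()) / len(profile), 2)

# The single averaged scores look similar...
print(overall(object_robot), overall(organism_robot))  # 2.3 2.18

# ...but the per-dimension comparison shows where the designs diverge.
higher_in_object = sorted(d for d in object_robot
                          if object_robot[d] > organism_robot[d])
print(higher_in_object)  # ['activeness', 'free_will', 'intention']
```

The averaged scores hide that the object-based design dominates on the lifelike-motion dimensions while trailing on the more human-specific ones.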
Such divergent scores on multiple levels are not hard to
interpret: the activeness, free will, and intention dimensions
describe general lifelike traits shared by many living forms
and relate to the general notion of autonomous movement,
whereas the emotion and consciousness dimensions depict
intellectual traits possessed mostly by humans and other
intelligent animals, which are therefore less likely to be
perceived in object-based robots.
This result presents a more complicated story than those in
previous studies, which suggested a positive linear relationship
between the number of human-like design features and
anthropomorphism ratings (e.g., [5, 16, 22]). This study found,
instead, that more human-like design features may not lead to
higher ratings on all levels of anthropomorphism in all
contexts. In future work, it would be interesting to explore
how people interpret different robot design types in relation
to the dimensions of anthropomorphism.
B. Life-Like Versus Human-Like Designs of Robots
Anthropomorphism traditionally refers to human-likeness.
The varying ratings on the multiple layers of anthropomorphism
of object-based robots in our study suggest that there is
something beyond human-likeness: life-likeness, a concept
coined by Schmitz [34] that refers to lifelike characteristics
in general. Such characteristics include human-like features,
but also animal-like or even abstract lifelike features.
Several previous works have pioneered life-like designs in
interactive systems (e.g., [17, 29, 39]). For example, a mobile
phone can tilt its screen to lean toward the user, reflecting a
lifelike welcoming behavior [17], and an interactive faucet can
extend or retract to signal welcome or rejection to the user
[39]. These designs are not necessarily associated with
human-likeness, but they signal some of the layers of the
multi-layered anthropomorphism concept. By decomposing the
anthropomorphism concept into multiple quantifiable layers,
researchers can better understand these perceptions and can
propose fine-grained lifelike designs using various
combinations of the layers.
The anthropomorphism of robotic technologies can have
positive as well as negative effects on people's interactions
with those technologies. For example, anthropomorphic design
can help with users' acceptance of a technology [15].
Conversely, anthropomorphic design may promote a stronger
emotional bond between the human and the technology and
increase the user's tendency toward social isolation [18]. By
decomposing the abstract concept of anthropomorphism into
multiple layers, researchers may identify which layers of
anthropomorphism, or which types of lifelike designs, are
desired for particular robots, scenarios, and cultural contexts.
C. Personalized Design for Anthropomorphism
The results show that differences in personal factors
(nationality, age, religious belief, and prior experience with
robots) influence users' perceptions of anthropomorphism.
This is consistent with previous findings that cultural
background affects people's perceptions of robots [1, 20, 33].
Our results extend the understanding of such anthropomorphism
perceptions to object-based robots.
Moreover, we found an interaction effect between robot type
and the user's nationality on the perception of
anthropomorphism. For example, Chinese users rated
organism-based robots higher in overall anthropomorphism
perception than U.S. users did, even though human-like features
were absent. That is an important user difference to consider
in interaction design [34].
The interaction effect of nationality and robot type on
certain anthropomorphic traits calls for careful design of
robotic features for different populations. The result suggests
that, when designing for each layer of anthropomorphic
perception, designers should take the users' nationality into
consideration. However, our study did not reveal exactly how
nationality interacts with design type in shaping the
perception of anthropomorphism; further investigation of this
issue is needed.
Considering the potential benefits of embracing
life-likeness in an intelligent system, it is important to be
able to actively design lifelike artifacts. Doing so requires a
comprehensive understanding of how various properties of
artifacts, and of the people who use them, are associated with
attributions of living traits.
ACKNOWLEDGMENT
We thank all the participants and reviewers for their
contributions to this work.
REFERENCES
[1] Bartneck, C. et al. 2005. Cultural differences in attitudes towards robots. Proc. Symposium on Robot Companions (SSAISB 2005 Convention) (2005), 1–4.
[2] Bartneck, C. et al. 2009. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics. 1, 1 (2009), 71–81.
[3] Breazeal, C. 2003. Toward sociable robots. Robotics and Autonomous Systems. 42, 3 (2003), 167–175.
[4] DiSalvo, C. et al. 2003. The hug: an exploration of robotic form for intimate communication. Proceedings of RO-MAN 2003, the 12th IEEE International Workshop on Robot and Human Interactive Communication (2003), 403–408.
[5] DiSalvo, C.F. et al. 2002. All robots are not created equal: the design and perception of humanoid robot heads. Proceedings of the 4th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (2002), 321–326.
[6] Duffy, B.R. 2003. Anthropomorphism and the social robot. Robotics and Autonomous Systems. 42, 3 (2003), 177–190.
[7] Eyssel, F. et al. 2011. Effects of anticipated human-robot interaction and predictability of robot behavior on perceptions of anthropomorphism. Proceedings of the 6th International Conference on Human-Robot Interaction (2011), 61–68.
[8] Fink, J. 2012. Anthropomorphism and human likeness in the design of robots and human-robot interaction. International Conference on Social Robotics (2012), 199–208.
[9] Fink, J. et al. 2014. Dynamics of anthropomorphism in human-robot interaction. Frontiers in Cognitive Science. (2014).
[10] Fong, T. et al. 2003. A survey of socially interactive robots. Robotics and Autonomous Systems. 42, 3 (2003), 143–166.
[11] Forlizzi, J. and DiSalvo, C. 2006. Service robots in the domestic environment: a study of the Roomba vacuum in the home. Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction (2006), 258–265.
[12] Fraune, M.R. et al. 2015. Rabble of robots effects: Number and type of robots modulates attitudes, emotions, and stereotypes. Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (2015), 109–116.
[13] Gong, L. 2008. How social is social responses to computers? The function of the degree of anthropomorphism in computer representations. Computers in Human Behavior. 24, 4 (2008), 1494–1509.
[14] Haslam, N. et al. 2008. Attributing and denying humanness to others. European Review of Social Psychology. 19, 1 (2008), 55–85.
[15] Heerink, M. et al. 2009. Measuring acceptance of an assistive social robot: a suggested toolkit. RO-MAN 2009, the 18th IEEE International Symposium on Robot and Human Interactive Communication (2009), 528–533.
[16] Hegel, F. et al. 2008. Understanding social robots: A user study on anthropomorphism. RO-MAN 2008, the 17th IEEE International Symposium on Robot and Human Interactive Communication (2008), 574–579.
[17] Hemmert, F. et al. 2013. Animate mobiles: proxemically reactive posture actuation as a means of relational interaction with mobile phones. Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction (2013), 267–270.
[18] Hoffman, G. and Ju, W. 2014. Designing robots with movement in mind. Journal of Human-Robot Interaction. 3, 1 (2014), 89–122.
[19] Kahn Jr., P.H. et al. 2012. "Robovie, you'll have to go into the closet now": Children's social and moral relationships with a humanoid robot. Developmental Psychology. 48, 2 (2012), 303.
[20] Kaplan, F. 2004. Who is afraid of the humanoid? Investigating cultural differences in the acceptance of robots. International Journal of Humanoid Robotics. 1, 03 (2004), 465–480.
[21] Khaoula, Y. et al. 2014. Concepts and applications of human-dependent robots. International Conference on Human Interface and the Management of Information (2014), 435–444.
[22] Kiesler, S. and Goetz, J. 2002. Mental models of robotic assistants. CHI '02 Extended Abstracts on Human Factors in Computing Systems (2002), 576–577.
[23] Kwak, S.S. et al. 2017. The effects of organism- versus object-based robot design approaches on the consumer acceptance of domestic robots. International Journal of Social Robotics. (2017), 1–19.
[24] Lee, H.R. et al. 2012. Cultural design of domestic robots: A study of user expectations in Korea and the United States. 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication (Sep. 2012), 803–808.
[25] Lee, H.R. and Šabanović, S. 2014. Culturally variable preferences for robot design and use in South Korea, Turkey, and the United States. Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (2014), 17–24.
[26] MacDorman, K.F. et al. 2009. Does Japan really have robot mania? Comparing attitudes by implicit and explicit measures. AI & Society. 23, 4 (2009), 485–510.
[27] Mok, B.K. et al. 2014. Empathy: interactions with emotive robotic drawers. Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (2014), 250–251.
[28] Osawa, H. et al. 2007. Anthropomorphization framework for human-object communication. JACIII. 11, 8 (2007), 1007–1014.
[29] Persson, P. et al. 2000. Anthropomorphism: a multi-layered phenomenon. Proc. Socially Intelligent Agents: The Human in the Loop, AAAI Press, Technical Report FS-00-04 (2000), 131–135.
[30] Rey, F. et al. 2009. Interactive mobile robotic drinking glasses. Distributed Autonomous Robotic Systems. 8, (2009), 543–551.
[31] Riek, L.D. et al. 2009. How anthropomorphism affects empathy toward robots. Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction (2009), 245–246.
[32] Ruijten, P.A. et al. 2014. Introducing a Rasch-type anthropomorphism scale. Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (2014), 280–281.
[33] Šabanović, S. 2010. Emotion in robot cultures: Cultural models of affect in social robot design. Proceedings of the Conference on Design & Emotion (D&E2010) (2010).
[34] Schmitz, M. 2011. Concepts for life-like interactive objects. Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction (2011), 157–164.
[35] Seo, S.H. et al. 2015. Poor thing! Would you feel sorry for a simulated robot? A comparison of empathy toward a physical and a simulated robot. Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (2015), 125–132.
[36] Shore, B. 1998. Culture in Mind: Cognition, Culture, and the Problem of Meaning. Oxford University Press.
[37] Sirkin, D. et al. 2015. Mechanical Ottoman: How robotic furniture offers and withdraws support. Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (New York, NY, USA, 2015), 11–18.
[38] Sung, J.-Y. et al. 2007. "My Roomba is Rambo": intimate home appliances. International Conference on Ubiquitous Computing (2007), 145–162.
[39] Togler, J. et al. 2009. Living interfaces: The Thrifty Faucet. Proceedings of the 3rd International Conference on Tangible and Embedded Interaction (New York, NY, USA, 2009), 43–44.
[40] Vallgårda, A. 2008. PLANKS: A computational composite. Proceedings of the 5th Nordic Conference on Human-Computer Interaction: Building Bridges (New York, NY, USA, 2008), 569–574.
[41] Wagner, C. 2009. "The Japanese way of robotics": Interacting "naturally" with robots as a national character? RO-MAN 2009, the 18th IEEE International Symposium on Robot and Human Interactive Communication (2009), 510–515.
[42] Waytz, A. et al. 2014. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology. 52, (2014), 113–117.
[43] Waytz, A. et al. 2010. Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science. 5, 3 (2010), 219–232.
[44] Złotowski, J. et al. 2015. Anthropomorphism: opportunities and challenges in human–robot interaction. International Journal of Social Robotics. 7, 3 (2015), 347–360.
... to their human identity' (Mende et al., 2019). Therefore, it is crucial to develop a systematic understanding of service robots' anthropomorphism (Tan et al., 2018). It is worth noting that discussions about the anthropomorphism of robots began very early in science fiction literature, television series and films. ...
... These studies have proposed that the anthropomorphic design of a robot can be achieved by factors such as movement (Castro-González et al., 2016), verbal communication (Pillai & Sivathanu, 2020), gestures (Tian et al., 2021) and emotions (Lin et al., 2019). Tan et al. (2018) perform a more fine-grained analysis of robots' anthropomorphism in terms of several dimensions (e.g., uniquely and typically human, being alive or not, having emotions or not). These factors affect not only humans' perception of robots but also their behavior when they interact with robots (Złotowski et al., 2014). ...
... Please provide complete bibliographic details of this reference. Corbin and Strauss, (2015), Eyssel et al., (2011), Fussell et al., (2008, Powers and Kiesler, (2006), Tan et al., (2018) and Złotowski et al., (2014). ...
Article
Robots have been widely used in social production, especially in the service industry. As their use continues to spread, their anthropomorphic design, which increases a robot’s efficiency and effectiveness in terms of human–robot interaction, becomes increasingly important. Based on grounded theory, this study encodes one-to-one in-depth interview data, and constructs a theoretical model of service robots’ anthropomorphism. The results show that service robots’ anthropomorphism comprises four dimensions: mission completion (core), user sensory experience (external manifestation), artificial intelligence (guarantee), and unique human characteristics (promotion). Meanwhile, a linkage between the dimensions and factors is proposed. This study thus systematically elucidates service robots’ anthropomorphism. The findings provide some implications for practitioners to design anthropomorphic robots and develop tools for evaluating anthropomorphism, corresponding factors and potential influences.
... Scholars concur that anthropomorphism constitutes a form of human cognition that centers on the assignment of human characteristics to a nonhuman entity and that can be elicited when humans observe or interact with a robot (Bartneck et al., 2009;Duffy, 2003;Epley et al., 2007;Fink, 2012). While anthropomorphism can be elicited in response to a broad range of robots, ranging from machinelike to humanlike robots (Tan et al., 2018), imbuing robots with humanlike characteristics is likely to facilitate anthropomorphism (Fink, 2012;Tan et al., 2018). Anthropomorphism thus needs to be distinguished from the anthropomorphic design of robots and, more specifically, from the properties of a robot that may (or may not) elicit anthropomorphism (Fink, 2012;Ruijten, 2018;Yogeeswaran et al., 2016). ...
... Scholars concur that anthropomorphism constitutes a form of human cognition that centers on the assignment of human characteristics to a nonhuman entity and that can be elicited when humans observe or interact with a robot (Bartneck et al., 2009;Duffy, 2003;Epley et al., 2007;Fink, 2012). While anthropomorphism can be elicited in response to a broad range of robots, ranging from machinelike to humanlike robots (Tan et al., 2018), imbuing robots with humanlike characteristics is likely to facilitate anthropomorphism (Fink, 2012;Tan et al., 2018). Anthropomorphism thus needs to be distinguished from the anthropomorphic design of robots and, more specifically, from the properties of a robot that may (or may not) elicit anthropomorphism (Fink, 2012;Ruijten, 2018;Yogeeswaran et al., 2016). ...
Article
Full-text available
With robots increasingly assuming social roles (e.g., assistants, companions), anthropomorphism (i.e., the cognition that an entity possesses human characteristics) plays a prominent role in human–robot interactions (HRI). However, current conceptualizations of anthropomorphism in HRI have not adequately distinguished between precursors, consequences, and dimensions of anthropomorphism. Building and elaborating on previous research, we conceptualize anthropomorphism as a form of human cognition, which centers upon the attribution of human mental capacities to a robot. Accordingly, perceptions related to a robot’s shape and movement are potential precursors of anthropomorphism, while attributions of personality and moral value to a robot are potential consequences of anthropomorphism. Arguing that multidimensional conceptualizations best reflect the conceptual facets of anthropomorphism, we propose, based on Wellman’s (1990) Theory-of-Mind (ToM) framework, that anthropomorphism in HRI consists of attributing thinking, feeling, perceiving, desiring, and choosing to a robot. We conclude by discussing applications of our conceptualization in HRI research.
... In addition, previous studies have documented small but significant differences in anthropomorphism between social groups (Letheren et al., 2016;Tan et al., 2018;Whelan et al., 2019). Prominent social groups that may differ in their attitudes toward electronic devices are groups of different ages. ...
Article
This research examines anthropomorphism by testing the values that people attribute to electronic devices. We ask four main questions: Do people attribute human values to devices; Do devices differ in their value profiles; What underlies the attribution of values to devices; and Do individual and social differences affect these attributions. In Study 1, participants ( N = 265) attributed Schwartz’s 10 basic human values to devices, reported their personal value priorities, as well as the frequency and difficulty in using each device. In Study 2, participants ( N = 231) attributed values to devices, and responses were analyzed by age. Results show that people attribute human values to electronic devices, and that each device has a unique value profile. Anthropomorphizing devices reflects social consensus as to the symbolic meaning of each device, rather than projection of personal values or frequency of device use. Members of social groups share device values and differ from members of other groups.
... Kinesthetic creativity has been studied mostly in artistic performance by humans (Ros and Demiris, 2013;Tan et al., 2018). To our knowledge, this work represents an early approach to the study of kinesthetic creativity in social robots. ...
Article
Full-text available
Creativity in social robots requires further attention in the interdisciplinary field of human–robot interaction (HRI). This study investigates the hypothesized connection between the perceived creative agency and the animacy of social robots. The goal of this work is to assess the relevance of robot movements in the attribution of creativity to robots. The results of this work inform the design of future human–robot creative interactions (HRCI). The study uses a storytelling game based on visual imagery inspired by the game “Story Cubes” to explore the perceived creative agency of social robots. This game is used to tell a classic story for children with an alternative ending. A 2 × 2 experiment was designed to compare two conditions: the robot telling the original version of the story and the robot plot twisting the end of the story. A Robotis Mini humanoid robot was used for the experiment, and we adapted the Short Scale of Creative Self (SSCS) to measure perceived creative agency in robots. We also used the Godspeed scale to explore different attributes of social robots in this setting. We did not obtain significant main effects of the robot movements or the story in the participants’ scores. However, we identified significant main effects of the robot movements in features of animacy, likeability, and perceived safety. This initial work encourages further studies experimenting with different robot embodiment and movements to evaluate the perceived creative agency in robots and inform the design of future robots that participate in creative interactions.
... Kinaesthetic creativity has been studied mostly in artistic performance by humans (Tan et al., 2018;Ros and Demiris, 2013). To our knowledge, this work represents an early approach to the study of kinaesthetic creativity in social robots. ...
Preprint
Full-text available
Creativity in social robots requires further attention in the interdisciplinary field of Human-Robot Interaction (HRI). This paper investigates the hypothesised connection between the perceived creative agency and the animacy of social robots. The goal of this work is to assess the relevance of robot movements in the attribution of creativity to robots. The results of this work inform the design of future Human-Robot Creative Interactions (HRCI). The study uses a storytelling game based on visual imagery inspired by the game 'Story Cubes' to explore the perceived creative agency of social robots. This game is used to tell a classic story for children with an alternative ending. A 2x2 experiment was designed to compare two conditions: the robot telling the original version of the story and the robot plot-twisting the end of the story. A Robotis Mini humanoid robot was used for the experiment. As a novel contribution, we propose an adaptation of the Short Scale Creative Self scale (SSCS) to measure perceived creative agency in robots. We also use the Godspeed scale to explore different attributes of social robots in this setting. We did not obtain significant main effects of the robot movements or the story in the participants' scores. However, we identified significant main effects of the robot movements in features of animacy, likeability, and perceived safety. This initial work encourages further studies experimenting with different robot embodiment and movements to evaluate the perceived creative agency in robots and inform the design of future robots that participate in creative interactions.
... Some researchers have reported that users already look at these AI systems differently than the traditional computer systems but more like human partners [69], where anthropomorphism effect plays a critical role in user perceptions (e.g. [49,62]. Researchers and designers are actively asking: what are the updated frameworks and theories that we can leverage to help us design better AI systems to work with human? ...
Preprint
Data science (DS) projects often follow a lifecycle that consists of laborious tasks for data scientists and domain experts (e.g., data exploration, model training, etc.). Only till recently, machine learning(ML) researchers have developed promising automation techniques to aid data workers in these tasks. This paper introduces AutoDS, an automated machine learning (AutoML) system that aims to leverage the latest ML automation techniques to support data science projects. Data workers only need to upload their dataset, then the system can automatically suggest ML configurations, preprocess data, select algorithm, and train the model. These suggestions are presented to the user via a web-based graphical user interface and a notebook-based programming user interface. We studied AutoDS with 30 professional data scientists, where one group used AutoDS, and the other did not, to complete a data science project. As expected, AutoDS improves productivity; Yet surprisingly, we find that the models produced by the AutoDS group have higher quality and less errors, but lower human confidence scores. We reflect on the findings by presenting design implications for incorporating automation techniques into human work in the data science lifecycle.
... However, these aforementioned embodied systems relied heavily on non-verbal communication (e.g., eye gaze, body orientation) or anthropomorphism features (Tan, Wang, & Sabanovic, 2018) to engage children in learning activities, and these features are not supported by CAs. Despite many studies suggesting that embodied systems' non-verbal cues help establish social relationships with learners and thus positively affect learning (e.g., Gordon et al., 2016;Kennedy et al., 2016), such non-verbal behaviors may also place more cognitive load on the children, which may inhibit children's capacity to process information related to the learning and concentrate on the conversation (Kennedy, Baxter, & Belpaeme, 2015). ...
Article
Storybook reading accompanied by adult-guided conversation provides a stimulating context for children’s language development. Conversational agents powered by artificial intelligence, such as smart speakers, are prevalent in children’s homes and have the potential to engage children in storybook reading as language partners. However, little research has explored the effectiveness of using conversational agents to support children’s language development. This study examined how an automated conversational agent can read stories to children via a smart speaker while asking questions and providing contingent feedback. Using a randomized experiment among 90 children aged three to six years, this study compared these children’s story comprehension and verbal engagement in storybook reading with a conversational agent versus an adult. The conversational agent’s guided conversation was found to be as supportive in improving children’s story comprehension as that provided by an adult language partner. At the same time, this study uncovered a number of differences in children’s verbal engagement when interacting with a conversational agent versus with an adult. Specifically, children who read with the conversational agent responded to questions with better intelligibility, whereas those who read with an adult responded to questions with higher productivity, lexical diversity, and topical relevance. And the two groups responded to questions with a similar level of accuracy. In addition, questions requiring high cognitive demand amplified the differences in verbal engagement between the conversational agent and adult partner. The study offers important implications for developing and researching conversational agent systems to support children’s language development.
Article
Full-text available
The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies that have been conducted so far exhibit considerable diversity in terms of how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogenous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states that appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up for a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for investigation.
Article
The current size of the market for domestic robots is smaller than expected, despite the rapid advance in robotic technologies. On the basis of the previous literature, we attempt to make a distinction between two design approaches for domestic robots: organism- versus object-based robot designs. This research investigates the effects of these domestic robot design approaches on consumer acceptance. Encompassing the theories of Human–Robot Interaction, design, and marketing, we predict that object-based robot design will be more effective than organism-based robot design for consumers’ evaluation of and intent to purchase domestic robots. We also predict that the categorization of robots will mediate the effects of robot design approaches on the evaluation. Two studies using two types of robots were conducted, and the results supported the hypotheses.
Conference Paper
Relational artifacts (human-dependent) should have two aspects of subjective effects: Rorschach and evocative. During interaction, a robot has to anticipate the state (relationship) of the interactive person from the emotional to cognitive level to convey its Rorschach response. Consequently, the robot should behave as an evocative object to indicate the characteristic of animacy, which should be accomplished using a potentially interactive architecture to coordinate the Rorschach and evocative effects. In this paper, we present two kinds of relational artifacts – a sociable trash box (STB) and a Talking-Ally.
Article
Anthropomorphism is a phenomenon that describes the human tendency to see human-like shapes in the environment. It has considerable consequences for people’s choices and beliefs. With the increased presence of robots, it is important to investigate the optimal design for this technology. In this paper we discuss the potential benefits and challenges of building anthropomorphic robots, from both a philosophical perspective and from the viewpoint of empirical research in the fields of human-robot interaction and social psychology. We believe that this broad investigation of anthropomorphism will not only help us to understand the phenomenon better, but can also indicate solutions for facilitating the integration of human-like machines in the real world.
Conference Paper
Robots are expected to become present in society in increasing numbers, yet few studies in human-robot interaction (HRI) go beyond one-to-one interaction to examine how emotions, attitudes, and stereotypes expressed toward groups of robots differ from those expressed toward individuals. Research from social psychology indicates that people interact differently with individuals than with groups. We therefore hypothesize that group effects might similarly occur when people face multiple robots. Further, group effects might vary for robots of different types. In this exploratory study, we used videos to expose participants in a between-subjects experiment to robots varying in Number (single or group) and Type (anthropomorphic, zoomorphic, or mechanomorphic). We then measured participants' general attitudes, emotions, and stereotypes toward robots with a combination of measures from HRI (e.g., Godspeed Questionnaire, NARS) and social psychology (e.g., Big Five, Social Threat, Emotions). Results suggest that the Number and Type of observed robots had an interaction effect on responses toward robots in general, leading to more positive responses toward groups for some robot types, but more negative responses for others.
Article
This paper shows how cultural models relating to the display, perception, and experience of emotion are reflected in social robot design through a comparative analysis of social robotics in Japan and the US. A more implicit approach to emotional expression in Japanese robotics is related to community-oriented social practices and existing cultural forms such as Noh theatre, which encourages situational interpretation of neutral facial expressions, and interdependent notions of self. Western robot designs, in contrast, display more explicit expressions of emotion that may be related to an independent definition of the self. I review the literature on cultural models of affect to identify relevant themes, which I use to analyze historical and contemporary uses of cultural models in technology design. I conclude by suggesting possible research directions and design implications for cross-cultural robotics. By tracing particular cultural models of affect as they are embodied in technological artifacts, we gain a new perspective on the repeated assembly of culture through technology.
Article
This paper describes our approach to designing, developing behaviors for, and exploring the use of, a robotic footstool, which we named the mechanical ottoman. By approaching unsuspecting participants and attempting to get them to place their feet on the footstool, and then later attempting to break the engagement and get people to take their feet down, we sought to understand whether and how motion can be used by non-anthropomorphic robots to engage people in joint action. In several embodied design improvisation sessions, we observed a tension between people perceiving the ottoman as a living being, such as a pet, and simultaneously as a functional object, which requests that they place their feet on it-something they would not ordinarily do with a pet. In a follow-up lab study (N=20), we found that most participants did make use of the footstool, although several chose not to place their feet on it for this reason. We also found that participants who rested their feet understood a brief lift and drop movement as a request to withdraw, and formed detailed notions about the footstool's agenda, ascribing intentions based on its movement alone.
Article
In designing and evaluating human-robot interactions and interfaces, researchers often use a simulated robot due to the high cost of robots and time required to program them. However, it is important to consider how interaction with a simulated robot differs from a real robot; that is, do simulated robots provide authentic interaction? We contribute to a growing body of work that explores this question and maps out simulated-versus-real differences, by explicitly investigating empathy: how people empathize with a physical or simulated robot when something bad happens to it. Our results suggest that people may empathize more with a physical robot than a simulated one, a finding that has important implications on the generalizability and applicability of simulated HRI work. Empathy is particularly relevant to social HRI and is integral to, for example, companion and care robots. Our contribution additionally includes an original and reproducible HRI experimental design to induce empathy toward robots in laboratory settings, and an experimentally validated empathy-measuring instrument from psychology for use with HRI.