
Abstract

Congruence of verbal and nonverbal behavior of a robot is essential to establish a seamless and natural interaction between humans and robots. We have investigated the preference and user experience of the senior participants that a robot approaches with expressive movement that is tailored or incongruent to the message the robot brings. Twelve elderly experienced three scenarios with different messages varying from good news, bad news, and functional news, accompanied by the corresponding approaching behavior and body posture. The user experience of these elderly was evaluated during these scenarios using a questionnaire and a semi-structured interview. The participants were interviewed about their experience, preference, and motivation for the different robot motion behaviors. The analysis showed no significant difference in user experiences in the good news and functional news scenarios. Also, no clear preference among the different robot behaviors was found. However, a significant preference for congruence in the sad news scenario was found with a clear preference for the sad approaching behavior.
I. INTRODUCTION
Over the coming years, the proportion of elderly in the
world population will rapidly increase [1], with an increase
in care demand as a result [2]. Assistive technology can
provide a solution that accommodates both the elderly and
healthcare professionals in delivering efficient, high-quality
care [3]. Robots are a promising example of
assistive technology that can support care and independence
in different ways [3]. They are already successfully being
used in different applications for the elderly in supporting
independent living. An important condition for this is the
acceptance of robots as companions in their domestic
environment [4], [5]. A contributing factor to the long-term
acceptance of a social and assistive robot is its social skills
[5], [6], such as the ability to convey a clear purpose for the
robot [7], to evoke likability [5], and to use different
modalities for expression [8]. These skills should resonate
with the user's preferences to increase acceptance [3].
An important aspect of robot behavior is moving in space
and approaching humans. Several studies have been
conducted on users' preferences for different angles of
approach (frontal or at a slight angle), proximity, and
social distance depending on the social context and the
spatial zones around the robot [4], [9]. Even though these
studies look at the user's preference regarding approaching
behavior, they do not take into consideration what emotional
and social values are linked to engaging in these social
interactions [10], [11]. This is a
missed opportunity since emotions serve a critical role in the
coordination of social interaction as they provide an incentive
for promoting social relationships [8], [12], [13].
Human emotions and intentions are usually
communicated nonverbally through gaze, facial expressions,
gestures, and body language [10], [14]. Emotional intent is
conveyed through this communication by means of motion,
defined as the change of position over time. This refers both
to gross movement trajectories (global movement) and to the
movement of a body part, such as an arm making a gesture
(local movement) [15]. Prominent
features for displaying expressivity in motion are speed and
acceleration [9], [10], [16]. Similarly to a human, a humanoid
robot can display expressivity through arm gestures, body and
head movement, eye gaze, and gross movement trajectories
[9], [17], whereof head movement is most easily recognizable
in conveying emotional intent [17], [18]. Through applying
similar behavior strategies as humans do, a robot can use this
intuitive understanding of familiarity in motion to provide a
common ground for an understanding of and communication
with humans [17].
A commonly used method to apply these motion
familiarities is Laban Movement Analysis (LMA). Work by
Saerbeck and Bartneck [9] used the features of acceleration
(difference in speed) and curvature (difference in direction)
to convey emotional intent through the motion of a Roomba
robot. Other research [10], [19] used LMA to design
expressive robot movement. These
studies found that acceleration has the highest impact on the
perceived affect. Motion trajectories can be generalized and
applied to multiple embodiments: Barakova and Lourens [10]
showed that the movement, and not the anthropomorphic
shape, was the carrier of the affective component. These two
studies [9], [10] also showed that changes in speed and
direction affect the perceived affective state of a robot. More
recent research by Cui, Maguire, and LaViers [19] presents
how the LMA framework is used to shape expressive
movement patterns for aerial robots for the elderly in a home
environment to show intention. They used the LMA features
weight, time, and space to shape the expressive flying patterns
of the robot. However, they did not evaluate which emotional
intent the expressive behavior conveyed, nor did they
evaluate it in a social context.
Preferences of Seniors for Robots Delivering a Message With
Congruent Approaching Behavior
M.T.H. van Otterdijk, M.M.E. Neggers, J. Torresen, and E.I. Barakova.
2021 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)
Virtual Conference, July 8-10, 2021
978-1-6654-4952-6/21/$31.00 ©2021 IEEE
Furthermore, when expressing emotions in interaction, it
is essential to consider the social context of the interaction.
This context provides the receiver with information about the
sender's motives. That is why expressing emotions could be
regarded as the expression of social motives and not as
emotions on their own [20]. This information about the social
environment influences the emotional perception of the
receiver [21]. This was also found in research by Tsiourti,
Weiss, Wac, and Vincze [18], who looked at using different
motion features for displaying emotions. They used
expressive head and body movement combined with
locomotion expressions, and asked participants between the
ages of 18 and 69 to identify the robot's emotional states of
sadness, happiness, and surprise from video footage of a
Hobbit and a Pepper robot in a context-free environment.
They mention that situational information can influence the
attribution of emotion to motion.
Enhancing the robot behavior with emotional intent may
increase the naturalness and effectiveness of the interaction
[14], and as a result, enhance the relationship with the user
[22]. This is important for the acceptance of robots in a home
context, especially if they are to be used to aid the elderly
in the future. However, there does not seem to be clarity
on which expressive behavior is preferred by specific users
when being approached by a robot in a given context.
Therefore, this research investigates the preferences of
elderly users regarding different expressive approaching
behaviors of a social robot in a home environment.
A. Multimodal Elderly Care System
This research is part of the Multimodal Elderly Care
Systems (MECS) project [23]. This project focuses on using
the user-centered design of a robotic system in the
development of assistive technology. A part of the project
focuses on performance and privacy improvements by
applying sensors like cameras on a robot companion rather
than having permanently mounted ones in a home. Even
though many systems have been designed for the elderly, only
a few have been adopted on a larger scale. We expect that
limited user involvement and little user-tested iteration are
reasons for this. The current study explores the congruence
of the expressive behavior with the conveyed message as a
factor in the user's acceptance.
B. Hypothesis and Expected Outcomes
We hypothesize that the elderly will prefer the robot
approaching them with movement trajectories and body
language, referred to as expressive movement tailored to the
message the robot brings them. Moreover, we expect the
elderly to have a better user experience if the robot behavior
is tailored to the message it brings. We base this hypothesis
on people's preference for congruency in behavior in a social
context [8], [20]. The expected outcome of this research is
to provide guidelines for the further improvement of robots'
assistive behaviors in a home environment for the elderly,
particularly on how expressive robot behavior should look to
provide the best user experience and which features the
elderly consider important for conveying this expressivity.
The rest of the paper is organized as follows: Section II
discusses the methodology with details about the experiment
design, while Section III reports the results and findings from
the experiments. Finally, Section IV discusses the results and
concludes the paper.
II. METHODOLOGY
A. Participants
A total of twelve participants were recruited and
participated in the experiment. The inclusion criteria for the
study were: a) the participant should be aged above 60 years;
b) the participant should speak Dutch; and c) the participant
should not have a severe mental disability. The selected
participants had the following characteristics: 5 male and 7
female, with a mean age of 81.75 years (SD = 8.164); 5 of
them had prior experience with a robot and 7 did not.
B. Setting
This study was conducted at a care facility for the elderly
in Eindhoven, the Netherlands, Vitalis Berckelhof. The
experiments were held in a public living space inside this
residential home. This location was selected as it provides the
best representation of the living environment of the elderly,
offering a test environment that closely resembles their home.
To ensure focus on the experiments, the participants were
seated in a far-off corner of this public space, as it provided
the fewest distractions from the robot scenarios. Each
participant was seated in a chair at the far end of a table, with
the researcher sitting at the opposite end, so the participants
could see and converse with the researcher. This layout
followed COVID-19 prevention measures, ensuring a
distance of at least 1.5 meters. The robot approached each
participant from the participant's right side, so that the
participant would naturally turn towards the robot upon
noticing movement. The starting position of the robot
was about 2.5 meters away from the table. Pictures of the
setting of the experiment are shown in Fig. 1a & 1b.
Figures 1a & 1b. Experiment setup: on the left, the arrangement of the
chairs; on the right, the robot's starting position. The researcher sat on the
left side of the table and the participant on the right side. The robot stood
at a distance of 2.5 meters.
C. Design and Materials
1) Selected Robot and Control
During this study, the Pepper robot was used. Pepper is a
humanoid robot created by SoftBank Robotics. It can engage
in interaction with humans through speech and movement.
Moreover, it can safely move on its own because of its sensors
and cameras. Additionally, it can use features such as the
head, arms, and torso to display non-verbal communication to
enhance its communication [24]. Because of these
capabilities, the Pepper robot was selected for the
experiments. The robot was programmed in Python using the
pepper_nocv_2_0 library and, during the experiment, was
controlled in a Wizard-of-Oz setup.
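The control logic can be made concrete with a minimal sketch. This is an illustrative stub only, not the study's pepper_nocv_2_0 / Wizard-of-Oz code: a recording stand-in replaces the real robot interface so that the mapping from emotion to head position, eye-LED color, and speed (described in Section II-C.3 below) is explicit. The head-pitch angles and the neutral LED color are our assumptions; the speeds (0.55/0.33/0.11 m/s) and the yellow/blue LED colors are taken from the paper.

```python
# Emotion -> expressive parameters. Speeds and happy/sad LED colors are from
# the paper; head-pitch values (rad, positive = downward) and the neutral LED
# color are illustrative assumptions.
BEHAVIORS = {
    "happy":   {"head_pitch": -0.20, "led": "yellow", "speed": 0.55},
    "neutral": {"head_pitch":  0.00, "led": "white",  "speed": 0.33},
    "sad":     {"head_pitch":  0.25, "led": "blue",   "speed": 0.11},
}


class RecordingRobot:
    """Stand-in for a real robot interface; records the issued commands."""

    def __init__(self):
        self.log = []

    def set_head_pitch(self, radians):
        self.log.append(("head_pitch", radians))

    def set_eye_leds(self, color):
        self.log.append(("leds", color))

    def move_forward(self, distance_m, speed_mps):
        self.log.append(("move", distance_m, speed_mps))


def approach(robot, emotion, distance_m=2.5):
    """Issue one expressive approach: posture first, then LEDs, then motion."""
    b = BEHAVIORS[emotion]
    robot.set_head_pitch(b["head_pitch"])
    robot.set_eye_leds(b["led"])
    robot.move_forward(distance_m, b["speed"])
```

For example, `approach(robot, "sad")` issues a downward head pitch, blue eye LEDs, and a slow 0.11 m/s approach over the 2.5 m starting distance.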
2) Scenarios
The robot was used in three different scenarios. In the
first scenario, the robot would approach the elderly to bring
good news; this news was fictional and revolved around the
participant getting a new grandchild. In the second scenario,
the robot would approach the elderly to bring bad news, also
fictional, revolving around the robot accidentally breaking a
cup. In the third scenario, the robot would approach the
elderly to remind them to take their medication and ask
whether it could assist them in getting it. These scenarios
were created because each provides a distinctly different
social context in which the robot approaches the elderly.
Each scenario starts with the robot standing up straight and
ends when the robot has delivered its news.
3) Designing Expressive Robot Behavior
For each scenario, three different approaching behaviors
for the robot were designed. These approaching behaviors
include a movement trajectory and expressive body language
showing a neutral, sad, or happy expression. This involves an
expressive position of the head, movement speed, and
expressive trajectory. The expressive head position of the
robot was modeled after positions shown in the work of
Tsiourti et al. [18]: a slightly raised head for the happy, a
straight head position for the neutral, and a downward-facing
head for the sad robot expression. Additionally, to add an
extra layer of distinction and expressivity, an eye LED color
was added to the head positions: yellow for the happy head
position and blue for the sad head position, as those colors
are associated with these emotions [25]. These are shown in
Fig. 2a to 2c.
To add expressivity to the trajectory, the first step was to
define a different movement speed per emotion, following the
work by Tsiourti et al. [18]. The robot with
the happy head position was assigned the fastest speed of
0.55m/sec. The robot with the neutral head position was
assigned a slower speed of 0.33m/sec, and the robot with the
sad head position was assigned the slowest speed of
0.11m/sec. For the design of the expressive trajectory of the
robot, inspiration was drawn from work by Saerbeck and
Bartneck [9]. These trajectories were previously used for a
Roomba robot. Therefore, adaptations to the trajectories were
made so they would be suitable as approaching behavior for
Pepper and would cover a 2.5-meter distance. This resulted
in the trajectory patterns as shown in Fig. 3a to 3c. These
trajectories combined with the expressive head positions
were video recorded and evaluated with one peer researcher
and four people outside the project, two of whom fell within
the target group of the study. This evaluation served as a
pre-test of the robot behavior and as a way to obtain feedback
on the design, which was then incorporated into the robot
behavior. All these behaviors were used in every scenario,
their order was randomized within each scenario, and all
participants were shown all the different robot behaviors
during the study.
Figures 2a to 2c. Expressive Head Position of the robot. From left to right:
Happy Head Position, Neutral Head Position and Sad Head Position.
Figures 3a to 3c. Expressive Movement Trajectories of the robot. From left
to right: Happy Movement Trajectory, Neutral Movement Trajectory and
Sad Movement Trajectory.
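The speed assignment above fixes how long each approach takes over the 2.5-meter starting distance, and a curvature offset can distinguish the trajectories. The sketch below is a rough illustration under stated assumptions: the lateral sine sway and its amplitudes are ours, since the exact trajectory shapes of Fig. 3 are only given graphically; the distance and speeds come from the paper.

```python
import math

DISTANCE_M = 2.5  # starting distance of the robot (from the paper)
SPEEDS = {"happy": 0.55, "neutral": 0.33, "sad": 0.11}  # m/s (from the paper)


def approach_duration(emotion):
    """Time to cover the approach at the emotion's constant speed."""
    return DISTANCE_M / SPEEDS[emotion]


def waypoints(emotion, n=50):
    """Sample (x, y) points along the approach: a straight line for neutral,
    a lateral sine sway otherwise. Amplitudes are illustrative assumptions,
    not data from Fig. 3."""
    amplitude = {"happy": 0.30, "neutral": 0.0, "sad": 0.15}[emotion]
    return [
        (DISTANCE_M * i / n, amplitude * math.sin(2 * math.pi * i / n))
        for i in range(n + 1)
    ]
```

The durations differ sharply: roughly 4.5 s for the happy approach, 7.6 s for the neutral one, and 22.7 s for the sad one, which is the speed contrast that participants later commented on.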
4) Measurements
To analyze the user experience of the participants
regarding the different approaching behaviors, a
questionnaire was used, based on the RoSAS questionnaire
[26]. The RoSAS is based on the Godspeed questionnaire and
is used for assessing the user's perception of the robot's
attributes; it was selected as a foundation because it provides
more reliable results than Godspeed [26]. The RoSAS uses
the scales discomfort, warmth, and competence; the
competence scale was excluded from the questionnaire as it
was not appropriate for this research. A shortened version of
the RoSAS was created because the respondents were asked
to answer the questions nine times, and a long questionnaire
would be excessive and could result in less accurate answers.
The created questionnaire
consisted of three recurring questions related to comfort (c),
appropriateness of behavior (s), and warmth of the robot's
appearance (w). Each question was answered on a seven-point
Likert scale. The comfort scale runs from 1 (= very
uncomfortable) to 7 (= very comfortable). The appropriateness
scale runs from 1 (= very inappropriate) to 7 (= very
appropriate). The warmth scale runs from 1 (= very distant) to
7 (= very friendly). Participants were asked to fill in a score
after each approach variant; so, per scenario, they answered
these questions three times, for a total of nine times.
Answering these three questions took approximately 2
minutes.
In addition to answering the questionnaire, semi-
structured interviews were held after every scenario. These
were held to get an impression of the motivations for the
answers given in the questionnaire. The questions asked in the
interview were related to the preference of the participant for
an approaching behavior, the motivation for this preference,
why the other behaviors were enjoyed less, and how the
interaction could be improved. These interviews lasted
between 5 and 10 minutes. Additionally, the participants were
asked some general questions related to their experience, the
importance of the robot behavior in relation to its message,
and which features are important for conveying emotional
intent. This part lasted between 10 and 15 minutes.
D. Procedure
Prior to the research, the project was approved by the
Ethical Board of the Eindhoven University of Technology
with approval number ERB2020ID185.
Before the experiment, potential participants were
approached at random by the researcher in a shared space and
invited to participate. When approached, the elderly received
verbal information on the outline and aim of the study and
were introduced to the robot. Those interested in participating
received written information about the study, and each willing
participant was asked to give consent. After this, participants
were randomly assigned an order of robot approaching
behaviors by the researcher.
During the experiment, the participants were shown three
approaching behaviors by a robot with movement trajectories
and expressive body language. After experiencing an
approaching behavior, the participant was asked to fill in the
questionnaire. After experiencing all three robot behaviors of
a single scenario, the participants were briefly interviewed.
The total time spent on each experiment was between 30 and
45 minutes.
E. Data Analysis
To investigate the assumption that the elderly have a better
user experience when approached by a robot with expressive
behavior (movement trajectories and body language) tailored
to the message the robot brings them, a repeated-measures
ANOVA was used for each scale individually. We
hypothesize that in Scenario 1 (good news scenario), C_happy,
S_happy, and W_happy will score highest compared to
C_neutral, S_neutral, and W_neutral and to C_sad, S_sad, and
W_sad; we expect the sad scores to be the lowest in this
scenario. Similarly, the assumption is that in Scenario 2 (sad
news scenario), C_sad, S_sad, and W_sad will score highest
compared to the same scales for the Happy and Neutral robot
behaviors; here we expect the happy scores to be lower than
the neutral ones. In Scenario 3 (functional news scenario), the
final assumption is that C_neutral, S_neutral, and W_neutral
will score highest compared to the same scales for the Happy
and Sad robot behaviors.
Additionally, the data gathered from the interviews was
used to investigate the hypothesis that the elderly prefer to be
approached by a robot with expressive behavior (movement
trajectories and body language) tailored to the message the
robot brings them, and to investigate which robot features are
important in successfully conveying emotion. The interviews
were transcribed and analyzed using thematic analysis,
conducted with an inductive approach as it allows the data to
determine the themes and ensures that no details are missed.
The coding was done manually.
The codes used in the analysis are Preference, Motivation Pro
(proximity, impression, body posture, gesture, voice, speed,
and head position), Motivation Con (impression, speed, and
head position), Features (speed, body posture, voice, head
expression, gestures, and impression), Showing Emotion and
Matching Message with Behavior.
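The per-scale repeated-measures ANOVA can be made concrete with a minimal sketch. This is not the authors' analysis script (they likely used a statistics package); it hand-computes the one-way within-subjects F statistic whose degrees of freedom, F(2, 22), match the reported tests for 12 participants rating 3 behaviors, and omits the normality and sphericity checks mentioned in Section III.

```python
def rm_anova(scores):
    """One-way repeated-measures ANOVA.
    scores: one list per participant, one rating per condition.
    Returns (F, df_conditions, df_error)."""
    n = len(scores)      # participants (12 in the study)
    k = len(scores[0])   # within-subject conditions (3 behaviors)
    grand = sum(sum(row) for row in scores) / (n * k)

    # Partition total variability into subjects, conditions, and residual.
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_subjects = k * sum((sum(row) / k - grand) ** 2 for row in scores)
    cond_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_conditions = n * sum((m - grand) ** 2 for m in cond_means)
    ss_error = ss_total - ss_subjects - ss_conditions

    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f = (ss_conditions / df_cond) / (ss_error / df_err)
    return f, df_cond, df_err
```

With 12 participants and 3 conditions this yields df = (2, 22), matching the F(2, 22) values reported in Section III.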
III. RESULTS
A. H1: Preference in Approaching Behavior
For the first scenario (Good News Scenario), the majority
of the participants indicated a preference for the NAB
(Neutral Approaching Behavior) or the HAB (Happy
Approaching Behavior). Participants who preferred the NAB
mentioned that the robot appeared friendly because of its
body posture and speed. Participants who preferred the HAB
based the friendly appearance of the robot on the amount of
eye contact, proximity, and quick movement:
"He approached faster and stopped closer, which made him
sound nice, and the voice seemed nicer as well." (R10). The
frequency of the preferences for the different approaching
behaviors in the Good News Scenario is shown in Figure 4.
For the second scenario (Bad News Scenario), the
majority of the participants indicated a preference for the
SAB (Sad Approaching Behavior), based on the robot's
appearance and head position, which made the participants
think the robot was showing regret for breaking the cup:
"He approaches very hesitantly and does not dare to look at
anyone, continues to look down." (R12). The frequency of
the preferences for the different approaching behaviors in
Scenario 2 is shown in Figure 5.
Figure 4. Frequency of Preference of Approaching Behavior in the Good
News Scenario (Scenario 1). The Participants Preferred the Happy News to
be Delivered with the Happy or Neutral Nonverbal Behavior.
Figure 5. Frequency of Preference of Approaching Behavior in the Bad
News Scenario (Scenario 2). The Participants Preferred the Bad News to be
Delivered with Sad Nonverbal Behavior.
Figure 6. Frequency of preference of Approaching Behavior in the
Functional News Scenario (Scenario 3). The Participants did Not Have a
Preference in the Nonverbal Behavior of the Robot.
For the third scenario, the majority of the participants did
not have a preference for any of the approaching behaviors,
as shown in Fig. 6. Five of these participants expressed that
they could not choose between the HAB and NAB; all of
them mentioned the importance of the efficient conveyance
of the message, demonstrated when the robot directly
explains the motive for its approach. This explanation was
supported by two other participants: "I think, if you have
something to say, then you should just go ahead and say
what's up with a raised and plain face." (R2). One participant
provided a motivation related specifically to the scenario,
mentioning that he/she did not care about the approach as the
robot should not touch their medication. Table 1 summarizes
all participants' motivations to either prefer or dislike an
approaching behavior of the robot.
TABLE I. SUMMARY OF MOTIVATIONS FOR SELECTED PREFERENCES
OF PARTICIPANTS ACROSS THE DIFFERENT SCENARIOS

HAB (Happy Approaching Behavior)
- Positive: friendly appearance of the robot (eye contact, proximity, and
movement speed)
- Negative: upward head position (social distance, lack of eye contact);
body posture (arrogant); speed (too enthusiastic)*

NAB (Neutral Approaching Behavior)
- Positive: friendly and clear appearance of the robot (body posture and
movement speed)
- Negative: lack of eye contact; proximity (too close in personal space)

SAB (Sad Approaching Behavior)
- Positive: friendly appearance of the robot (voice intonation and
bent-forward body posture); showing regret*; speed (matching the user's
tempo)**
- Negative: downward head position (disinterest, hesitance, social
distance, no eye contact)

* = only applicable for Scenario 2
** = only applicable for Scenario 3
B. H2: User Experience of the Approaching Behaviors
1) Scenario 1 Bringing Good News
Normality and sphericity assumptions were not violated
for the three variables in Scenario 1. A repeated-measures
ANOVA showed that the elderly did not find any of the
approaching behaviors more comfortable than the others,
F(2, 22) = .144, p = .866, partial η² = 0.13. Additionally, it
showed that the elderly did not find any of the approaching
behaviors more appropriate than the others, F(2, 22) = .623,
p = .546, partial η² = .054. Finally, it showed that the elderly
did not find any of the approaching behaviors warmer than
the others, F(2, 22) = 2.369, p = .117, partial η² = .117.
2) Scenario 2 Bringing Bad News
Normality and sphericity assumptions were not violated
for the three variables in Scenario 2. A repeated-measures
ANOVA showed that the elderly did not find any of the
approaching behaviors more comfortable than the others,
F(2, 22) = 1.953, p = .166, partial η² = 0.151. However, it
showed that the elderly did find one of the approaching
behaviors more appropriate than the others, F(2, 22) = 7.805,
p = .003, partial η² = .415. Pairwise comparisons further
revealed that the SAB (M = 5.083, SD = .645) was rated as
significantly more appropriate than the NAB (M = 4.917,
SD = .514) and the HAB (M = 3.667, SD = .632); the NAB
was also rated significantly higher than the HAB.
Finally, it showed that the elderly did find one of the
approaching behaviors warmer than the others, F(2, 22) =
5.923, p = .009, partial η² = .350. Pairwise comparisons
further revealed that the SAB (M = 5.167, SD = .562) was
perceived as significantly warmer than the NAB (M = 4.833,
SD = .490) and the HAB (M = 3.500, SD = .500); the NAB
was also rated significantly higher than the HAB.
3) Scenario 3 Functional News
Normality and sphericity assumptions were not violated
for the three variables in Scenario 3, except for the warmth
scale; hence, the Huynh-Feldt epsilon correction was used for
interpreting the ANOVA results. A repeated-measures
ANOVA showed that the elderly did not find any of the
approaching behaviors more comfortable than the others,
F(2, 22) = .138, p = .872, partial η² = 0.12. Additionally, it
showed that the elderly did not find any of the approaching
behaviors more appropriate than the others, F(2, 22) = 1.553,
p = .234, partial η² = .124. Finally, it showed that the elderly
did not find any of the approaching behaviors warmer than
the others, F(1.192, 9.539) = 1.857, p = .207, partial η² = .188.
IV. DISCUSSION
This research investigated the preference for congruence
of expressive approaching behaviors by a social robot with
different social contexts of delivering messages that can
provoke different emotional reactions in senior citizens.
Moreover, we looked at whether the user experience of the
elderly was influenced by the different expressive
approaching behaviors of the robot. We hypothesized that the
elderly would prefer, and have a better user experience with,
expressive robot behavior that is congruent with the message
it brings them.
Our results partially rejected the hypothesis that the
elderly prefer congruence between the nature of the news and
the corresponding expressive movement: no significant
difference in perception was found in the good news and
functional news scenarios, whereas a significant preference
for congruence was found in the sad news scenario. The null
results contradict findings reported in [8], [20], which found
that people prefer congruence in behavior related to the social
context of an interaction. The most plausible explanation for
our results is that the senior citizens perceived the neutral and
happy body postures as similar, despite the designed
behaviors of the robots being different. This was concluded
from both the questionnaire and
the interviews. The difference between neutral and happy
bodily expression could be improved in future studies, for
instance, by including gestures and by pretesting whether
these expressive behaviors are perceived as different by the
tested user group. Moreover, we could research whether other
user groups also experience the neutral and happy behaviors
as identical, to eliminate the influence of age. The hypothesis
was confirmed for the sad news scenario, which is in line with
research by [8], [20]. The research reported in [8] and [20]
was not performed with robots, so further design iterations on
the expressive robot behaviors could lead to more consistent
results. An alternative explanation is that sad behavior is
easier to notice in general, and especially by this user group,
since people are alerted when something is wrong.
Moreover, during the interviews, it became clearer that the
participants did not actively perceive the different trajectories
of the robot's approaching behavior. This could indicate that
people perceive expressive behavior holistically. This would
be in line with findings reported by [27] that report that people
use both the input from faces and body language
simultaneously when creating a judgment of an emotion.
By contrast, non-anthropomorphic robots were used in [9],
[10], [13]; since the movement was the only cue to evaluate,
the participants could easily recognize the emotion expressed
by it. Most of the participants did notice the difference in the
speed of the robot but did not connect it to the robot's
expressivity. As the authors of [10], [13] argue, not merely
the speed but the acceleration patterns are carriers of
emotional expression; in this study, we simplified the
dynamic expression of emotion to speed rather than the
acceleration patterns recommended in [10], [13], [15].
In future studies, it would be interesting to analyze the
body posture of the participants when interacting with a robot
with expressive approaching behavior to see whether they
unconsciously match the expression of the robot. Since
humans tend to mimic the behavior of their conversation
partner [8], such behavior could be an indication of an
unconscious higher engagement with a happy robot. Our
findings are in line with the conclusions made by other
researchers that point out the impact of the head position of
the robot on identifying emotional intent [17], [18].
There are several limitations to this study. The first is the
relatively small sample size, which makes it hard to draw
firm conclusions on the elderly participants' preferences
regarding the different approaching behaviors in general, as
some statistical tests require more data. A second limitation
is the cognitive capabilities of some participants: three of
them had mild dementia-related symptoms, which affected
their recollection and therefore made the interview outcomes
less reliable. We expect this to have a limited impact on the
acquired data because the questionnaire was filled in right
after the interaction. A third limitation is that half of the
participants had prior experience with a robot, which could
have influenced the results. To limit this influence, we
included the interviews to gain a better understanding of the
reasoning behind the questionnaire answers.
V. CONCLUSION
We investigated the preference and user experience of
elderly users regarding the expressive approaching behavior
of a robot in a home context, where the behavior matched the
projected emotional effect of its message. Our results showed
no significant preference or improved user experience in the
happy and functional news scenarios. However, we did find
significant results when the robot approached a participant
with sad expressive approaching behavior while bringing sad
news. These results can help robot developers effectively
shape congruent approaching behavior for robots. However,
additional research into the possible reasons for these
findings, as discussed in the previous section, is needed.
ACKNOWLEDGMENT
This work is partially supported by The Research Council
of Norway (RCN) as a part of the projects Multimodal
Elderly Care Systems (MECS) under grant agreement no.
247697, Vulnerability in the Robot Society (VIROS) under
grant agreement no. 288285, and Predictive and Intuitive
Robot Companion (PIRC) under grant agreement no. 312333,
and through its Centre of Excellence scheme, RITMO, with
project no. 262762. The authors thank all the elderly who
participated in the experiment, as well as the health care
professionals of the elderly home Vitalis Berkenhoff for
their time, aid, and hospitality, which made the experiment
possible.
REFERENCES
[1] United Nations, Department of Economic and Social Affairs, and
Population Division, “World Population Prospects 2019
Highlights.”
https://population.un.org/wpp/Publications/Files/WPP2019_Highlights.pdf
(accessed Sep. 28, 2020).
[2] A. Vercelli, I. Rainero, L. Ciferri, M. Boido, and F. Pirri,
“Robots in Elderly Care,” Sci. J. Digit. Cult., 2017, doi:
10.4399/97888255088954.
[3] S. Bedaf, P. Marti, and L. De Witte, “What are the preferred
characteristics of a service robot for the elderly? A multi-country
focus group study with older adults and caregivers,” Assist.
Technol., 2019, doi: 10.1080/10400435.2017.1402390.
[4] P. A. M. Ruijten and R. H. Cuijpers, “Stopping distance for a
robot approaching two conversating persons,” in RO-MAN 2017
- 26th IEEE International Symposium on Robot and Human
Interactive Communication, 2017, doi:
10.1109/ROMAN.2017.8172306.
[5] A. S. Ghazali, J. Ham, E. Barakova, and P. Markopoulos,
“Persuasive Robots Acceptance Model (PRAM): Roles of Social
Responses Within the Acceptance Model of Persuasive Robots,”
Int. J. Soc. Robot., 2020, doi: 10.1007/s12369-019-00611-1.
[6] K. Dautenhahn, “Socially intelligent robots: Dimensions of
human-robot interaction,” in Philosophical Transactions of the
Royal Society B: Biological Sciences, 2007, doi:
10.1098/rstb.2006.2004.
[7] M. M. A. De Graaf, S. Ben Allouch, and J. A. G. M. Van Dijk,
“Long-term acceptance of social robots in domestic
environments: Insights from a user’s perspective,” in AAAI
Spring Symposium - Technical Report, 2016.
[8] S. M. Jones and J. G. Wirtz, “‘Sad monkey see, monkey do:’
Nonverbal matching in emotional support encounters,” Commun.
Stud., 2007, doi: 10.1080/10510970601168731.
[9] M. Saerbeck and C. Bartneck, “Perception of affect elicited by
robot motion,” in 5th ACM/IEEE International Conference on
Human-Robot Interaction, HRI 2010, 2010, doi:
10.1145/1734454.1734473.
[10] E. I. Barakova and T. Lourens, “Expressing and interpreting
emotional movements in social games with robots,” Pers.
Ubiquitous Comput., 2010, doi: 10.1007/s00779-009-0263-2.
[11] G. Venture and D. Kulić, “Robot Expressive Motions: A Survey
of Generation and Evaluation Methods,” ACM Trans. Human-Robot
Interact., 2019, doi: 10.1145/3344286.
[12] D. Keltner and A. M. Kring, “Emotion, social function, and
psychopathology,” Rev. Gen. Psychol., 1998, doi: 10.1037/1089-
2680.2.3.320.
[13] C. Herrera Perez and E. I. Barakova, “Expressivity Comes First,
Movement Follows: Embodied Interaction as Intrinsically
Expressive Driver of Robot Behaviour,” in Modelling Human
Motion, Cham: Springer, 2020, pp. 299–313.
[14] E. Mwangi, E. I. Barakova, M. Díaz-Boladeras, A. C. Mallofré,
and M. Rauterberg, “Directing Attention Through Gaze Hints
Improves Task Solving in Human–Humanoid Interaction,” Int. J.
Soc. Robot., 2018, doi: 10.1007/s12369-018-0473-8.
[15] T. Schulz, J. Herstad, and J. Torresen, “Classifying Human and
Robot Movement at Home and Implementing Robot Movement
Using the Slow In, Slow Out Animation Principle,” Int. J. Adv.
Intell. Syst., 2018.
[16] F. E. Pollick, H. M. Paterson, A. Bruderlin, and A. J. Sanford,
“Perceiving affect from arm movement,” Cognition, 2001, doi:
10.1016/S0010-0277(01)00147-0.
[17] S. Saunderson and G. Nejat, “How Robots Influence Humans: A
Survey of Nonverbal Communication in Social Human–Robot
Interaction,” Int. J. Soc. Robot., 2019, doi: 10.1007/s12369-019-
00523-0.
[18] C. Tsiourti, A. Weiss, K. Wac, and M. Vincze, “Designing
emotionally expressive robots: A comparative study on the
perception of communication modalities,” in HAI 2017 -
Proceedings of the 5th International Conference on Human
Agent Interaction, 2017, doi: 10.1145/3125739.3125744.
[19] H. Cui, C. Maguire, and A. LaViers, “Laban-inspired task-
constrained variable motion generation on expressive aerial
robots,” Robotics, 2019, doi: 10.3390/ROBOTICS8020024.
[20] U. Hess and S. Hareli, “The Role of Social Context for the
Interpretation of Emotional Facial Expressions,” in
Understanding Facial Expressions in Communication: Cross-
Cultural and Multidisciplinary Perspectives, 2015.
[21] M. E. Kret and B. De Gelder, “Social context influences
recognition of bodily expressions,” Exp. Brain Res., 2010, doi:
10.1007/s00221-010-2220-8.
[22] I. Leite, A. Pereira, S. Mascarenhas, C. Martinho, R. Prada, and
A. Paiva, “The influence of empathy in human-robot relations,”
Int. J. Hum. Comput. Stud., 2013, doi:
10.1016/j.ijhcs.2012.09.005.
[23] University of Oslo, “Multimodal Elderly Care Systems
(MECS).”
https://www.mn.uio.no/ifi/english/research/projects/mecs/
(accessed Sep. 28, 2020).
[24] SoftBank Robotics, “Pepper.”
https://www.softbankrobotics.com/emea/en/pepper (accessed
Oct. 14, 2020).
[25] D. O. Johnson, R. H. Cuijpers, and D. van der Pol, “Imitating
Human Emotions with Artificial Facial Expressions,” Int. J. Soc.
Robot., 2013, doi: 10.1007/s12369-013-0211-1.
[26] M. K. X. J. Pan, E. A. Croft, and G. Niemeyer, “Evaluating
Social Perception of Human-to-Robot Handovers Using the
Robot Social Attributes Scale (RoSAS),” in ACM/IEEE
International Conference on Human-Robot Interaction, 2018,
doi: 10.1145/3171221.3171257.
[27] H. Aviezer, Y. Trope, and A. Todorov, “Holistic person
processing: Faces With Bodies Tell the Whole Story,” J. Pers.
Soc. Psychol., 2012, doi: 10.1037/a0027411.