Exploring the Link between Self-assessed Mimicry and
Embodiment in HRI
Maike Paetzel, Uppsala University, Department of Information Technology, Sweden (maike.paetzel@it.uu.se)
Isabelle Hupont, Université Pierre et Marie Curie, Institut des Systèmes Intelligents et de Robotique, Paris, France (hupont@isir.upmc.fr)
Giovanna Varni, Université Pierre et Marie Curie, Institut des Systèmes Intelligents et de Robotique, Paris, France (varni@isir.upmc.fr)
Mohamed Chetouani, Université Pierre et Marie Curie, Institut des Systèmes Intelligents et de Robotique, Paris, France (chetouani@isir.upmc.fr)
Christopher Peters, KTH Royal Institute of Technology, Department of Computational Science and Technology, Sweden (chpeters@kth.se)
Ginevra Castellano, Uppsala University, Department of Information Technology, Sweden (ginevra.castellano@it.uu.se)
ABSTRACT
This work explores the relationship between a robot's embodiment and people's ability to mimic its behavior. It presents a study in which participants were asked to mimic a 3D mixed-embodied robotic head and a 2D version of the same character. Quantitative and qualitative analyses of questionnaire responses were performed. The quantitative results show no significant influence of the character's embodiment on the self-assessed ability to mimic it, while the qualitative results indicate a preference for mimicking the robotic head.
CCS Concepts
• Human-centered computing → Empirical studies in HCI; • Computing methodologies → Intelligent agents;
Keywords
Human-robot interaction; Mimicry; Embodiment.
1. INTRODUCTION
This paper investigates the relationship between a character's embodiment and people's ability to mimic it in social human-robot interactions. Social robots play an important role in assistive settings, where they provide social and physical support [7]. However, the success of such social robots is highly dependent on their likability and on the perceived pleasure of interacting with them. Mimicry has been shown to have a positive effect on the likability of an artificial agent, and research from psychology suggests that this holds for both the mimicker and the one being mimicked [6]. In addition to the implicit use of mimicry to strengthen rapport between a human and a robot, mimicry is also used explicitly, for example in autism therapy [2]. Despite these advantages, mimicking robots is difficult due to technical limitations of robots' faces. Back-projected robot platforms such as Furhat [1] use technology from virtual agents to accurately display human-like facial expressions. In this work, we analyze whether people assess the effort of mimicking the Furhat robot, given its 3D physical presence, differently from the same character displayed in 2D.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
HRI '17 Companion, March 06–09, 2017, Vienna, Austria
© 2017 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-4885-0/17/03...$15.00
DOI: http://dx.doi.org/10.1145/3029798.3038317
2. METHODOLOGY
In this paper, we empirically address the influence of the type of embodiment (2D virtual character vs. 3D mixed-embodied robot) on the self-assessed ability to mimic, using laughter as a case study.
2.1 Experimental Design & Stimuli
We designed a within-subject experiment with two independent variables, type of embodiment and type of laughter, in which participants were asked to mimic an artificial character. A male character was presented both as a 3D mixed-embodied Furhat robot [1] and as a 2D virtual agent on a screen.
Figure 1: 2D representation of the stimulus (left),
3D representation (middle) and a user performing
the mimicry task (right).
In our study, we chose as stimuli a joyful laughter, associated with positive emotions, and a schadenfreude laughter, associated with both negative and positive emotions [5]. In addition, two trial facial expressions were displayed.
Both laughter types consist of an audio component, a facial expression component and a head movement. The audio for the laughter stimuli was selected in an online pre-study. Virtual facial expression synthesis was grounded in the work by Ruch et al. [5], which describes laughter according to the Facial Action Coding System (FACS).
The self-assessed ability to mimic the robot was the dependent variable in this experiment. Participants rated how well they mimicked the character, how much effort the mimicry took and how comfortable they felt, each on a 5-point Likert scale. In addition, they could freely elaborate on their experience in a final questionnaire after the experiment.
23 students (age: M = 26.5, SD = 2.5, 21.7% female) enrolled in Computer Science and related subjects were recruited to take part in the experiment. The data of two participants were excluded from the analysis because they suggested a misunderstanding of the task.
2.2 Experimental Setup & Procedure
The experimental sessions took place in a private laboratory room at Uppsala University. The participant stood at a distance of about 100 cm from the character, which was placed on a table at approximately 170 cm from the ground. An iPad was available for filling in questionnaires.
Prior to the experiment, participants filled out an online questionnaire with demographic and personality questions, which aimed, among other things, to assess their level of gelotophobia ("the fear of being laughed at"). Participants with a high gelotophobia rating were excluded from participation.
After arriving at the experimental session, participants gave informed consent. They were then instructed to mimic by imitating facial expressions, head movements and voice within the 8 seconds given for each mimicry task. After each mimicry recording, participants rated their mimicry performance on the iPad.
Each behavior was mimicked and assessed three times before the next behavior was displayed. The order of embodiment and laughter type was determined using a Latin square prior to the experiment to minimize ordering effects.
After all four behaviors had been mimicked three times for the first embodiment type, participants were given a short break while the embodiment was switched. Then, the second mimicry session started. It included the same four behaviors as in the previous embodiment, in the same order.
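The Latin square counterbalancing used above can be sketched as follows. This is a minimal illustration using a simple cyclic construction, which is one common way to build a Latin square; the condition labels are hypothetical, since the paper does not list its exact condition codes:

```python
def latin_square(conditions):
    """Cyclic Latin square: each condition appears exactly once
    in every row (one participant's order) and every column
    (each serial position), balancing first-order position effects."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)]
            for row in range(n)]

# Hypothetical embodiment x laughter condition labels.
orders = latin_square(["2D-joyful", "2D-schadenfreude",
                       "3D-joyful", "3D-schadenfreude"])
for row in orders:
    print(row)
```

Successive participants would then be assigned successive rows, so that each condition occupies each serial position equally often across the sample.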
3. RESULTS
Quantitative Analysis
Since this short paper focuses only on the independent variable type of embodiment, a one-way ANOVA with Type III sums of squares was performed. The results show no significant influence of the embodiment on the self-assessed ability to mimic, F(1,268) = 0.015, p = 0.903, on the effort to mimic the character, F(1,268) = 0.061, p = 0.806, or on the comfort during the mimicry, F(1,268) = 0.983, p = 0.322.
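As a sketch of how such a test can be reproduced with standard tools: with a single two-level factor, the Type III sums of squares coincide with the ordinary one-way decomposition, so SciPy's f_oneway suffices. The ratings below are randomly generated placeholders, not the study's data, which are not published:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Placeholder 5-point Likert ratings per embodiment condition;
# 135 + 135 observations reproduce the paper's df of (1, 268).
ratings_2d = rng.integers(1, 6, size=135).astype(float)
ratings_3d = rng.integers(1, 6, size=135).astype(float)

# One-way ANOVA on the single factor "type of embodiment".
f_stat, p_value = stats.f_oneway(ratings_2d, ratings_3d)
df_within = len(ratings_2d) + len(ratings_3d) - 2
print(f"F(1,{df_within}) = {f_stat:.3f}, p = {p_value:.3f}")
```

Note that this treats repeated ratings from the same participant as independent observations, as the degrees of freedom reported in the paper also appear to do.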
Qualitative Analysis
In the free-text assessment at the end of the experiment, participants generally described the 2D character as more difficult to mimic. They noted that mimicry in 2D felt stranger "due to the tangible face in 3D", that "every movement of the eyes and small micro-expression were much clearer and noticeable in 3D" and that the 3D version was "easy to follow". Participants also commented that the "2D character was not as pleasant to mimic as the 3D character" and that they "liked interacting with the 3D representation better". Only one participant noted that "the behaviors were more easily discerned in the 2D version of the model".
4. DISCUSSION & CONCLUSION
The quantitative analyses showed no difference in the ability to mimic the 2D versus the 3D embodiment of the character. Interestingly, previous work in the literature reported different results. Leyzberg et al. [4], for example, found a clear influence of the embodiment type on task success, although not in the context of mimicry. Moreover, Hofree et al. [3] found differences between the ability to mimic an android robot and the ability to mimic a 2D video recording of the same android. Contrary to their work, however, we used a mixed-embodiment (rather than fully robotic) platform.
In contrast to the quantitative analyses, participants mentioned in the post-questionnaire that they found the expressions more clearly visible and felt better able to mimic in 3D, which is more in line with other related work [3, 4]. These contradictory findings are interesting because they suggest that the feeling of task success, examined qualitatively after the experiment, differs from the more systematic measures taken during the interaction. This early exploratory work is part of a larger study on conscious mimicry of social agents. In the future, we will introduce a method to objectively measure the ability to mimic and thereby determine which of the self-assessment results match the objective analysis. In addition, we aim to further investigate the influence of likability on the ability to mimic.
5. REFERENCES
[1] S. Al Moubayed, J. Beskow, G. Skantze, and B. Granström. Furhat: a back-projected human-like robot head for multiparty human-machine interaction. In Cognitive Behavioural Systems, pages 114–130. 2012.
[2] S. Boucenna, D. Cohen, A. N. Meltzoff, P. Gaussier, and M. Chetouani. Robots learn to recognize individuals from imitative encounters with people and avatars. Scientific Reports, 6, 2016.
[3] G. Hofree, P. Ruvolo, M. S. Bartlett, and P. Winkielman. Bridging the mechanical and the human mind: spontaneous mimicry of a physically present android. PLoS ONE, 9(7):e99934, 2014.
[4] D. Leyzberg, S. Spaulding, M. Toneva, and B. Scassellati. The physical presence of a robot tutor increases cognitive learning gains. In CogSci, 2012.
[5] W. F. Ruch, J. Hofmann, and T. Platt. Investigating facial features of four types of laughter in historic illustrations. The European Journal of Humour Research, 1(1):99–118, 2013.
[6] M. Stel and R. Vonk. Mimicry in social interaction: Benefits for mimickers, mimickees, and their interaction. British Journal of Psychology, 101(2):311–323, 2010.
[7] A. Tapus, M. Mataric, and B. Scassellati. The grand challenges in socially assistive robotics. IEEE Robotics & Automation Magazine, 14(1):1–7, 2007.