Emotion specific body movements
Studying humans to augment robots’ bodily expressions
Mr. Kossinna Wasala
School of Design, Queensland University of Technology
Brisbane, Australia
kossinna.wasala@hdr.qut.edu.au

Dr. Rafael Gomez
School of Design, Queensland University of Technology
Brisbane, Australia
r.gomez@qut.edu.au

Dr. Jared Donovan
School of Design, Queensland University of Technology
Brisbane, Australia
j.donovan@qut.edu.au

Dr. Marianella Chamorro-Koc
School of Design, Queensland University of Technology
Brisbane, Australia
m.chamorro@qut.edu.au
ABSTRACT
Robots are starting to share our social space and will continue to do so in the future. These interactive devices can facilitate a variety of interaction modalities which go beyond voice and screen based interactions. When developing interaction technologies, theories such as the psychobiological model and media compensation theory suggest replicating natural modes of human interaction such as face to face communication. As a result, it is vital to study how people interpret robots' physical appearance and actions. According to previous studies, designing acceptable humanoid robots is far more challenging than designing robots with fewer human qualities. This paper emphasizes the importance of human non-verbal communication in emotional interactions and analyzes how people use full body expression to communicate basic emotions, in particular happiness, surprise, and anger. Furthermore, this paper presents a list of emotion specific movement behaviors that can be applied to better design forms and movements for both humanoid and non-humanoid robots.
CCS CONCEPTS
• Human-centered computing → HCI design and evaluation methods → Laboratory experiments • Interaction devices
KEYWORDS
Non-verbal communication, emotional expressions, body movements, body language, human robot interaction, robots' emotional expressions
ACM Reference format:
Kossinna Wasala, Rafael Gomez, Jared Donovan and Marianella Chamorro-Koc. 2019. Emotion Specific Body Movements: Studying Humans to Augment Robots' Bodily Expressions. In 31st Australian Conference on Human-Computer-Interaction (OZCHI'19), December 2–5, 2019, Fremantle, WA, Australia. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3369457.3369542
1 INTRODUCTION
1.1 Robots are starting to share our social space
Due to the advancement of interactive technologies, the way people interact with devices has changed dramatically. People interact with devices on an emotional level [1], while some products mediate people's emotional experiences. Social robots can be identified as a new genre of interactive devices that will rapidly come to share our everyday living spaces. Most of these social robots are physically manifested and thus have greater potential to offer multimodal interaction capabilities which go beyond the commonly used speech and screen based interactions. According to the psychobiological model [2] and media compensation theory [3], humans evolved to interact face to face. Thus, it is vital to develop interaction methods for social robotics which are closer to natural human interaction modes, where non-verbal communication plays a crucial role.
1.2 Human non-verbal communication to HCI
and HRI
"Nonverbal communication is an elaborate secret code that is written nowhere, known by none, and understood by all" [4, p. 556]. Affective information such as feelings and attitudes is communicated mainly by non-verbal channels such as body movements, and less by words [5]. The majority of research on non-verbal interaction in HCI has focused on gestural input from the user and automatic recognition of users' movements by the computer, while only a limited number of studies focus on gestural or non-verbal output from computer systems or robots. Moreover, as robots begin to share our social space, it is important for robot designers to understand how robots' physical actions are interpreted by the people around them [6]. Thus, this paper seeks to
understand human non-verbal communication, particularly in the area of bodily expressions of emotion, in order to improve emotion specific movements for social robots.
1.3 Why movement is special
Humans have great sensitivity to biological motion. Prior studies show that people are capable of identifying expressions such as attitudes and emotions by looking only at point light displays, which show nothing but the locations of human joints in space as they move [6, 7, 8]. Further, people assign expressions to abstract shapes in motion [10]. For instance, one of the earliest animated films, "The Dot and the Line", uses a dot and a simple line to exhibit various complex expressions [11].
1.4 Studying human non-verbal emotional
expressions
In order to make human robot interaction (HRI) more natural, it is vital to study how humans use non-verbal communication in emotional expressions. This is a widely explored area in psychology. However, the majority of that research has focused on human facial expressions and comparatively little on body movements [11, 12]. Additionally, the focus of these studies was mainly to explore human nature in general, with less attention to applying the findings to HCI. Moreover, most of the existing work pays attention to automatic recognition, that is, the input side of computer systems. Therefore, the research described in this paper focuses on identifying emotion specific full body movement features which can be used to develop expressive movements for robots and computer systems as outputs.
2 RELATED WORK
2.1 Expressive body movement research
Studies on expressive body movements started with the seminal work "The Expression of the Emotions in Man and Animals" by Darwin [14]. Since then, human emotion expression has attracted much research across a variety of fields. Researchers have identified that some emotions can be recognized through facial expressions [14, 15]. Ekman, Friesen, and Ellsworth [17] strengthened the idea of basic emotions by introducing nine characteristics which help discriminate basic emotions [18].
Emotion recognition research can be divided into two areas, namely facial expressions and body movements. Research on body movements lags well behind studies on facial expressions [12, 18]. Further, in a recent review, Witkower and Tracy [19] highlight the importance of considering body movements over facial expressions in emotion recognition research. Emotions can be recognized from the body even at a distance [11, 19, 20] and even from behind the encoder [22]; recognition rates for some emotions exceed those for facial expressions [23]; emotions can be interpreted even without facial expressions [7]; and bodily expressions sometimes override facial expressions [24].
2.2 Available body movement coding systems
The literature shows several coding systems which have been developed to study body movements. These coding systems can be divided into subjective and objective measures. Laban movement analysis (LMA) is a widely used movement annotation system which falls into the subjective category. LMA uses a notation system called Labanotation to code qualitative aspects of movements [25]. The second category measures movement qualities objectively. Birdwhistell [26] developed a spatiotemporal coding system which gives an anatomically categorized list of possible body movements; this coding system was not feasible to use due to its exhaustive list of movement behaviors. The Bernese coding system is another system which annotates movement objectively [27, 28]. The facial action coding system (FACS) is one of the best recognized coding systems, developed to analyze facial muscle movements [29]. Due to the success of FACS, similar attempts were made to develop coding systems for body movements: the Body Action Coding System [30] and the Body Action and Posture (BAP) coding system [31]. However, these coding schemes have not attained the popularity of FACS due to their long lists of movement behaviors and time-consuming coding processes. AutoBAP was developed as an automatic version of the BAP coding system [32], although it leaves no room for coding semantic meaning [33].
After considering the pros and cons of the above mentioned coding systems, we developed a new spatiotemporal coding system which can analyze body movements objectively. This new coding system consists of a reduced behavior list and facilitates multilevel iterative analysis of movements. It allows both objective observation and subjective interpretation of movements. Moreover, the coding system is developed to identify suitable human body movements which can be applied to robotics and/or computer systems.
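To illustrate, a record in such a coding system can pair an objectively observed movement unit with an optional subjective interpretation. The following minimal Python sketch is our own illustrative data structure for this kind of record; the class names and fields are assumptions, not the published scheme itself.

from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class CodedBehavior:
    # One objectively observed movement unit on an annotation timeline.
    behavior: str            # e.g. "repetitive up-down hand movement"
    body_part: str           # e.g. "right hand", "head", "eyebrows"
    start_ms: int            # onset relative to the start of the performance
    end_ms: int              # offset relative to the start of the performance
    interpretation: Optional[str] = None  # subjective layer, e.g. "emphatic"

@dataclass
class Performance:
    # All coded behaviors for one participant performing one emotion.
    participant: str
    emotion: str             # "happiness", "surprise", or "anger"
    behaviors: List[CodedBehavior] = field(default_factory=list)

    def behaviors_active_at(self, t_ms: int) -> List[CodedBehavior]:
        # Return every coded unit overlapping time t_ms (a multilevel view).
        return [b for b in self.behaviors if b.start_ms <= t_ms < b.end_ms]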
2.3 Expressive form and movement designs for
social robots
Overall form and expressive movement capabilities are vital for a robot to be accepted in social interactions. The morphology of social robots has attracted many researchers, while less attention has been given to their expressive movements. Thus, the study presented in this paper focuses on observing human expressive movements in order to help develop expressive movements for social robots.
Social robots come in various forms and with various movement capabilities. Duffy [34] maps these forms onto the three corners of a triangle: human, iconic, and abstract. In a recent study, Balit, Vaufreydaz, and Reignier [35] arrange a collection of social robots on a continuum from non-humanoid forms to human or animal like forms. In a similar vein, Parlitz, Hägele, Klein, Seifert, and Dautenhahn [36] note that once a robot takes on a human like appearance, people tend to expect it to behave like a human. If the robot fails to meet the expected behavior, people tend to dislike
the interaction. Evidently, Mori's [37] 'uncanny valley' effect can be used to demonstrate the relationship between people's affinity towards a robot and the human likeness of its appearance.
From a practical point of view, these theoretical notions help
designers to develop acceptable designs for robots. In fact, most
accepted practice of designing expressive movements for these
robots is getting the involvement of professionals such as
animators and choreographers. For instance, movement designs
for Greeting machine robot and Travis robot were done by a
professional choreographer and an animator [38, 6]. To ease the
process of movement design, Hoffman & Ju [6] presents a
framework which consists of a detailed design process. Further,
Balit and the team [35] advanced Hoffman & Ju’s framework by
developing a pipeline which uses the Blender animation software.
Conversely, it is vital that robot designers need to work closely
with experts from other fields such as animation, acting, dancing,
choreography or obtain additional skills in order to design
believable expressive movements for robots. This research
attempts to fill this identified gap by identifying a list of emotion
specific human non-verbal behaviors which can be mapped in
designing movements for social robots.
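To give a concrete sense of how such a pipeline can be scripted, the minimal sketch below keyframes a simple "happy bounce" for an abstract robot body using Blender's Python API. This is our own illustrative approximation of animating expressive movement in Blender, not the actual PEAR pipeline of [35]; the primitive shape, amplitude, and timing are assumptions.

# Run inside Blender's scripting workspace.
import math
import bpy

# An abstract, non-humanoid robot body: a simple sphere.
bpy.ops.mesh.primitive_uv_sphere_add(radius=0.5, location=(0.0, 0.0, 0.5))
robot = bpy.context.active_object

FPS = 24
SECONDS = 2
BOUNCES = 3  # assumed: repetitive up-down movement reads as happiness

for frame in range(FPS * SECONDS + 1):
    t = frame / (FPS * SECONDS)                             # normalized 0..1
    z = 0.5 + 0.25 * abs(math.sin(t * math.pi * BOUNCES))   # bouncing height
    robot.location.z = z
    # Keyframe only the z channel of the location.
    robot.keyframe_insert(data_path="location", index=2, frame=frame)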
3 THE CURRENT STUDY
3.1 Study overview
The study began by video recording bodily expressions of three different emotions: happiness, surprise, and anger. The recordings were then analyzed, and a final list of emotion specific movement behaviors was developed. The list is presented at the end of the paper, along with intended future steps in which we will embed these movements in physical prototypes of non-humanoid social robots.
3.2 Participants
The current study uses data from five participants. All participants were students of Queensland University of Technology (QUT), Australia. Participants' ages ranged from 18 to 45 years. The sample consisted of three female and two male participants.
3.3 Study procedure and data collection
This study was conducted in a laboratory at QUT. Even though natural settings could provoke more genuine emotions, participants performed the activity in a laboratory setting due to ethical considerations and to ensure internal validity. No one except the participant was allowed inside the laboratory during the performance. Participants were asked to perform bodily expressions for three different emotions, happiness, surprise, and anger, at two intensities: low and high. The high-intensity performances were taken for analysis. Participants performed towards a camera located in front of them. The list of emotions was displayed behind the camera, and a keyboard that controlled the emotion list was placed near the participant (see Figure 1). Participants could use the arrow keys to navigate the list and select the relevant emotion for each performance. The time spent on each performance was decided by the participants themselves.
3.4 Data analysis
Data analysis was conducted in four steps.
In the first step, the ELAN software was used to analyze the video data (Figure 2). A full body movement behavior list of 63 behaviors in total was developed through an iterative process in which each participant's performance of each emotion was observed multiple times in order to identify possible movements that could be inserted into the list of behaviors.
In the second step, each performance was coded in ELAN using the behavior list developed in step one.
In the third step, the data from all five participants were compared in order to identify commonly visible behaviors for each emotion (a small code sketch of this step follows Figure 2 below).
In the fourth step, each participant's expressions were re-observed in order to identify any additional features and to compose an emotion specific behavior list.
Figure 1: Arrangement of the laboratory to capture
participants’ performances
Figure 2: Interface of the Elan video analysis software
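As an illustration of the third analysis step, the short Python sketch below counts which coded behaviors appear for every participant for a given emotion by reading ELAN's XML (.eaf) files. The file layout, tier name, and one-annotation-per-behavior convention are our own illustrative assumptions, not a record of the exact ELAN project used in the study.

import glob
import xml.etree.ElementTree as ET

def coded_behaviors(eaf_path, tier_id="Behaviors"):
    # Return the set of annotation values found on one tier of an .eaf file.
    root = ET.parse(eaf_path).getroot()
    values = set()
    for tier in root.iter("TIER"):
        if tier.get("TIER_ID") != tier_id:
            continue
        for value in tier.iter("ANNOTATION_VALUE"):
            if value.text:
                values.add(value.text.strip())
    return values

def common_behaviors(emotion):
    # Intersect behavior sets across all participants for one emotion.
    files = sorted(glob.glob("codings/*_" + emotion + ".eaf"))  # assumed layout
    sets = [coded_behaviors(f) for f in files]
    return set.intersection(*sets) if sets else set()

for emotion in ("happiness", "surprise", "anger"):
    print(emotion, sorted(common_behaviors(emotion)))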
4 FINDINGS AND DISCUSSION
Emotion specific full body movement behaviors were identified as the main finding of the study (see Table 1). The study differs from non-verbal communication research in psychology in that the search for movements was conducted mainly with the goal of applying them to non-humanoid robots. Thus, robot designers will be able to directly apply these findings when designing both form and movement for robots.
The identified movement behaviors comply with previous literature to a great extent (see [19] for a recent comprehensive review of research on emotion specific body movements). In addition to previous findings, the results show some interesting new patterns of body movement behavior.
For instance, happiness shows repetitive up and down hand movements. Further, people fold and unfold their fingers when expressing happiness. Moreover, people keep their lips open and look at the subject for almost the full expression time.
Surprise expands the body through horizontal and vertical hand movements, and these hand movements are symmetric. The eyebrows remain raised for the full expression time, while eye blinking is slightly more frequent than average.
People show anger by turning the head down while maintaining eye direction towards the front to stare at the subject. One or both hands come considerably towards the front, with more than two repetitions. (See Table 1 for other common movement behaviors.)
In addition to the direct finding of the emotion specific movement list, the current study helps to improve the process of designing both form factor and movements for social robots. The list can be used for designing both humanoid and non-humanoid robots.
Further, the emotion specific movement list enables robot designers and engineers to develop expressive movements for robots even without collaborating with animators and choreographers.
Table 1: Emotion specific body movement behaviors for Happiness, Surprise, Anger
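As a sketch of how a designer might operationalize such a list, the Python code below turns a few of the behaviors above into parametric motion primitives, i.e., joint trajectories sampled over time. The joint names, amplitudes, and frequencies are illustrative assumptions, not values measured in this study.

import math

# Hypothetical mapping from emotion to a movement primitive, loosely based
# on the behaviors reported above; all parameter values are illustrative.
EMOTION_PRIMITIVES = {
    "happiness": {"joint": "arm_lift", "pattern": "oscillate",
                  "amplitude": 0.4, "hz": 1.5},   # repetitive up-down hands
    "surprise":  {"joint": "arm_spread", "pattern": "expand",
                  "amplitude": 0.6, "hz": 0.5},   # symmetric body expansion
    "anger":     {"joint": "head_pitch", "pattern": "hold",
                  "amplitude": -0.3, "hz": 0.0},  # head down, held stare
}

def trajectory(emotion, duration_s=2.0, rate_hz=50):
    # Sample a joint-position trajectory for one emotion primitive.
    p = EMOTION_PRIMITIVES[emotion]
    samples = []
    for i in range(int(duration_s * rate_hz)):
        t = i / rate_hz
        if p["pattern"] == "oscillate":      # back-and-forth movement
            pos = p["amplitude"] * abs(math.sin(2 * math.pi * p["hz"] * t))
        elif p["pattern"] == "expand":       # ramp outward, then hold
            pos = p["amplitude"] * min(1.0, 2 * t / duration_s)
        else:                                # hold a fixed posture
            pos = p["amplitude"]
        samples.append((p["joint"], t, pos))
    return samples

# Example: first few samples of a "happiness" movement.
print(trajectory("happiness")[:3])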
REFERENCES
[1] R. E. Gomez, V. Popovic, and A. L. Blackler, "Emotional experience with portable interactive devices," in IASDR 2009 Proceedings, 2009, pp. 1–10.
[2] N. Kock, "The psychobiological model: Towards a new theory of computer-mediated communication based on Darwinian evolution," Organ. Sci., vol. 15, no. 3, pp. 327–348, 2004.
[3] D. A. Hantula, N. Kock, J. P. D'Arcy, and D. M. DeRosa, "Media compensation theory: A Darwinian perspective on adaptation to electronic communication and collaboration," in Evolutionary Psychology in the Business Sciences, Springer, 2011, pp. 339–363.
[4] E. Sapir, "The unconscious patterning of behavior in society," Sel. Writings Edward Sapir, vol. 540, p. 559, 1927.
[5] A. Mehrabian, Silent Messages: Implicit Communication of Emotions and Attitudes. Wadsworth Publishing Company, 1981.
[6] G. Hoffman and W. Ju, "Designing robots with movement in mind," J. Human-Robot Interact., vol. 3, no. 1, pp. 89–122, 2014.
[7] A. P. Atkinson, W. H. Dittrich, A. J. Gemmell, and A. W. Young, "Emotion perception from dynamic and static body expressions in point-light and full-light displays," Perception, vol. 33, no. 6, pp. 717–746, 2004.
[8] T. J. Clarke, M. F. Bradshaw, D. T. Field, S. E. Hampson, and D. Rose, "The perception of emotion from body movement in point-light displays of interpersonal dialogue," Perception, vol. 34, no. 10, pp. 1171–1180, 2005.
[9] J. M. Montepare, S. B. Goldstein, and A. Clausen, "The identification of emotions from gait information," J. Nonverbal Behav., vol. 11, no. 1, pp. 33–42, 1987.
[10] F. Heider and M. Simmel, "An experimental study of apparent behavior," Am. J. Psychol., vol. 57, no. 2, pp. 243–259, 1944.
[11] C. Jones, The Dot and the Line: A Romance in Lower Mathematics. Chronicle Books, 1965.
[12] B. De Gelder, "Why bodies? Twelve reasons for including bodily expressions in affective neuroscience," Philos. Trans. R. Soc. B Biol. Sci., vol. 364, no. 1535, pp. 3475–3484, 2009.
[13] F. Noroozi, C. A. Corneanu, D. Kamińska, T. Sapiński, S. Escalera, and G. Anbarjafari, "Survey on emotional body gesture recognition," arXiv preprint arXiv:1801.07481, 2018.
[14] C. Darwin, The Expression of the Emotions in Man and Animals. London: John Murray, 1872.
[15] C. E. Osgood, "Dimensionality of the semantic space for communication via facial expressions," Scand. J. Psychol., vol. 7, no. 1, pp. 1–30, 1966.
[16] R. S. Woodworth, Experimental Psychology. New York: Henry Holt, 1938.
[17] P. Ekman, W. V. Friesen, and P. Ellsworth, Emotion in the Human Face: Guidelines for Research and an Integration of Findings. New York: Pergamon Press, 1972.
[18] P. Ekman et al., "Are there basic emotions?," Psychol. Rev., vol. 99, no. 3, pp. 550–553, 1992.
[19] Z. Witkower and J. L. Tracy, "Bodily communication of emotion: Evidence for extrafacial behavioral expressions and available coding systems," Emot. Rev., p. 1754073917749880, 2018.
[20] B. De Gelder, Emotions and the Body. Oxford University Press, 2016.
[21] L. Martinez, V. B. Falvello, H. Aviezer, and A. Todorov, "Contributions of facial expressions and body language to the rapid perception of dynamic emotions," Cogn. Emot., vol. 30, no. 5, pp. 939–952, 2016.
[22] M. Coulson, "Attributing emotion to static body postures: Recognition accuracy, confusions, and viewpoint dependence," J. Nonverbal Behav., vol. 28, no. 2, pp. 117–139, 2004.
[23] B. De Gelder and J. Van den Stock, "The bodily expressive action stimulus test (BEAST). Construction and validation of a stimulus basis for measuring perception of whole body expression of emotions," Front. Psychol., vol. 2, p. 181, 2011.
[24] H. Aviezer et al., "Angry, disgusted, or afraid? Studies on the malleability of emotion perception," Psychol. Sci., vol. 19, no. 7, pp. 724–732, 2008.
[25] R. Laban, The Mastery of Movement on the Stage. Macdonald & Evans, 1950.
[26] R. L. Birdwhistell, Kinesics and Context: Essays on Body Motion Communication. Philadelphia: University of Pennsylvania Press, 1970.
[27] S. Frey and J. Pool, A New Approach to the Analysis of Visible Behavior. 1976.
[28] S. Frey, H.-P. Hirsbrunner, A. Florin, W. Daw, and R. Crawford, "A unified approach to the investigation of nonverbal and verbal behavior in communication research," Curr. Issues Eur. Soc. Psychol., vol. 1, pp. 143–198, 1983.
[29] P. Ekman and W. V. Friesen, Facial Action Coding System. Palo Alto: Consulting Psychologists Press, 1978.
[30] E. M. J. Huis in 't Veld, G. J. M. Van Boxtel, and B. de Gelder, "The Body Action Coding System I: Muscle activations during the perception and expression of emotion," Soc. Neurosci., vol. 9, no. 3, pp. 249–264, 2014.
[31] N. Dael, M. Mortillaro, and K. R. Scherer, "The body action and posture coding system (BAP): Development and reliability," J. Nonverbal Behav., vol. 36, no. 2, pp. 97–121, 2012.
[32] E. Velloso, A. Bulling, and H. Gellersen, "AutoBAP: Automatic coding of body action and posture units from wearable sensors," in 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, 2013, pp. 135–140.
[33] J. A. Hall and M. L. Knapp, Nonverbal Communication. Berlin/Boston: De Gruyter, 2013.
[34] B. R. Duffy, "Anthropomorphism and the social robot," Rob. Auton. Syst., vol. 42, no. 3–4, pp. 177–190, 2003.
[35] E. Balit, D. Vaufreydaz, and P. Reignier, "PEAR: Prototyping Expressive Animated Robots - A framework for social robot prototyping," in HUCAPP 2018 - 2nd International Conference on Human Computer Interaction Theory and Applications, 2018, pp. 1–18.
[36] C. Parlitz, M. Hägele, P. Klein, J. Seifert, and K. Dautenhahn, "Care-O-bot 3 - Rationale for human-robot interaction design," in Proceedings of the 39th International Symposium on Robotics (ISR), Seoul, Korea, 2008, pp. 275–280.
[37] M. Mori, "The uncanny valley," Energy, vol. 7, no. 4, pp. 33–35, 1970.
[38] L. Anderson-Bashan et al., "The Greeting Machine: An abstract robotic object for opening encounters," in 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2018, pp. 595–602.