© 2025 IEEE. This is the author’s version of the article that has been published in the proceedings of
IEEE Virtual Reality and 3D User Interfaces Abstracts and Workshops. The final version is available at:
https://doi.org/10.1109/VRW66409.2025.00394
When Faces Are Masked: Exploring Emotional Expression Through Body
Postures in Virtual Reality
Inas Redjem*
Univ Rennes, LP3C,
F-35000 Rennes, France
Julien Cagnoncle
Univ Rennes, Inserm, LTSI -
UMR S1099,
F-35000 Rennes, France
Arnaud Huaulmé
Univ Rennes, Inserm, LTSI -
UMR S1099,
F-35000 Rennes, France
Alexandre Audinot
Univ Rennes, CNRS, Inria,
IRISA - UMR 6074,
F-35000 Rennes, France
Florian Nouviale
Univ Rennes, CNRS, Inria,
IRISA - UMR 6074,
F-35000 Rennes, France
Mathieu Risy
Univ Rennes, CNRS, Inria,
IRISA - UMR 6074,
F-35000 Rennes, France
Valérie Gouranton
Univ Rennes, CNRS, Inria,
IRISA - UMR 6074,
F-35000 Rennes, France
Estelle Michinov†
Univ Rennes, LP3C,
F-35000 Rennes, France
Pierre Jannin
Univ Rennes, Inserm, LTSI -
UMR S1099,
F-35000 Rennes, France
*e-mail: inas.redjem@univ-rennes2.fr
†e-mail: estelle.michinov@univ-rennes2.fr
ABSTRACT
As simulation advances in healthcare training, understanding how
body-only signals convey emotions in virtual environments is cru-
cial, particularly with masked virtual agents. This study involved
41 nursing students evaluating 16 faceless fear and surprise pos-
tures to assess their realism and the emotion conveyed. Although
these postures are well recognized in 2D human representations,
only three of the 16 were correctly identified by more than 50% of
participants when displayed on a 3D virtual agent. These results
highlight the impact of virtual agent de-
sign on emotional recognition and the need for rigorous testing and
refinement to improve emotional expressiveness and realism.
Index Terms: Human-centered computing—Human computer in-
teraction (HCI)—Virtual Reality—Empirical studies in HCI
1 INTRODUCTION
Simulation is a key tool for team training in high-stakes environ-
ments [2]. Effective teamwork relies on emotional communication,
fostering collaboration, especially during crises [7]. Virtual agents
in simulation often use facial expressions as the primary channel for
conveying emotions [9]. Studies show that virtual
characters’ emotional expressions are better recognized when com-
bining facial and body cues [5]. However, in operating room teams,
masks hinder facial expression, making the body the main expres-
sive channel [4]. This study aims to simulate faceless emotional
postures using 3D masked virtual agents, assessing their recogni-
tion and perception by nursing students.
2 METHOD
2.1 Participants
We recruited 41 nursing students (M = 23.18 years, SD = 6.46),
93% of whom were female. Among participants, 63% had prior ex-
perience with virtual reality, while 73% indicated they either never
engage in video gaming or do so only a few times per year.
2.2 Procedure
After providing informed consent, participants completed a pre-
questionnaire measuring their anxiety levels, followed by the virtual
reality (VR) phase, where they observed a virtual agent adopting
various emotional postures. For each posture, they identified the
expressed emotion and rated its realism. Finally, after viewing all
stimuli, participants completed post-questionnaires to assess social
presence and their impressions of the virtual agent.
2.3 Material
We developed a virtual operating room in Unity, displayed via an
HTC Vive Pro, featuring a virtual agent representing an orthope-
dic surgeon in full surgical attire, including a helmet, hood, gown,
and mask. The emotional postures were derived from the BESST
dataset [11], which includes 565 frontal-view images of real bodies
with blurred facial expressions depicting six emotions (happiness,
sadness, fear, anger, disgust, and surprise). We focused on fear and
surprise body postures and selected the 16 that were most accurately
recognized and rated highest for realism in the original validation data.
These postures were recreated on a 3D model using a motion cap-
ture system (Xsens motion capture suit by Movella) and integrated
into the operating room in Unity.
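To make the selection criterion concrete, the following minimal Python
sketch shows how such a ranking could be computed from the BESST
validation ratings; the DataFrame and its column names (emotion,
accuracy, realism) are illustrative assumptions, not the authors' actual
pipeline.

    import pandas as pd

    def select_postures(besst: pd.DataFrame, n: int = 16) -> pd.DataFrame:
        """Rank fear/surprise images by recognition accuracy, then realism.

        Column names are hypothetical placeholders for the BESST ratings.
        """
        candidates = besst[besst["emotion"].isin(["fear", "surprise"])]
        ranked = candidates.sort_values(["accuracy", "realism"], ascending=False)
        return ranked.head(n)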
2.4 Measures
First, participants’ pre-VR anxiety levels were measured using the
6-item State-Trait Anxiety Inventory [8] on a 4-point Likert scale (1
= “Not at all”, 4 = “Very much”). Then, participants evaluated the
virtual agent’s emotional expressiveness by identifying emotions
conveyed by each posture, choosing from six options: fear, sadness,
surprise, happiness, anger, and disgust (Figure 1). They also rated
each posture’s realism on a 7-point Likert scale ranging from “Not
at all realistic” to “Completely realistic”.
After viewing the 16 postures, social presence was assessed us-
ing a 5-item scale developed by Bailenson and colleagues [1], with
responses on a 5-point Likert scale from 1 (“Not at all”) to 5 (“To-
tally”). An example item is: “The idea that the person isn’t a real
person has often crossed my mind.” Perception of the virtual agent
was assessed using three items from Ho and MacDorman's
uncanny valley scale [6], evaluating attractiveness (repul-
sive, agreeable), eeriness (ordinary, weird), and humanness (nat-
ural, real) on a 7-point Likert scale.
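As a rough illustration of how such Likert measures are typically
scored, the sketch below computes a mean scale score with reverse-
coded items; the reverse-scored indices shown are assumptions made
for illustration, not taken from the paper or the original scale manuals.

    import numpy as np

    def scale_score(items, reverse, scale_max):
        """Mean Likert score (1..scale_max) with selected items reverse-coded."""
        scored = np.asarray(items, dtype=float).copy()
        scored[reverse] = (scale_max + 1) - scored[reverse]
        return float(scored.mean())

    # Hypothetical 6-item anxiety response on a 4-point scale, assuming
    # the positively worded items sit at indices 0, 3, and 5:
    anxiety = scale_score([2, 3, 1, 2, 4, 1], reverse=[0, 3, 5], scale_max=4)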
3 RESULTS
The results showed that only three of the 16 postures were cor-
rectly identified by more than 50% of participants, all of which were
fear postures. Posture A was recognized by 76% of participants,
posture B by 58% (Figure 2), and the third posture by only 51%.
Figure 1: Emotion recognition task interface
Figure 2: Fear Body Posture A (left panel) and B (right panel)
In contrast, the highest recognition rate for surprise postures was
49%, suggesting that surprise is more difficult to convey through
body-only cues. Interestingly, among the two most recognized fear
postures, a difference in perceived realism emerged: posture A was
rated as more realistic (M = 5.17, SD = 1.73) than posture B
(M = 3.97, SD = 1.68). The perception of the virtual agent was
ambivalent, being rated as synthetic (M = 3.22, SD = 1.86) and
ordinary (M = 3.68, SD = 1.69), with a neutral judgment on
repulsiveness (M = 3.98, SD = 1.30). Social presence was mod-
erate (M = 2.57, SD = 1.30), suggesting the interaction did not
create a fully immersive or credible social experience. Social pres-
ence was positively correlated with perceiving the virtual agent as
more agreeable (r = .31, p < .05), ordinary (r = .52, p < .001),
and real (r = .61, p < .001). Anxiety showed no correlation with
emotion recognition, but correct identification was negatively
correlated with perceiving the virtual agent as more real than syn-
thetic (r = −.33, p < .05).
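For readers who want to reproduce this kind of analysis, a minimal
sketch of the two reported statistics (per-posture recognition rates and
Pearson correlations) follows; the arrays are synthetic placeholders,
not the study data.

    import numpy as np
    from scipy.stats import pearsonr

    def recognition_rate(responses, target):
        """Proportion of participants who chose the intended emotion label."""
        return float(np.mean([r == target for r in responses]))

    # Synthetic per-participant scores standing in for the real measures:
    rng = np.random.default_rng(0)
    social_presence = rng.uniform(1, 5, size=41)  # 5-point scale, N = 41
    perceived_real = rng.uniform(1, 7, size=41)   # 7-point humanness item
    r, p = pearsonr(social_presence, perceived_real)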
4 CONCLUSION
This study highlights the challenges of recognizing emotions
through body-only cues on 3D virtual agents. Although the 16 fear
and surprise postures were selected from a validated dataset [11] for
their high recognizability, recognition rates were lower on the virtual agent, sug-
gesting that additional factors may impact emotional interpretation
in virtual environments. One possible explanation is that anima-
tion complexity in virtual agents may hinder emotion recognition
by introducing subtle movements that complicate interpretation. In
addition, the uncanny valley effect could contribute [6], as partici-
pants perceived the virtual agent as synthetic, which may have lim-
ited their ability to connect with it and accurately interpret its
emotional expressions. This interpretation is supported by the agent's
lack of the credibility needed to establish a strong sense of social presence.
Although fear and surprise postures had similar recognition rates
in 2D images, surprise was harder to recognize in 3D, possibly due
to the complexity of surprise. Unlike fear, which is a basic emotion
with universal and recognizable cues, surprise may be a context-
dependent emotion or mental state [10] and may require more sub-
tle visual information to be conveyed [3]. Prior research [5] also
highlighted that surprise relies heavily on facial expressions, sup-
porting the idea that body-only cues are insufficient.
This study has limitations: while frontal postures were selected
from the original dataset, participants in the virtual reality environ-
ment viewed the virtual agent from a three-quarter profile perspec-
tive. Also, no multimodal cues (voice or text) were used. Emotional
recognition is inherently multimodal, and the absence of these sig-
nals may have reduced recognition accuracy. Future studies should
integrate other cues to enhance emotional conveyance. These find-
ings emphasize the need for a VR-specific body-language lexicon,
as body-only emotional communication is inherently more chal-
lenging in virtual environments. Improving this aspect is
crucial for applications such as collaborative virtual environments
and medical training scenarios where facial cues are masked. Fu-
ture research should focus on improving the design of virtual agents
to mitigate the uncanny valley effect and incorporate voice-based
interactions or dynamic gestures to strengthen the social presence
of virtual agents.
ACKNOWLEDGMENTS
This work was supported by state aid managed by the French Na-
tional Research Agency under the France 2030 program, bearing
the reference ANR-21-DMES-0001.
REFERENCES
[1] J. N. Bailenson, J. Blascovich, A. C. Beall, and J. M. Loomis. Interpersonal distance in immersive virtual environments. Personality & Social Psychology Bulletin, 29(7):819–833, July 2003. doi: 10.1177/0146167203029007002
[2] A. J. Carpenter. Simulation is a valuable tool for team training. The Journal of Thoracic and Cardiovascular Surgery, 155(6):2525, June 2018. doi: 10.1016/j.jtcvs.2018.01.046
[3] J. L. Cheal and M. D. Rutherford. Context-dependent categorical perception of surprise. Perception, 42(3):294–301, Mar. 2013.
[4] B. de Gelder, A. W. de Borst, and R. Watson. The perception of emotion in body expressions. Wiley Interdisciplinary Reviews: Cognitive Science, 6(2):149–158, Jan. 2015. doi: 10.1002/wcs.1335
[5] C. Ennis, L. Hoyet, A. Egges, and R. McDonnell. Emotion capture: Emotionally expressive characters for games. In Proceedings of Motion on Games, MIG ’13, pp. 53–60. Association for Computing Machinery, New York, NY, USA, Nov. 2013. doi: 10.1145/2522628.2522633
[6] C.-C. Ho and K. F. MacDorman. Measuring the uncanny valley effect. International Journal of Social Robotics, 9(1):129–139, Jan. 2017. doi: 10.1007/s12369-016-0380-9
[7] S. Kaplan, K. LaPort, and M. J. Waller. The role of positive affectivity in team effectiveness during crises. Journal of Organizational Behavior, 34(4):473–491, 2013. doi: 10.1002/job.1817
[8] T. M. Marteau and H. Bekker. The development of a six-item short-form of the state scale of the Spielberger State-Trait Anxiety Inventory (STAI). The British Journal of Clinical Psychology, 31(3):301–306, Sept. 1992. doi: 10.1111/j.2044-8260.1992.tb00997.x
[9] M. Ochs, R. Niewiadomski, and C. Pelachaud. Facial expressions of emotions for virtual characters. In The Oxford Handbook of Affective Computing, pp. 261–272. Oxford University Press, New York, NY, USA, 2015. doi: 10.1093/oxfordhb/9780199942237.001.0001
[10] A. Ortony. Are all “basic emotions” emotions? A problem for the (basic) emotions construct. Perspectives on Psychological Science, 17(1):41–61, Jan. 2022. doi: 10.1177/1745691620985415
[11] P. Thoma, D. Soria Bauser, and B. Suchan. BESST (Bochum Emotional Stimulus Set) – a pilot validation study of a stimulus set containing emotional bodies and faces from frontal and averted views. Psychiatry Research, 209(1):98–109, Aug. 2013. doi: 10.1016/j.psychres.2012.11.012