Scientific Reports | (2024) 14:19317 | https://doi.org/10.1038/s41598-024-70205-z
www.nature.com/scientificreports
Wireless high-resolution surface facial electromyography mask for discrimination of standardized facial expressions in healthy adults

Paul F. Funk 1,2,3, Bara Levit 2,3, Chen Bar-Haim 2,3, Dvir Ben-Dov 2,3, Gerd Fabian Volk 1,4,5, Roland Grassme 6,7, Christoph Anders 6, Orlando Guntinas-Lichius 1,4,5* & Yael Hanein 2,3,8,9
Wired high-resolution surface electromyography (sEMG) using gelled electrodes is a standard
method for psycho-physiological, neurological and medical research. Despite its widespread use,
electrode placement is laborious and time-consuming, and the overall experimental setting is prone
to mechanical artifacts and thus offers little flexibility. Wireless and easy-to-apply technologies
would facilitate more accessible examination in realistic settings. To address this, a novel smart
skin technology consisting of a wireless dry 16-electrode array was tested. The soft electrode arrays were
attached to the right hemiface of 37 healthy adult participants (60% female; 20 to 57 years). The
participants performed three runs of a standard set of different facial expression exercises. Linear
mixed-effects models with the sEMG amplitudes as outcome measure were used to evaluate
differences between the facial movement tasks and runs (separately for every task). The smart
electrodes showed specific activation patterns for each of the exercises. 82% of the exercises could be
differentiated from each other with very high precision when using the average muscle action of all
electrodes. The effects were consistent across the three runs. Thus, wireless high-resolution
sEMG analysis with smart skin technology successfully discriminates standard facial expressions in
research and clinical settings.
Facial electromyography (EMG) is widely used in different research and clinical domains. Although alternative
imaging approaches were explored in recent decades, in particular computer vision-based tools, facial EMG
remains a gold standard, especially in clinical research, as it offers quantitative and muscle-specific information.
In psychophysiological and emotional research, facial EMG is commonly used to study facial movements and
emotions by measuring the activation of facial muscles associated with different facial functional and emotional
expressions1,2. It also offers insight into the neuromuscular activities underlying speech and facial movements,
aiding in the diagnosis and monitoring of neuromuscular diseases. In medical research, facial EMG is used to
diagnose and assess neuromuscular disorders affecting facial muscles. In particular, in rehabilitation medicine,
facial EMG serves as a valuable tool for assessing the effectiveness of treatments aimed at restoring facial muscle
function following injury or in conditions such as facial palsy.
A major emphasis in neuromuscular assessments is the evaluation of functional movements such as eye
closure or lip pursing, as they determine patients' ability to regain proper functions such as blinking and speech.
Functional facial movements like eye closure, pursing the lips or emotional expressions recruit the action of
several facial muscles3. Therefore, recordings in psychological settings are usually performed on the surface of
facial muscles via multi-channel surface EMG (sEMG)4. Recently, we have shown that high-resolution facial
sEMG can discriminate with high accuracy and reliability between standard facial functional movements and
1Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum
1, 07747 Jena, Germany. 2School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel. 3Tel Aviv University
Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel. 4Facial-Nerve-Center Jena, Jena
University Hospital, Jena, Germany. 5Center for Rare Diseases, Jena University Hospital, Jena, Germany. 6Division
Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery,
Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany. 7Department of Prevention,
Biomechanics, German Social Accident Insurance Institution for the Foodstuffs and Catering Industry, Erfurt,
Germany. 8Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel. 9X-Trodes, Herzliya, Israel. *email:
orlando.guntinas@med.uni-jena.de
also between the six basic emotions5–7. As this approach relies on manual placement of multiple electrodes, it is
very complex and time-consuming. Our particular setup required simultaneous application of 29 electrodes, in
specific locations, per side of the face for each examination4,8.
Recently, a smart skin technology consisting of a screen-printed carbon electrode array (16 electrodes, 4 mm in
diameter) was developed and tested. The arrays allow wireless recording of facial muscle activation and mapping
of facial expression of emotion in natural settings9–11. The 16-channel facial EMG system is a dry, self-adhesive
electrode array patch, which can be attached to the face easily within a few seconds.
In this investigation, we studied the suitability of these electrodes for clinical research of facial muscles.
Specifically, we tested whether the electrode arrays allow a reliable discrimination of important functional movements,
and discussed whether the results achieve an accuracy comparable to conventional facial EMG. Emotional expressions were not
the central focus of the study. By showing that the wireless system discriminates movements as well as
the elaborate positioning of many single electrodes on the face, we aim to demonstrate
the practical superiority of the electrode arrays. Such wireless adhesive electrode arrays could be used for a wide range of
applications in medical and psychological research.
Materials and methods
Healthy participants
The study included 37 healthy adult volunteers (22 female; mean age: 28 years, age range: 20 to 57 years). Participants
with a neurological disease or a history of botulinum toxin injection in the face, facial surgery or trauma
were excluded. The experiments were conducted in accordance with relevant guidelines and regulations under
approval from the Institutional Ethics Committee Review Board at Tel Aviv University (no. 0005248-2) in accordance
with the Helsinki guidelines and regulations for human research. All participants gave written informed
consent to participate in the study. Individuals depicted in the figures of this manuscript have provided written
informed consent for their images to be published in an online open-access publication.
Standardization of repeated facial exercises
Participants were seated in an upright posture in front of a computer screen which provided detailed instructions
regarding the examination. The examination lasted about 25 min and started with a self-explanatory video
tutorial that allows for standardized and reliable instructions for performing facial movements given by a human
instructor12,13. Following the video instructions, the participants performed 11 facial expressions: face at rest
(no movement), wrinkling of the forehead, closing the eyes normally (gentle eye closure), closing the eyes forcefully
(forceful eye closure), nose wrinkling, smiling with closed mouth, smiling with open mouth, lip puckering
(pursing lips), blowing out the cheeks (cheek blowing), snarling, and depressing the lower lip. The 11 exercises
are shown in Supplementary Fig. S1. All facial movements were performed three times (runs 1–3: R1, R2, R3).
The schematic setup is shown in Fig. 1.
Wireless facial surface electromyography registration
Surface electromyography (sEMG) measurement was conducted with a monopolar montage of screen-printed
carbon dry disposable electrode masks (Fig. 1; XTELC0004005RM, X-trodes Inc., Herzliya, Israel). Data were
recorded with a wireless data acquisition unit (DAU, X-trodes Inc., Herzliya, Israel) which attaches to the electrode
mask. The DAU saved the sEMG data to a micro-SD card and simultaneously transmitted a continuous Bluetooth signal
to the controlling Android tablet application. The DAU supported up to 16 unipolar channels
(2 µV root-mean-square (rms) noise, 0.5–700 Hz) with a sampling rate of 4000 S/s, 16-bit resolution, an input
range of ±12.5 mV and an input impedance of 10⁷ Ω. A 620 mAh battery supported DAU operation for up to 16 h.
The disposable electrode masks consisted of 16 electrodes (channels). Channels 0–14 were attached to the
right side of the face and channel 15 to the left forehead (cf. Fig. 1). An internal ground electrode was
placed on the right mastoid process. The scheme of the X-trodes electrodes is printed onto a fixed-size polyurethane
adhesive layer. The mask was attached to the skin first by peeling off the adhesive film from the center of the mask
and then applying the mask while ensuring that the arrow indicating the eye points towards the eye. Next, the outer
protective films were peeled away one by one to fully attach the mask. The mask's design offered considerable
flexibility, allowing for adjustable positioning of the combined electrodes.
To mitigate potential technical and physiological artifacts before proceeding with any further analysis, the
signals were centered and bandpass filtered within the 10 to 500 Hz range. Additionally, to account for circuit
interferences, a 50 Hz notch filter was applied. sEMG amplitudes were quantified as mean rms values (single
interval duration for calculation: 125 ms) during the steady-state contraction phases of every facial expression and
sEMG channel. The rms data for all study participants can be found in the Supplementary Data File S1. We also
applied maximum normalization to the data, aiming to equalize for both individual variability and movement-
dependent amplitude changes, thereby standardizing amplitude comparisons across different facial movements.
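The preprocessing chain described above (centering, 10–500 Hz band-pass, 50 Hz notch, 125 ms RMS windows, maximum normalization) can be sketched as follows with SciPy. This is a minimal illustrative implementation, not the authors' code; the function names are our own, and only the filter parameters and the 4000 S/s sampling rate are taken from the text.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, iirnotch, filtfilt

FS = 4000  # DAU sampling rate, samples per second (from the text)

def preprocess(raw, fs=FS):
    """Center, band-pass (10-500 Hz) and notch-filter (50 Hz) one sEMG channel."""
    x = raw - np.mean(raw)                               # centering (remove DC offset)
    sos = butter(4, [10, 500], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, x)                              # zero-phase band-pass
    b, a = iirnotch(50, Q=30, fs=fs)                     # 50 Hz mains-interference notch
    return filtfilt(b, a, x)

def rms_envelope(x, fs=FS, win_s=0.125):
    """RMS over consecutive 125 ms windows, as in the amplitude quantification."""
    n = int(fs * win_s)
    n_win = len(x) // n
    windows = x[: n_win * n].reshape(n_win, n)
    return np.sqrt(np.mean(windows ** 2, axis=1))

def max_normalize(rms_values):
    """Maximum normalization, equalizing individual and movement-dependent scale."""
    return np.asarray(rms_values) / np.max(rms_values)
```

The mean of the `rms_envelope` output over the steady-state contraction phase would then give the per-channel amplitude used in the analyses.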
Topographical heatmaps for visualization of the sEMG activity
To demonstrate the spatial pattern of EMG activity across the face, topographical heatmaps following the methodology
of earlier work were generated5. In summary, this involved a modified 4-nearest-neighbor interpolation
of the EMG rms values with the inverse square of the distance as weight14. Due to the non-spherical nature of
the face's conducting surface, the weight of the most distant (fourth) electrode was progressively reduced to
zero in the vicinity of the change between two fourth neighbors to avoid spatial discontinuity15. There are no
significant asymmetries between the right and left sides of the face as healthy participants performed the 11
designated movements5. Data collection was uniformly conducted on the right side using the sEMG electrode
mask. Therefore, to achieve a holistic depiction of the signal's regional distribution throughout the entire face,
the data were mirrored (not including electrode channel 15) onto the left side as a preliminary measure prior
to interpolation.
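The core of the interpolation — 4 nearest electrodes weighted by inverse squared distance — can be sketched as below. This is a simplified illustration under our own naming; it omits the progressive down-weighting of the fourth neighbor described in the text.

```python
import numpy as np

def idw_4nn(points, values, grid_xy):
    """Interpolate per-channel rms values onto grid points using the 4 nearest
    electrodes, weighted by the inverse square of the distance (IDW)."""
    # Pairwise distances: grid points (rows) vs electrode positions (columns).
    d = np.linalg.norm(grid_xy[:, None, :] - points[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :4]            # indices of the 4 nearest electrodes
    dn = np.take_along_axis(d, nn, axis=1)
    w = 1.0 / np.maximum(dn, 1e-9) ** 2          # inverse-square-distance weights
    vn = values[nn]                              # rms values at those electrodes
    return np.sum(w * vn, axis=1) / np.sum(w, axis=1)
```

At an electrode position the weight of that electrode dominates, so the interpolated map reproduces the measured value there; between electrodes the four neighbors are blended.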
Statistics
A linear mixed effects model (LMM) was applied for the analysis of the activation pattern of all electrodes
(channels). With such a method, fixed and random effects are considered, which suits the analysis of highly
complex data16,17. Initially, the sEMG amplitudes for all facial expressions were calculated as mean values with
an estimated 95% confidence interval (CI). All electrodes were included in the calculation to evaluate the main
effects of the parameters "run" (movement repetition R1, R2, R3) and "movement" (the 11 exercises) together
with their interactions. "Run" and "movement" were modeled as fixed effects with a random intercept per subject.
Initially, all main effects and interactions were calculated, but for the final analysis, only the significant main
effects together with significant interactions remained in the calculation. Adjustment for multiple comparisons
for differences between the tested facial movements was performed by the least significant difference. The significance
level was set to 5%. For the heatmaps, mean values and standard deviations were calculated. Amplitude
heatmaps are displayed in two ways: (i) minimum-to-maximum scaling separately per facial movement for
topographical pattern recognition (absolute heatmaps); (ii) minimum-to-maximum scaling across all facial
expressions for intensity comparison between facial movements (relative heatmaps). To allow a comparison to
other data sets, heatmaps displaying the topographical distribution of the dimensionless coefficient of variation
(CV) were calculated additionally.

Figure 1. The experimental setup and the positions of the electrodes on the face: channels 0–14 on the right
side of the face and channel 15 on the contralateral left forehead. In addition, a reference electrode is placed on
the right mastoid. Participants were sitting in front of a computer screen, followed the instructions of a video
tutorial, and imitated the shown facial expressions (Illustration: courtesy of Sonja Burger).
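The model structure described above — sEMG amplitude as outcome, "movement" and "run" as fixed effects, and a random intercept per subject — can be sketched with statsmodels' `mixedlm` on simulated data. All numbers below are synthetic placeholders, not the study's data, and the variable names are our own.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
movements = ["R", "WF", "CEN", "CEF", "WN", "CMS", "OMS", "LP", "BC", "S", "DLL"]

# Simulate one amplitude value per subject, run and movement.
rows = []
for subject in range(10):
    base = rng.normal(0, 0.5)                   # subject-specific offset (random intercept)
    for run in ["R1", "R2", "R3"]:
        for i, mov in enumerate(movements):
            rows.append({"subject": subject, "run": run, "movement": mov,
                         "rms": 2.0 + i + base + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# "run" and "movement" as fixed effects, random intercept per subject.
model = smf.mixedlm("rms ~ C(movement) + C(run)", df, groups=df["subject"])
fit = model.fit()
```

Pairwise movement contrasts (as in Table 1) would then be obtained from the fitted fixed effects, with a least-significant-difference adjustment for multiple comparisons.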
Ethics statement
Written informed consent was obtained from all participants. The ethics committee of Tel Aviv University
approved the study.
Results
Activation of the different electrodes of the wireless electrode masks during the facial movement exercises
Figure 2 summarizes the activation of each electrode during the following facial exercises: rest (R), wrinkling
of the forehead (WF), closing the eyes normally (CEN), closing the eyes forcefully (CEF), wrinkling of the nose
(WN), closed mouth smiling (CMS), open mouth smiling (OMS), lip puckering (LP), blowing out the cheeks
(BC), snarling (S), and depressing the lower lips (DLL). Details on the average activation of each channel (electrode)
during each run and for each of the 11 exercises of all subjects are shown in the Supplementary Figs. S2–S12.
The electrical activity per channel decreased (but not significantly) from run to run for most channels and most
Average sEMG amplitudes (µV) per channel (ch0–ch14) and exercise; the identical values underlie both panels of Fig. 2, which differ only in color scaling:

      ch0   ch1   ch2   ch3   ch4   ch5   ch6   ch7   ch8   ch9   ch10  ch11  ch12  ch13  ch14
R     1.7   1.7   1.5   1.6   1.4   1.5   1.5   1.5   1.4   1.5   1.5   1.6   1.7   1.5   1.6
WF    2.1   1.9   1.8   2.0   1.9   1.8   1.9   1.9   2.1   2.0   1.9   2.1   5.7   5.9   8.6
CEN   1.6   1.5   1.4   1.5   1.4   1.5   1.5   1.5   1.5   1.8   1.6   1.6   1.7   1.6   1.6
CEF   2.8   2.2   2.4   2.3   2.7   2.7   4.3   4.0   9.0   10.1  8.0   10.5  8.5   8.9   5.0
WN    7.3   2.6   2.4   3.0   3.2   2.4   3.2   6.5   19.9  4.3   3.1   3.1   4.2   5.9   4.8
CMS   6.5   4.8   9.5   6.6   6.1   7.4   5.7   4.2   3.6   4.4   4.7   4.0   2.8   2.4   2.5
OMS   17.8  5.3   14.8  8.4   9.0   11.4  8.9   6.0   4.9   6.8   6.9   5.7   3.3   3.1   3.1
LP    33.3  4.3   2.7   5.7   3.8   2.6   2.9   5.2   5.9   3.2   2.6   2.3   2.5   2.6   2.4
BC    17.4  4.9   4.2   5.1   4.3   4.0   3.6   4.4   5.1   3.5   3.4   3.7   3.7   3.0   3.0
S     35.2  7.8   9.5   7.0   8.0   9.0   8.1   9.9   21.6  8.9   7.8   8.0   7.3   7.9   6.6
DLL   14.1  8.8   4.4   5.4   4.0   4.0   3.6   4.6   4.7   3.6   3.4   3.4   3.4   3.0   3.0
Figure 2. Activation of the 15 ipsilateral electrodes (ch = channels 0 to 14) during the 11 exercises. Upper panel:
Min–Max normalization per row (i.e., per facial expression). Lower panel: Min–Max normalization across all
facial expressions. Average sEMG amplitudes in µV are color-coded from low activation (black) to high activation
(yellow). R at rest, WF wrinkling of the forehead, CEN closing the eyes normally, CEF closing the eyes forcefully,
WN wrinkling of the nose, CMS closed mouth smiling, OMS open mouth smiling, LP lip puckering, BC
blowing out the cheeks, S snarling, DLL depressing lower lips.
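The two display modes of Fig. 2 — per-row versus global Min–Max scaling — amount to choosing the axis of normalization. A minimal sketch with toy amplitudes (the numbers below are illustrative, not the measured data):

```python
import numpy as np

def minmax(a, axis=None):
    """Scale values to [0, 1]. axis=1 normalizes per facial movement (row);
    axis=None normalizes jointly across all movements."""
    lo = a.min(axis=axis, keepdims=True)
    hi = a.max(axis=axis, keepdims=True)
    return (a - lo) / (hi - lo)

amps = np.array([[1.7, 1.7, 1.5],    # toy µV amplitudes: rest
                 [2.1, 5.7, 8.6]])   # wrinkling the forehead

per_movement = minmax(amps, axis=1)  # "upper panel": pattern within each exercise
across_all   = minmax(amps)          # "lower panel": intensity comparison between exercises
```

Per-row scaling emphasizes the spatial pattern of each exercise even when its overall amplitude is low (e.g., rest), whereas global scaling preserves the amplitude differences between exercises.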
exercises. The activity did not vary much between the channels at rest. Differences in activity between
the channels were not compared statistically for each task separately; we describe them here only
descriptively. Channels 12–14 showed pronounced activity during frowning. All channels were activated almost
uniformly during gentle eye closure. This changed during forced eye closure, where channels 8–13 were mainly
activated. Channel 8 was mainly activated during nose wrinkling, whereas the other channels were nearly silent.
Smiling with a closed mouth showed maximal activation in channel 2, with the signal decreasing continuously
as the distance from channel 2 increased (with the minimal value obtained in channel 14). Channels 0 and 2
were dominant during smiling with an open mouth, and the decrease in signal with increasing distance was not
as clear as for the exercise with a closed mouth. Lip puckering and blowing out the cheeks led to an exclusive activation
of channel 0. The muscle activation decreased over the runs for blowing out the cheeks. Snarling differed in
activation from blowing out the cheeks mainly by the simultaneous activation of channels 0 and 8.
Depressing the lower lips was dominated by the activation of channels 0 and 1.
Discrimination of the facial muscle activity during the different facial exercises by wireless sEMG recordings using the entire activation pattern of all channels
The specific facial muscle activation patterns during the 11 exercises are visualized with topographic heatmaps in
Fig. 3. The normalized heatmaps show localized activation with clear resemblance to the anticipated activation
region. The results of the discrimination analysis are shown in Table 1. The average muscle activation of all three
runs of facial exercises was evaluated, as well as each run separately. Nearly all facial exercises could be differentiated
from each other: 82% of all exercises could be differentiated by the wireless sEMG recordings when using
the average data of all three runs. 73%, 81%, and 84% of the exercises could be differentiated from each other in
run 1, run 2, and run 3, respectively (all had p < 0.05 after correction for multiple testing).
Focusing on averaged values of all runs, wrinkling of the forehead (frowning), normal (gentle) eye closure, and
snarling differed from all other exercises in the EMG activation pattern (all p < 0.05). In general, depressing
the lower lips, forced eye closure and wrinkling the nose were the most difficult to discriminate from other
exercises, in decreasing order. Depressing the lower lips was impossible to differentiate from nose wrinkling,
smiling with closed mouth, and blowing out the cheeks (p > 0.05). A forced eye closure could not be differentiated
from lip puckering (p > 0.05). Nose wrinkling was similar to closed mouth smiling, lip puckering, blowing out the
cheeks, and depressing the lower lips (all p > 0.05). Finally, blowing out the cheeks could not be differentiated from
depressing the lower lips. With increasing repetition number, the discrimination from other exercises improved for
depressing the lower lips.
Differences in the summatory facial muscle activation between the three runs of facial movement exercises
The overall facial muscle activation, i.e., the sum of the recordings of the entire electrode array, during each of the
11 exercises is shown in Fig. 4. The highest summatory facial muscle activation was seen during snarling, followed
by smiling with open mouth. The lowest activation was observed during resting and closing the eyes normally.
The electrode-independent average activation decreased over the repetitions in most exercises.
Discussion
The standard for high-resolution EMG to discriminate between facial exercises or emotional expressions in psychosocial
and medical research is the use of standard gel electrode pairs on the skin18. Typically, 40 electrodes are
placed on the face, oriented along the topographical position of the underlying facial muscles4. The electrodes
have to be placed in the direction of the muscle fibers and with constant inter-electrode distance. Reaching high
accuracy in electrode placement to produce reliable results is time-consuming and requires a lot of experience.
High-density sEMG for special settings even uses up to 90 electrodes19. Such high-density recordings are so complex
that they are not suitable for routine applications. An alternative to electrode-by-electrode placement is a fixed
geometric arrangement designed to match typical facial features, in a manner similar to electroencephalography
(EEG) caps8. Placement of such electrode arrays requires neither detailed knowledge of facial muscle anatomy
nor a time-consuming process. Several recent studies have shown that the smart skin electrode array technology
used in the present study is much easier and faster to apply9–11. What is more, when repeated sessions are
required for the same individual, the arrays allow placement of a large number of electrodes at almost identical
positions. Hence, the aim of the present study was to test whether the recordings of such a self-adhesive EMG
foil mask are precise enough to reliably distinguish between defined facial movements.
Although we did not perform a head-to-head comparison to standard sEMG settings5,6, we assert that the
values shown here for the discrimination of different facial exercises are sufficient for common psychosocial
or medical research settings. It has already been shown that the sEMG arrays can be used to detect emotions9.
Whether the adhesive sEMG masks are also reliable for the discrimination of imitated basic emotions
remains to be confirmed in future studies7. The present study has not yet made use of the second advantage of
the sEMG masks: the wireless design does not restrict participants to sit upright in a chair. Demeco et al. used
four wireless sEMG electrode pairs for facial muscle recordings in patients with facial palsy20. This allowed better
quantification of a synchronously applied video movement analysis. Unfortunately, the source of the electrodes
and their usability as a medical device are not described in the aforementioned study. It remains unclear if the
wireless design in combination with the light weight of the films will allow more natural or at least unhindered facial
movements. Therefore, a head-to-head comparison with other settings and an analysis of the movement would
be needed. Recently, we have shown that it is possible to digitally remove electrodes from images and videos of
participants performing facial expressions using machine learning algorithms9,21. We assume that a similar ability
would also be feasible (even easier) with the EMG masks, as they conceal less of the face (due to the lack of wires
Figure 3. Topographical normalized heatmaps of facial muscle activation patterns during specific facial
exercises. For better visualization, the measured activation on the right side was mirrored to the left side.
The upper left map shows the localization of the electrodes (channels, also with their left-sided mirrors). The
variability of the muscle activity in µV is plotted below each heatmap. (A) Absolute mean values: minimum
to maximum scaling separately per facial movement for topographical pattern recognition. (B) Relative mean
values: minimum to maximum scaling across all facial expressions for intensity comparison between facial
movements. (C) Standard deviation of A; (D) coefficient of variation (CV) of A. R at rest, WF wrinkling of
the forehead, CEN closing the eyes normally, CEF closing the eyes forcefully, WN wrinkling of the nose, CMS
closed mouth smiling, OMS open mouth smiling, LP lip puckering, BC blowing out the cheeks, S snarling, DLL
depressing lower lips. The same color coding as in Fig. 2 was used.
and the minimal dimensions of the electrodes). This would allow meaningful automated image analysis of the facial
surface synchronously to sEMG recording with the sEMG arrays to combine the strengths of both methods for
an optimal discrimination of functional facial and emotional expressions22–24.
The present study has limitations. The video self-tutorial to demonstrate the facial movement tasks
seems to be a very reliable, observer-independent instruction technique, but it uses an imitation design to perform
facial expressions13. A wireless application will allow more natural settings in future trials. Furthermore, the
comparison between the three sessions showed that there is some variance in the performance of the exercises
by the participants, as no feedback was implemented18,25,26. Hence, the problem in differentiating between different
facial expressions may reflect not only the presented electrode array system but also the subjects' performance.
Table 1. Discrimination between the facial muscle activity during the different facial exercises by the patterns
of wireless facial sEMG recordings. R at rest, WF wrinkling of the forehead, CEN closing the eyes normally, CEF
closing the eyes forcefully, WN wrinkling of the nose, CMS closed mouth smiling, OMS open mouth smiling,
LP lip puckering, BC blowing out the cheeks, S snarling, DLL depressing lower lips, n.s. not significant. The
comparisons for the average of the three runs are shown as well as the results for each of the three runs separately.
The inserted gray boxes contain a summary of the facial exercises that can be distinguished by the EMG
activity patterns. Highly significant differences are marked in yellow (p < 0.001), and significant values in orange
(0.001 < p < 0.05).
Average   WF      CEN     CEF     WN      CMS     OMS     LP      BC      S       DLL
R         <0.001  0.977   <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
WF                <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEN                       <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEF                               0.039   0.022   <0.001  0.722   0.005   <0.001  <0.001
WN                                        0.835   <0.001  0.090   0.473   <0.001  0.504
CMS                                               <0.001  0.056   0.607   <0.001  0.643
OMS                                                       <0.001  <0.001  <0.001  <0.001
LP                                                                0.016   <0.001  0.018
BC                                                                        <0.001  0.961
S                                                                                 <0.001
Summary (average): p < 0.001: 39 (71%); p < 0.05: 6 (11%); n.s.: 10 (18%).

Run 1     WF      CEN     CEF     WN      CMS     OMS     LP      BC      S       DLL
R         <0.001  0.968   <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
WF                <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEN                       <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEF                               0.318   0.071   <0.001  0.854   0.323   <0.001  0.459
WN                                        0.428   <0.001  0.241   0.992   <0.001  0.798
CMS                                               <0.001  0.048   0.422   <0.001  0.294
OMS                                                       <0.001  <0.001  <0.001  <0.001
LP                                                                0.244   <0.001  0.358
BC                                                                        <0.001  0.806
S                                                                                 <0.001
Summary (run 1): p < 0.001: 39 (71%); p < 0.05: 1 (2%); n.s.: 15 (27%).

Run 2     WF      CEN     CEF     WN      CMS     OMS     LP      BC      S       DLL
R         <0.001  0.972   <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
WF                <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEN                       <0.001  0.001   <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEF                               <0.001  <0.001  <0.001  0.070   <0.001  <0.001  <0.001
WN                                        0.407   <0.001  0.147   0.751   <0.001  0.250
CMS                                               <0.001  0.022   0.611   <0.001  0.740
OMS                                                       <0.001  <0.001  <0.001  <0.001
LP                                                                0.077   <0.001  0.009
BC                                                                        <0.001  0.404
S                                                                                 <0.001
Summary (run 2): p < 0.001: 42 (76%); p < 0.05: 3 (5%); n.s.: 10 (18%).

Run 3     WF      CEN     CEF     WN      CMS     OMS     LP      BC      S       DLL
R         <0.001  0.922   <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
WF                <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEN                       <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEF                               0.074   0.407   <0.001  0.772   <0.001  <0.001  0.004
WN                                        0.344   <0.001  0.039   0.070   <0.001  0.283
CMS                                               <0.001  0.266   0.005   <0.001  0.041
OMS                                                       <0.001  <0.001  <0.001  <0.001
LP                                                                <0.001  <0.001  0.002
BC                                                                        <0.001  0.460
S                                                                                 <0.001
Summary (run 3): p < 0.001: 41 (75%); p < 0.05: 5 (9%); n.s.: 9 (16%).
Depressing the lower lips, an exercise which might be performed infrequently in daily life, showed the highest
variability. The question of the optimum design, electrode positions, and the positioning of the DAU box in the
EMG mask remains unanswered. Additional electrodes on the depressor anguli oris muscle and on the mentalis
muscle would possibly help to achieve better discrimination between, for instance, the DLL and LP tasks. The
depressor anguli oris muscle is important for DLL and the mentalis muscle for LP27,28. Furthermore, perhaps
the difference between CEN and CEF would become clearer if the pars palpebralis of the orbicularis oculi were also
included in the recordings. Facial midline muscles could also be better recorded. For instance, the procerus
muscle is not yet included29. The procerus muscle is important for frowning and for expressing emotional
distress29. An advantage of the EMG foils is that they can, in principle, be printed in any shape for future trials.
It will be important to print a mirror-inverted version of the mask to allow bilateral recordings.
As the discrimination from other exercises improved from session to session, one might conclude that it
is advisable to take advantage of such a learning curve when using the present set of facial expression imitations
in future trials. It can be assumed that trained users show a lower variability than untrained users. It is to be
expected that an objective discrimination with the sEMG mask will then be even more precise15. It has already
been shown that the sEMG mask used here was more reliable than visual analysis when discriminating facial expressions
between different sessions15. It remains to be shown in future studies whether this advantage holds true for repeated
sessions at a greater distance, for instance, at intervals of months.
Conclusions
The wireless high-resolution sEMG mask consisting of an adhesive electrode array film allowed a reliable discrimination
of most standardized facial expressions in healthy adults. We recommend using the wireless adhesive
multichannel sEMG system in psychosocial and medical research to also take advantage of the benefits of wireless
use and the ease of attachment in settings with repetition over several days' sessions.
Data availability
The original contributions presented in the study are included in the article/Supplementary material; further
inquiries can be directed to the corresponding author.
Received: 15 March 2024; Accepted: 13 August 2024
References
1. Hubert, W. & de Jong-Meyer, R. Psychophysiological response patterns to positive and negative lm stimuli. Biol. Psychol. 31,
73–93. https:// doi. org/ 10. 1016/ 0301- 0511(90) 90079-c (1991).
Figure4. Electrode-independent facial muscle activation during specic facial exercises performed three times
(three runs R1, R2, R3). e x-axis shows the dierent exercises. e y-axis shows the average values (± 95%
condence interval) of the root-mean-square (rms) of the sEMG amplitudes across all electrodes in µV. R at rest,
WF wrinkling of the forehead, CEN closing the eyes normally, CEF closing the eyes forcefully, WN wrinkling of
the nose, CMS closed mouth smiling, OMS open mouth smiling, LP lip puckering, BC blowing-out the cheeks,
S snarling, DLL depressing lower lips. Asterisks (R1 vs. R2), dots (R1 vs. R3) and triangles (R2 vs. R3) indicate
signicant dierences between the respective runs.
Content courtesy of Springer Nature, terms of use apply. Rights reserved
Scientific Reports | (2024) 14:19317 | https://doi.org/10.1038/s41598-024-70205-z
2. Höfling, T. T. A., Gerdes, A. B. M., Föhl, U. & Alpers, G. W. Read my face: Automatic facial coding versus psychophysiological indicators of emotional valence and arousal. Front. Psychol. 11, 1388. https://doi.org/10.3389/fpsyg.2020.01388 (2020).
3. Schumann, N. P., Bongers, K., Scholle, H. C. & Guntinas-Lichius, O. Atlas of voluntary facial muscle activation: Visualization of surface electromyographic activities of facial muscles during mimic exercises. PLoS ONE 16, e0254932. https://doi.org/10.1371/journal.pone.0254932 (2021).
4. Fridlund, A. J. & Cacioppo, J. T. Guidelines for human electromyographic research. Psychophysiology 23, 567–589. https://doi.org/10.1111/j.1469-8986.1986.tb00676.x (1986).
5. Mueller, N. et al. High-resolution surface electromyographic activities of facial muscles during mimic movements in healthy adults: A prospective observational study. Front. Hum. Neurosci. https://doi.org/10.3389/fnhum.2022.1029415 (2022).
6. Trentzsch, V. et al. Test–retest reliability of high-resolution surface electromyographic activities of facial muscles during facial expressions in healthy adults: A prospective observational study. Front. Hum. Neurosci. 17, 1126336. https://doi.org/10.3389/fnhum.2023.1126336 (2023).
7. Guntinas-Lichius, O. et al. High-resolution surface electromyographic activities of facial muscles during the six basic emotional expressions in healthy adults: A prospective observational study. Sci. Rep. 13, 19214. https://doi.org/10.1038/s41598-023-45779-9 (2023).
8. Kuramoto, E., Yoshinaga, S., Nakao, H., Nemoto, S. & Ishida, Y. Characteristics of facial muscle activity during voluntary facial expressions: Imaging analysis of facial expressions based on myogenic potential data. Neuropsychopharmacol. Rep. 39, 183–193. https://doi.org/10.1002/npr2.12059 (2019).
9. Inzelberg, L., Rand, D., Steinberg, S., David-Pur, M. & Hanein, Y. A wearable high-resolution facial electromyography for long term recordings in freely behaving humans. Sci. Rep. 8, 2058. https://doi.org/10.1038/s41598-018-20567-y (2018).
10. Inzelberg, L., David-Pur, M., Gur, E. & Hanein, Y. Multi-channel electromyography-based mapping of spontaneous smiles. J. Neural Eng. 17, 026025. https://doi.org/10.1088/1741-2552/ab7c18 (2020).
11. Gat, L., Gerston, A., Shikun, L., Inzelberg, L. & Hanein, Y. Similarities and disparities between visual analysis and high-resolution electromyography of facial expressions. PLoS ONE 17, e0262286. https://doi.org/10.1371/journal.pone.0262286 (2022).
12. Schaede, R. A. et al. Video instruction for synchronous video recording of mimic movement of patients with facial palsy. Laryngo-Rhino-Otol. https://doi.org/10.1055/s-0043-101699 (2017).
13. Volk, G. F. et al. Reliability of grading of facial palsy using a video tutorial with synchronous video recording. The Laryngoscope 129, 2274–2279. https://doi.org/10.1002/lary.27739 (2019).
14. Jäger, J., Klein, A., Buhmann, M. & Skrandies, W. Reconstruction of electroencephalographic data using radial basis functions. Clin. Neurophysiol. 127, 1978–1983. https://doi.org/10.1016/j.clinph.2016.01.003 (2016).
15. Soong, A. C., Lind, J. C., Shaw, G. R. & Koles, Z. J. Systematic comparisons of interpolation techniques in topographic brain mapping. Electroencephalogr. Clin. Neurophysiol. 87, 185–195. https://doi.org/10.1016/0013-4694(93)90018-q (1993).
16. Verbeke, G. & Molenberghs, G. Linear Mixed Models for Longitudinal Data. Springer Series in Statistics (Springer, 2000).
17. Habets, L. E. et al. Enhanced low-threshold motor unit capacity during endurance tasks in patients with spinal muscular atrophy using pyridostigmine. Clin. Neurophysiol. 154, 100–106 (2023).
18. Hess, U. et al. Reliability of surface facial electromyography. Psychophysiology 54, 12–23. https://doi.org/10.1111/psyp.12676 (2017).
19. Cui, H. et al. Comparison of facial muscle activation patterns between healthy and Bell's palsy subjects using high-density surface electromyography. Front. Hum. Neurosci. 14, 618985. https://doi.org/10.3389/fnhum.2020.618985 (2020).
20. Demeco, A. et al. Quantitative analysis of movements in facial nerve palsy with surface electromyography and kinematic analysis. J. Electromyogr. Kinesiol. 56, 102485. https://doi.org/10.1016/j.jelekin.2020.102485 (2021).
21. Büchner, T. et al. Let's Get the FACS Straight: Reconstructing Obstructed Facial Features 727–736. https://doi.org/10.5220/0011619900003417 (2023).
22. Buchner, T., Sickert, S., Volk, G. F., Guntinas-Lichius, O. & Denzler, J. Automatic objective severity grading of peripheral facial palsy using 3D radial curves extracted from point clouds. Stud. Health Technol. Inform. 294, 179–183. https://doi.org/10.3233/SHTI220433 (2022).
23. Küntzler, T., Höfling, T. T. A. & Alpers, G. W. Automatic facial expression recognition in standardized and non-standardized emotional expressions. Front. Psychol. 12, 627561 (2021).
24. Kim, H., Küster, D., Girard, J. M. & Krumhuber, E. G. Human and machine recognition of dynamic and static facial expressions: Prototypicality, ambiguity, and complexity. Front. Psychol. 14, 1221081 (2023).
25. O'Dwyer, N. J., Quinn, P. T., Guitar, B. E., Andrews, G. & Neilson, P. D. Procedures for verification of electrode placement in EMG studies of orofacial and mandibular muscles. J. Speech Hear. Res. 24, 273–288. https://doi.org/10.1044/jshr.2402.273 (1981).
26. Jung, J. K. & Im, Y. G. Can the subject reliably reproduce maximum voluntary contraction of temporalis and masseter muscles in surface EMG? Cranio. https://doi.org/10.1080/08869634.2022.2142234 (2022).
27. Vejbrink Kildal, V. et al. Anatomical features in lower-lip depressor muscles for optimization of myectomies in marginal mandibular nerve palsy. J. Craniofac. Surg. 32(6), 2230–2232. https://doi.org/10.1097/SCS.0000000000007622 (2021).
28. Hur, M. S. et al. Morphology of the mentalis muscle and its relationship with the orbicularis oris and incisivus labii inferioris muscles. J. Craniofac. Surg. 24, 602–604. https://doi.org/10.1097/SCS.0b013e318267bcc5 (2013).
29. Hur, M. S. Anatomical relationships of the procerus with the nasal ala and the nasal muscles: Transverse part of the nasalis and levator labii superioris alaeque nasi. Surg. Radiol. Anat. 39, 865–869 (2017).
Author contributions
OGL, GFV, CA, YH: conceptualization. OGL and CA: first draft preparation. PF, BL, CBH, DBD, GFV: data acquisition. PF, RG and CA: data analysis. OGL and YH: supervision. All authors contributed to the article and approved the final version.
Funding
Open Access funding enabled and organized by Projekt DEAL. Orlando Guntinas-Lichius acknowledges support by the Deutsche Forschungsgemeinschaft (DFG), Grant No. GU-463/12-1. Yael Hanein acknowledges support by the Israel Science Foundation (ISF), Grant No. 538/22, and the European Research Council (ERC), Grant Outer-Ret, No. 101053186.
Competing interests
Yael Hanein declares a financial interest in X-trodes Ltd., which holds the licensing rights of the EMG skin technology cited in this paper. This does not alter her adherence to scientific and publication policies on sharing data and materials. All other authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Additional information
Supplementary Information The online version contains supplementary material available at https://doi.org/10.1038/s41598-024-70205-z.
Correspondence and requests for materials should be addressed to O.G.-L.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
© The Author(s) 2024