Scientific Reports (2024) 14:19317 | https://doi.org/10.1038/s41598-024-70205-z | www.nature.com/scientificreports
Wireless high-resolution surface facial electromyography mask for discrimination of standardized facial expressions in healthy adults
Paul F. Funk1,2,3, Bara Levit2,3, Chen Bar-Haim2,3, Dvir Ben-Dov2,3, Gerd Fabian Volk1,4,5, Roland Grassme6,7, Christoph Anders6, Orlando Guntinas-Lichius1,4,5* & Yael Hanein2,3,8,9

1Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747 Jena, Germany. 2School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel. 3Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel. 4Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany. 5Center for Rare Diseases, Jena University Hospital, Jena, Germany. 6Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany. 7Department of Prevention, Biomechanics, German Social Accident Insurance Institution for the Foodstuffs and Catering Industry, Erfurt, Germany. 8Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel. 9X-Trodes, Herzliya, Israel. *email: orlando.guntinas@med.uni-jena.de
Wired high-resolution surface electromyography (sEMG) using gelled electrodes is a standard method for psycho-physiological, neurological and medical research. Despite its widespread use, electrode placement is elaborate and time-consuming, and the overall experimental setting is prone to mechanical artifacts and thus offers little flexibility. Wireless and easy-to-apply technologies would facilitate more accessible examination in realistic settings. To address this, a novel smart skin technology consisting of a wireless array of 16 dry electrodes was tested. The soft electrode arrays were attached to the right hemiface of 37 healthy adult participants (60% female; 20 to 57 years). The participants performed three runs of a standard set of different facial expression exercises. Linear mixed-effects models using the sEMG amplitudes as outcome measure were used to evaluate differences between the facial movement tasks and runs (separately for every task). The smart electrodes showed specific activation patterns for each of the exercises. 82% of the exercises could be differentiated from each other with very high precision when using the average muscle action of all electrodes. The effects were consistent across the three runs. Thus, it appears that wireless high-resolution sEMG analysis with smart skin technology successfully discriminates standard facial expressions in research and clinical settings.
Facial electromyography (EMG) is widely used in different research and clinical domains. Although alternative imaging approaches were explored in recent decades, in particular computer vision-based tools, facial EMG remains a gold standard, especially in clinical research, as it offers quantitative and muscle-specific information. In psychophysiological and emotional research, facial EMG is commonly used to study facial movements and emotions by measuring the activation of facial muscles associated with different functional and emotional facial expressions1,2. It also offers insight into the neuromuscular activities underlying speech and facial movements, aiding in the diagnosis and monitoring of neuromuscular diseases. In medical research, facial EMG is used to diagnose and assess neuromuscular disorders affecting facial muscles. In particular, in rehabilitation medicine, facial EMG serves as a valuable tool for assessing the effectiveness of treatments aimed at restoring facial muscle function following injury or in conditions such as facial palsy.
A major emphasis in neuromuscular assessments is the evaluation of functional movements such as eye closure or lip pursing, as they determine patients' ability to regain proper functions such as blinking and speech. Functional facial movements like eye closure or pursing the lips, as well as emotional expressions, recruit the action of several facial muscles3. Therefore, recordings in psychological settings are usually performed on the surface of facial muscles via multi-channel surface EMG (sEMG)4. Recently, we have shown that high-resolution facial sEMG can discriminate with high accuracy and reliability between standard facial functional movements and also between the six basic emotions5–7.
As this approach relies on manual placement of multiple electrodes, it is very complex and time-consuming. Our particular setup required simultaneous application of 29 electrodes, at specific locations, per side of the face for each examination4,8.
Recently, a smart skin technology consisting of a screen-printed carbon electrode array (16 electrodes, 4 mm in diameter) was developed and tested. The arrays allow wireless recording of facial muscle activation and mapping of facial expressions of emotion in natural settings9–11. The 16-channel facial EMG system is a self-adhesive dry electrode array patch, which allows easy and fast attachment to the face within a few seconds.
In this investigation, we studied the suitability of these electrodes for clinical research of facial muscles. Specifically, we tested whether the electrode arrays allow a reliable discrimination of important functional movements, and discussed whether the results achieve an accuracy comparable to conventional facial EMG. Emotional expressions were not the central focus of the study. By demonstrating discrimination with the wireless system similar to the results obtained with an elaborate positioning of many single electrodes on the face, we aim to demonstrate the practical superiority of the electrode arrays. Such wireless adhesive electrode arrays could be used for a wide range of applications in medical and psychological research.
Materials and methods
Healthy participants
The study included 37 healthy adult volunteers (22 female; mean age: 28 years, age range: 20 to 57 years). Participants with a neurological disease or a history of botulinum toxin injection in the face, facial surgery or trauma were excluded. The experiments were conducted in accordance with relevant guidelines and regulations under approval from the Institutional Ethics Committee Review Board at Tel Aviv University (no. 0005248-2) and in accordance with the Helsinki guidelines and regulations for human research. All participants gave written informed consent to participate in the study. Individuals depicted in the figures of this manuscript have provided written informed consent for their images to be published in an online open-access publication.
Standardization of repeated facial exercises
Participants were seated in an upright posture in front of a computer screen which provided detailed instructions regarding the examination. The examination lasted about 25 min and started with a self-explanatory video tutorial that allows for standardized and reliable instructions for performing facial movements given by a human instructor12,13. Following the video instructions, the participants performed 11 facial expressions: face at rest (no movement), wrinkling of the forehead, closing the eyes normally (gentle eye closure), closing the eyes forcefully (forceful eye closure), nose wrinkling, smiling with closed mouth, smiling with open mouth, lip puckering (pursing lips), blowing-out the cheeks (cheek blowing), snarling, and depressing the lower lip. The 11 exercises are shown in Supplementary Fig. S1. All facial movements were performed three times (runs 1–3: R1, R2, R3). The schematic setup is shown in Fig. 1.
Wireless facial surface electromyography registration
Surface electromyography (sEMG) measurement was conducted with a monopolar montage of screen-printed carbon dry disposable electrode masks (Fig. 1; XTELC0004005RM, X-trodes Inc., Herzliya, Israel). Data were recorded with a wireless data acquisition unit (DAU, X-trodes Inc., Herzliya, Israel) which attaches to the electrode mask. The DAU saved the sEMG data to a micro-SD card and simultaneously transmitted a continuous Bluetooth signal to the controlling Android tablet application. The DAU supported up to 16 unipolar channels (2 μV root-mean-square (rms) noise, 0.5–700 Hz) with a sampling rate of 4000 S/s, 16-bit resolution, an input range of ± 12.5 mV and an input impedance of 10⁷ Ω. A 620 mAh battery supported DAU operation for up to 16 h.
The disposable electrode masks consisted of 16 electrodes (channels). Channels 0–14 were attached to the right side of the face and channel 15 to the left forehead (cf. Fig. 1). An internal ground electrode was placed on the right mastoid process. The X-trodes electrode layout is printed onto a fixed-size polyurethane adhesive layer. The mask was attached to the skin first by peeling off the adhesive film from the center of the mask and then applying the mask while ensuring that the arrow indicating the eye points towards the eye. Next, the outer protective films were peeled away one by one to fully attach the mask. The mask's design offered considerable flexibility, allowing for adjustable positioning of the combined electrodes.
To mitigate potential technical and physiological artifacts before proceeding with any further analysis, the signals were centered and bandpass filtered within the 10 to 500 Hz range. Additionally, to account for circuit interferences, a 50 Hz notch filter was applied. sEMG amplitudes were quantified as mean rms values (single interval duration for calculation: 125 ms) during the steady-state contraction phases of every facial expression and sEMG channel. The rms data for all study participants can be found in the Supplementary Data File S1. We also applied maximum normalization to the data, aiming to equalize for both individual variability and movement-dependent amplitude changes, thereby standardizing amplitude comparisons across different facial movements.
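The processing chain just described (centering, 10–500 Hz band-pass, 50 Hz notch, 125 ms RMS intervals, maximum normalization) can be illustrated with a short Python sketch. The sampling-rate constant, the function names and the normalization axis are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 4000  # DAU sampling rate in samples per second

def preprocess(raw, fs=FS):
    """Center, band-pass (10-500 Hz) and notch-filter (50 Hz) one sEMG channel."""
    x = raw - np.mean(raw)                        # remove DC offset (centering)
    b, a = butter(4, [10, 500], btype="bandpass", fs=fs)
    x = filtfilt(b, a, x)                         # zero-phase band-pass filter
    b_n, a_n = iirnotch(w0=50, Q=30, fs=fs)       # 50 Hz notch against line interference
    return filtfilt(b_n, a_n, x)

def rms_per_interval(x, fs=FS, interval_s=0.125):
    """RMS amplitude over consecutive 125 ms intervals."""
    n = int(interval_s * fs)
    trimmed = x[: len(x) // n * n].reshape(-1, n)
    return np.sqrt(np.mean(trimmed ** 2, axis=1))

def channel_amplitude(raw):
    """Mean of the interval-wise RMS values for one steady-state contraction phase."""
    return float(np.mean(rms_per_interval(preprocess(raw))))

def max_normalize(amplitudes):
    """Maximum normalization of an amplitude array (assumed: divide by the overall maximum)."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return amplitudes / amplitudes.max()
```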
Topographical heatmaps for visualization of the sEMG activity
To demonstrate the spatial pattern of EMG activity across the face, topographical heatmaps following the methodology of earlier work were generated5. In summary, this involved a modified 4-nearest-neighbor interpolation of the EMG rms values with the inverse square of the distance as weight14. Due to the non-spherical nature of the face's conducting surface, the weight of the most distant (fourth) electrode was progressively reduced to zero in the vicinity of the change between two fourth neighbors to avoid spatial discontinuity15. There are no significant asymmetries between the right and left sides of the face when healthy participants perform the 11 designated movements5. Data collection was uniformly conducted on the right side using the sEMG electrode mask. Therefore, to achieve a holistic depiction of the signal's regional distribution throughout the entire face, the data were mirrored (not including electrode channel 15) onto the left side as a preliminary measure prior to interpolation.
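A compact Python sketch of this interpolation scheme is given below. Only the inverse-squared-distance weighting of the four nearest electrodes and the progressive down-weighting of the fourth neighbor follow the description above; the grid handling, the taper width and the mirroring helper are assumptions added to make the example self-contained.

```python
import numpy as np

def interpolate_map(electrode_xy, values, grid_xy, taper=0.15):
    """Modified 4-nearest-neighbor interpolation with inverse-squared-distance weights.

    electrode_xy : (n_electrodes, 2) electrode positions (mirrored channels included)
    values       : (n_electrodes,) RMS amplitudes per electrode
    grid_xy      : (n_points, 2) face-grid points to interpolate
    taper        : assumed fraction over which the 4th-neighbor weight fades to zero
    """
    out = np.empty(len(grid_xy))
    for i, p in enumerate(grid_xy):
        d = np.linalg.norm(electrode_xy - p, axis=1)
        nearest = np.argsort(d)[:5]                      # 4 nearest plus the next candidate
        w = 1.0 / np.maximum(d[nearest[:4]], 1e-9) ** 2  # inverse-squared-distance weights
        # Fade the 4th neighbor's weight to zero where the 4th and 5th nearest
        # electrodes are almost equally distant, so the map stays continuous when
        # the identity of the 4th neighbor changes.
        gap = (d[nearest[4]] - d[nearest[3]]) / max(d[nearest[4]], 1e-9)
        w[3] *= np.clip(gap / taper, 0.0, 1.0)
        out[i] = np.sum(w * values[nearest[:4]]) / np.sum(w)
    return out

def mirror_channels(xy, values, midline_x=0.0):
    """Mirror right-hemiface channels onto the left side before interpolation."""
    mirrored = xy.copy()
    mirrored[:, 0] = 2 * midline_x - mirrored[:, 0]
    return np.vstack([xy, mirrored]), np.concatenate([values, values])
```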
Statistics
A linear mixed-effects model (LMM) was applied for the analysis of the activation pattern of all electrodes (channels). Such a model accounts for both fixed and random effects, which makes it suitable for the analysis of highly complex data16,17. Initially, the sEMG amplitudes for all facial expressions were calculated as mean values with an estimated 95% confidence interval (CI). All electrodes were included in the calculation to evaluate the main effects of the parameters "run" (movement repetition R1, R2, R3) and "movement" (the 11 exercises) together with their interactions. "Run" and "movement" were modeled as fixed effects with a random intercept per subject. Initially, all main effects and interactions were calculated, but for the final analysis, only the significant main effects together with significant interactions remained in the calculation. Adjustment for multiple comparisons for differences between the tested facial movements was performed with the least significant difference procedure. The significance level was set to 5%. For the heatmaps, mean values and standard deviations were calculated. Amplitude heatmaps are displayed in two ways: (i) minimum to maximum scaling separately per facial movement for topographical pattern recognition (absolute heatmaps), and (ii) minimum to maximum scaling across all facial expressions for intensity comparison between facial movements (relative heatmaps). To allow comparison with other data sets, heatmaps displaying the topographical distribution of the dimensionless coefficient of variation (CV) were calculated additionally.
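For readers who want to reproduce a comparable mixed-model specification, the following hypothetical sketch fits such a model with the Python package statsmodels. The long-format table and its column names are assumptions made for illustration, not the authors' actual data layout or software.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one RMS amplitude per subject, channel, run and movement.
df = pd.read_csv("semg_rms_long.csv")  # assumed columns: subject, channel, run, movement, rms

# Fixed effects for run and movement (plus their interaction),
# random intercept per subject, as described in the Statistics section.
model = smf.mixedlm(
    "rms ~ C(run) * C(movement)",
    data=df,
    groups=df["subject"],
)
fit = model.fit(reml=True)
print(fit.summary())

# Pairwise movement contrasts (averaged over runs) can then be tested on the fitted
# model, e.g. with Wald tests per pair; the least significant difference procedure
# corresponds to using unadjusted pairwise p-values after a significant overall effect.
```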
Figure1. e experimental setup and the positions of the electrodes on the face: Channel 0–14 on the right
side of the face and channel 15 on the contralateral le forehead. In addition, a reference electrode is placed on
the right mastoid. Participants were sitting in front of a computer screen, followed the instructions of a video
tutorial, and imitated the shown facial expressions (Illustration: courtesy of Sonja Burger).
Ethics statement
Written informed consent was obtained from all participants. The ethics committee of Tel Aviv University approved the study.
Results
Activation of the different electrodes of the wireless electrode masks during the facial movement exercises
Figure 2 summarizes the activation of each electrode during the following facial exercises: rest (R), wrinkling of the forehead (WF), closing the eyes normally (CEN), closing the eyes forcefully (CEF), wrinkling of the nose (WN), closed mouth smiling (CMS), open mouth smiling (OMS), lip puckering (LP), blowing-out the cheeks (BC), snarling (S), and depressing lower lips (DLL). Details on the average activation of each channel (electrode) during each run and for each of the 11 exercises of all subjects are shown in the Supplementary Figs. S2–S12. The electrical activity per channel decreased (but not significantly) from run to run for most channels and most exercises.
Figure 2. Activation of the 14 ipsilateral electrodes (ch = channel 0 to 14) during the 11 exercises. Upper panel: min–max normalization per row (i.e., per facial expression). Lower panel: min–max normalization across all facial expressions. Average sEMG amplitudes in µV are color-coded from low activation (black) to high activation (yellow). R at rest, WF wrinkling of the forehead, CEN closing the eyes normally, CEF closing the eyes forcefully, WN wrinkling of the nose, CMS closed mouth smiling, OMS open mouth smiling, LP lip puckering, BC blowing-out the cheeks, S snarling, DLL depressing lower lips. The underlying average sEMG amplitudes (µV) per channel and exercise (identical values in both panels):

      ch0   ch1   ch2   ch3   ch4   ch5   ch6   ch7   ch8   ch9   ch10  ch11  ch12  ch13  ch14
R     1.7   1.7   1.5   1.6   1.4   1.5   1.5   1.5   1.4   1.5   1.5   1.6   1.7   1.5   1.6
WF    2.1   1.9   1.8   2.0   1.9   1.8   1.9   1.9   2.1   2.0   1.9   2.1   5.7   5.9   8.6
CEN   1.6   1.5   1.4   1.5   1.4   1.5   1.5   1.5   1.5   1.8   1.6   1.6   1.7   1.6   1.6
CEF   2.8   2.2   2.4   2.3   2.7   2.7   4.3   4.0   9.0   10.1  8.0   10.5  8.5   8.9   5.0
WN    7.3   2.6   2.4   3.0   3.2   2.4   3.2   6.5   19.9  4.3   3.1   3.1   4.2   5.9   4.8
CMS   6.5   4.8   9.5   6.6   6.1   7.4   5.7   4.2   3.6   4.4   4.7   4.0   2.8   2.4   2.5
OMS   17.8  5.3   14.8  8.4   9.0   11.4  8.9   6.0   4.9   6.8   6.9   5.7   3.3   3.1   3.1
LP    33.3  4.3   2.7   5.7   3.8   2.6   2.9   5.2   5.9   3.2   2.6   2.3   2.5   2.6   2.4
BC    17.4  4.9   4.2   5.1   4.3   4.0   3.6   4.4   5.1   3.5   3.4   3.7   3.7   3.0   3.0
S     35.2  7.8   9.5   7.0   8.0   9.0   8.1   9.9   21.6  8.9   7.8   8.0   7.3   7.9   6.6
DLL   14.1  8.8   4.4   5.4   4.0   4.0   3.6   4.6   4.7   3.6   3.4   3.4   3.4   3.0   3.0
The activity did not vary much between the channels at rest. Differences in activity between the channels were not compared statistically for each task separately; we describe them here only descriptively. Channels 12–14 showed pronounced activity during frowning. All channels were activated almost uniformly during gentle eye closure. This changed during forced eye closure, where channels 8–13 were mainly activated. Channel 8 was mainly activated during nose wrinkling, whereas the other channels were nearly silent. Smiling with a closed mouth showed maximal activation in channel 2, and the signal continuously decreased as the distance from channel 2 increased (with the minimal value obtained in channel 14). Channels 0 and 2 were dominant during smiling with an open mouth, and the decrease in signal with increasing distance was not as clear as for the exercise with a closed mouth. Lip puckering and blowing the cheeks led to an exclusive activation of channel 0. The muscle activation decreased over the runs for blowing the cheeks. Snarling differed in its activation from blowing the cheeks mainly by the simultaneous activation of channel 0 and channel 8. Depressing the lower lips was dominated by the activation of channels 0 and 1.
Discrimination of the facial muscle activity during the different facial exercises by wireless sEMG recordings using the entire activation pattern of all channels
The specific facial muscle activation patterns during the 11 exercises are visualized with topographic heatmaps in Fig. 3. The normalized heatmaps show localized activation with clear resemblance to the anticipated activation region. The results of the discrimination analysis are shown in Table 1. The average muscle activation of all three runs of facial exercises was evaluated, as well as each run separately. Nearly all facial exercises could be differentiated from each other: 82% of all exercises could be differentiated by the wireless sEMG recordings when using the average data of all three runs. 73%, 81%, and 84% of the exercises could be differentiated from each other in run 1, run 2, and run 3, respectively (all had p < 0.05 after correction for multiple testing).
Focusing on the averaged values of all runs, wrinkling of the forehead (frowning), normal (gentle) eye closure, and snarling differed from all other exercises in the EMG activation pattern (all p < 0.05). In general, depressing the lower lips, forced eye closure and wrinkling the nose were, in decreasing order, the most difficult to discriminate from other exercises. Depressing the lower lips was impossible to differentiate from nose wrinkling, smiling with closed mouth, and blowing the cheeks (p > 0.05). A forced eye closure could not be differentiated from lip puckering (p > 0.05). Nose wrinkling was similar to closed mouth smiling, lip puckering, blowing the cheeks, and depressing the lower lips (all p > 0.05). Finally, blowing the cheeks could not be differentiated from depressing the lower lips. With increasing repetition number, the discrimination from other exercises improved for depressing the lower lips.
Dierences between the summatory facial muscle activation between the three runs of facial
movement exercises
e overall facial muscle activation, i.e., the sum of the recordings of the entire electrode array, during each of the
11 exercises is shown in Fig.4. e highest summatory facial muscle activation is seen during snarling, followed
by smiling with open mouth. e lowest activation was observed during resting and closing the eyes normally.
e electrode independent average activation decreased during the repetitions in most exercises.
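One plausible way to compute this electrode-independent summary measure (the quantity plotted in Fig. 4) is sketched below. It reuses the hypothetical long-format table introduced in the Statistics example and is not the authors' actual analysis script; the normal-approximation confidence interval is an assumption.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("semg_rms_long.csv")  # assumed columns: subject, channel, run, movement, rms

# Average RMS across all electrodes per subject, run and movement,
# then mean and approximate 95% CI across subjects.
per_subject = df.groupby(["subject", "run", "movement"], as_index=False)["rms"].mean()
summary = per_subject.groupby(["run", "movement"])["rms"].agg(["mean", "std", "count"])
summary["ci95"] = 1.96 * summary["std"] / np.sqrt(summary["count"])
print(summary[["mean", "ci95"]])
```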
Discussion
The standard for high-resolution EMG to discriminate between facial exercises or emotional expressions in psychosocial and medical research is the use of standard gel electrode pairs on the skin18. Typically, 40 electrodes are placed on the face, oriented along the topographical position of the underlying facial muscles4. The electrodes have to be placed in the direction of the muscle fibers and with constant inter-electrode distance. Reaching high accuracy in electrode placement to produce reliable results is time-consuming and requires a lot of experience. High-density sEMG for special settings even uses up to 90 electrodes19. Such high-density recordings are so complex that they are not suitable for routine applications. An alternative to electrode-by-electrode placement is a fixed geometric arrangement designed to match typical facial features, in a manner similar to electroencephalography (EEG) caps8. Placement of such electrode arrays mandates neither detailed knowledge of facial muscle anatomy nor a time-consuming process. Several recent studies have shown that the smart skin electrode array technology used in the present study is much easier and faster to apply9–11. What is more, when repeated sessions are required for the same individual, the arrays allow placement of a large number of electrodes at almost identical positions. Hence, the aim of the present study was to test whether the recordings of such a self-adhesive EMG foil mask are precise enough to reliably distinguish between defined facial movements.
Although we did not perform a head-to-head comparison to standard sEMG settings5,6, we assert that the values shown here for the discrimination of different facial exercises are sufficient for common psychosocial or medical research settings. It has already been shown that the sEMG arrays can be used to detect emotions9. Whether the adhesive sEMG masks are also reliable for the discrimination of imitated basic emotions remains to be confirmed in future studies7. The present study has not yet made use of the second advantage of the sEMG masks: the wireless design does not restrict participants to sit upright in a chair. Demeco et al. used four wireless sEMG electrode pairs for facial muscle recordings in patients with facial palsy20. This allowed better quantification of a synchronously applied video movement analysis. Unfortunately, the source of the electrodes and their usability as a medical device are not described in the aforementioned study. It remains unclear whether the wireless design in combination with the light weight of the films will allow more natural, or at least unhindered, facial movements. Therefore, a head-to-head comparison with other settings and an analysis of the movement would be needed. Recently, we have shown that it is possible to digitally remove electrodes from images and videos of participants performing facial expressions using machine learning algorithms9,21. We assume that similar removal would also be feasible (even easier) with the EMG masks, as they conceal less of the face (due to the lack of wires and the minimal dimension of the electrodes). This would allow meaningful automated image analysis of the facial surface synchronously with the sEMG recording, combining the strengths of both methods for an optimal discrimination of functional facial and emotional expressions22–24.
Figure3. Topographical normalized heatmaps of facial muscle activation patterns during specic facial
exercises. For better visualization, the measured activation on the right side was mirrored to the le side.
e upper le map shows the localization of the electrodes (channels, also with their le-sided mirrors). e
variability of the muscle activity in µV is plotted below each heatmap. (A) Absolute mean values: minimum
to maximum scaling separately per facial movement for topographical pattern recognition. (B) Relative mean
values: minimum to maximum scaling across all facial expressions for intensity comparison between facial
movements. (C) Standard deviation of A; (D) coecient of variation (CV) of A. R at rest, WF wrinkling of
the forehead, CEN closing the eyes normally, CEF closing the eyes forcefully, WN wrinkling of the nose, CMS
closed mouth smiling, OMS open mouth smiling, LP lip puckering, BC blowing-out the cheeks, S snarling, DLL
depressing lower lips. e same color coding as in Fig.2 was used.
The present study has limitations. Using the video self-tutorial to demonstrate the facial movement tasks seems to be a very reliable, observer-independent instruction technique, but it relies on an imitation design for performing facial expressions13. A wireless application will allow more natural settings in future trials. Furthermore, the comparison between the three sessions showed that there is some variance in how the participants performed the exercises, as no feedback was implemented18,25,26. Hence, the difficulty in differentiating between some facial expressions may reflect not only the presented electrode array system but also the subjects' performance.
Table 1. Discrimination between the facial muscle activity during the different facial exercises by the patterns of wireless facial sEMG recordings. R at rest, WF wrinkling of the forehead, CEN closing the eyes normally, CEF closing the eyes forcefully, WN wrinkling of the nose, CMS closed mouth smiling, OMS open mouth smiling, LP lip puckering, BC blowing-out the cheeks, S snarling, DLL depressing lower lips, n.s. not significant. The comparisons for the average of the three runs are shown as well as the results for each of the three runs separately. A summary of how many pairwise comparisons reached each significance level is given below each panel. In the published table, highly significant differences are marked in yellow (p < 0.001) and significant values in orange (0.001 < p < 0.05). A dash (–) marks comparisons already listed above the diagonal.

Average of the three runs (pairwise p-values):
      WF      CEN     CEF     WN      CMS     OMS     LP      BC      S       DLL
R     <0.001  0.977   <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
WF    –       <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEN   –       –       <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEF   –       –       –       0.039   0.022   <0.001  0.722   0.005   <0.001  <0.001
WN    –       –       –       –       0.835   <0.001  0.090   0.473   <0.001  0.504
CMS   –       –       –       –       –       <0.001  0.056   0.607   <0.001  0.643
OMS   –       –       –       –       –       –       <0.001  <0.001  <0.001  <0.001
LP    –       –       –       –       –       –       –       0.016   <0.001  0.018
BC    –       –       –       –       –       –       –       –       <0.001  0.961
S     –       –       –       –       –       –       –       –       –       <0.001
Summary: p < 0.001: 39 (71%); p < 0.05: 6 (11%); n.s.: 10 (18%).

Run 1 (pairwise p-values):
      WF      CEN     CEF     WN      CMS     OMS     LP      BC      S       DLL
R     <0.001  0.968   <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
WF    –       <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEN   –       –       <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEF   –       –       –       0.318   0.071   <0.001  0.854   0.323   <0.001  0.459
WN    –       –       –       –       0.428   <0.001  0.241   0.992   <0.001  0.798
CMS   –       –       –       –       –       <0.001  0.048   0.422   <0.001  0.294
OMS   –       –       –       –       –       –       <0.001  <0.001  <0.001  <0.001
LP    –       –       –       –       –       –       –       0.244   <0.001  0.358
BC    –       –       –       –       –       –       –       –       <0.001  0.806
S     –       –       –       –       –       –       –       –       –       <0.001
Summary: p < 0.001: 39 (71%); p < 0.05: 1 (2%); n.s.: 15 (27%).

Run 2 (pairwise p-values):
      WF      CEN     CEF     WN      CMS     OMS     LP      BC      S       DLL
R     <0.001  0.972   <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
WF    –       <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEN   –       –       <0.001  0.001   <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEF   –       –       –       <0.001  <0.001  <0.001  0.070   <0.001  <0.001  <0.001
WN    –       –       –       –       0.407   <0.001  0.147   0.751   <0.001  0.250
CMS   –       –       –       –       –       <0.001  0.022   0.611   <0.001  0.740
OMS   –       –       –       –       –       –       <0.001  <0.001  <0.001  <0.001
LP    –       –       –       –       –       –       –       0.077   <0.001  0.009
BC    –       –       –       –       –       –       –       –       <0.001  0.404
S     –       –       –       –       –       –       –       –       –       <0.001
Summary: p < 0.001: 42 (76%); p < 0.05: 3 (5%); n.s.: 10 (18%).

Run 3 (pairwise p-values):
      WF      CEN     CEF     WN      CMS     OMS     LP      BC      S       DLL
R     <0.001  0.922   <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
WF    –       <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEN   –       –       <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
CEF   –       –       –       0.074   0.407   <0.001  0.772   <0.001  <0.001  0.004
WN    –       –       –       –       0.344   <0.001  0.039   0.070   <0.001  0.283
CMS   –       –       –       –       –       <0.001  0.266   0.005   <0.001  0.041
OMS   –       –       –       –       –       –       <0.001  <0.001  <0.001  <0.001
LP    –       –       –       –       –       –       –       <0.001  <0.001  0.002
BC    –       –       –       –       –       –       –       –       <0.001  0.460
S     –       –       –       –       –       –       –       –       –       <0.001
Summary: p < 0.001: 41 (75%); p < 0.05: 5 (9%); n.s.: 9 (16%).
Depressing the lower lips, an exercise which might be performed infrequently in daily life, showed the highest variability. The question of the optimal design, the electrode positions, and the positioning of the DAU box in the EMG mask remains unanswered. Additional electrodes on the depressor anguli oris muscle and on the mentalis muscle would possibly help to achieve better discrimination between, for instance, the DLL and LP tasks. The depressor anguli oris muscle is important for DLL and the mentalis muscle for LP27,28. Furthermore, perhaps the difference between CEN and CEF would become clearer if the pars palpebralis of the orbicularis oculi were also included in the recordings. Midline facial muscles could also be covered better. For instance, the procerus muscle is not yet included29. The procerus muscle is important for frowning and for expressing emotional distress29. An advantage of the EMG foils is that they can, in principle, be printed in any shape for future trials. It will be important to print a mirror-inverted version of the mask to allow bilateral recordings.
As the discrimination from other exercises improved from session to session, one might conclude that it is advisable to take advantage of such a learning curve when using the present set of facial expression imitations in future trials. It can be assumed that trained users show a lower variability than untrained users. It is to be expected that an objective discrimination with the sEMG mask will then be even more precise15. It has already been shown that the sEMG mask used here was more reliable than visual analysis when discriminating facial expressions between different sessions15. It has to be shown in future studies whether this advantage holds true for repeated sessions separated by longer intervals, for instance, months.
Conclusions
The wireless high-resolution sEMG mask consisting of an adhesive electrode array film allowed a reliable discrimination of most standardized facial expressions in healthy adults. We recommend using the wireless adhesive multichannel sEMG system in psychosocial and medical research to also take advantage of the benefits of wireless use and the ease of attaching it in settings with repeated sessions over several days.
Data availability
The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.
Received: 15 March 2024; Accepted: 13 August 2024
Figure 4. Electrode-independent facial muscle activation during specific facial exercises performed three times (three runs R1, R2, R3). The x-axis shows the different exercises. The y-axis shows the average values (± 95% confidence interval) of the root-mean-square (rms) of the sEMG amplitudes across all electrodes in µV. R at rest, WF wrinkling of the forehead, CEN closing the eyes normally, CEF closing the eyes forcefully, WN wrinkling of the nose, CMS closed mouth smiling, OMS open mouth smiling, LP lip puckering, BC blowing-out the cheeks, S snarling, DLL depressing lower lips. Asterisks (R1 vs. R2), dots (R1 vs. R3) and triangles (R2 vs. R3) indicate significant differences between the respective runs.

References
1. Hubert, W. & de Jong-Meyer, R. Psychophysiological response patterns to positive and negative film stimuli. Biol. Psychol. 31, 73–93. https://doi.org/10.1016/0301-0511(90)90079-c (1991).
2. Hoing, T. T. A., Gerdes, A. B. M., Fohl, U. & Alpers, G. W. Read my face: Automatic facial coding versus psychophysiological
indicators of emotional valence and arousal. Front. Psychol. 11, 1388. https:// doi. org/ 10. 3389/ fpsyg. 2020. 01388 (2020).
3. Schumann, N. P., Bongers, K., Scholle, H. C. & Guntinas-Lichius, O. Atlas of voluntary facial muscle activation: Visualization of
surface electromyographic activities of facial muscles during mimic exercises. PLoS ONE 16, e0254932. https:// doi. org/ 10. 1371/
journ al. pone. 02549 32 (2021).
4. Fridlund, A. J. & Cacioppo, J. T. Guidelines for human electromyographic research. Psychophysiology 23, 567–589. https:// doi. org/
10. 1111/j. 1469- 8986. 1986. tb006 76.x (1986).
5. Mueller, N. et al. High-resolution surface electromyographic activities of facial muscles during mimic movements in healthy adults:
A prospective observational study. Front. Hum. Neurosci. https:// doi. org/ 10. 3389/ fnhum. 2022. 10294 15 (2022).
6. Trentzsch, V. et al. Test–retest reliability of high-resolution surface electromyographic activities of facial muscles during facial
expressions in healthy adults: A prospective observational study. Front. Hum. Neurosci. 17, 1126336. https:// doi. org/ 10. 3389/
fnhum. 2023. 11263 36 (2023).
7. Guntinas-Lichius, O. et al. High-resolution surface electromyographic activities of facial muscles during the six basic emotional
expressions in healthy adults: A prospective observational study. Sci. Rep. 13, 19214. https:// doi. org/ 10. 1038/ s41598- 023- 45779-9
(2023).
8. Kuramoto, E., Yoshinaga, S., Nakao, H., Nemoto, S. & Ishida, Y. Characteristics of facial muscle activity during voluntary facial
expressions: Imaging analysis of facial expressions based on myogenic potential data. Neuropsychopharmacol. Rep. 39, 183–193.
https:// doi. org/ 10. 1002/ npr2. 12059 (2019).
9. Inzelberg, L., Rand, D., Steinberg, S., David-Pur, M. & Hanein, Y. A wearable high-resolution facial electromyography for long
term recordings in freely behaving humans. Sci. Rep. 8, 2058. https:// doi. org/ 10. 1038/ s41598- 018- 20567-y (2018).
10. Inzelberg, L., David-Pur, M., Gur, E. & Hanein, Y. Multi-channel electromyography-based mapping of spontaneous smiles. J.
Neural Eng. 17, 026025. https:// doi. org/ 10. 1088/ 1741- 2552/ ab7c18 (2020).
11. Gat, L., Gerston, A., Shikun, L., Inzelberg, L. & Hanein, Y. Similarities and disparities between visual analysis and high-resolution
electromyography of facial expressions. PLoS ONE 17, e0262286. https:// doi. org/ 10. 1371/ journ al. pone. 02622 86 (2022).
12. Schaede, R. A. et al. Video instruction for synchronous video recording of mimic movement of patients with facial palsy. Laryngo
Rhino Otol. https:// doi. org/ 10. 1055/s- 0043- 101699 (2017).
13. Volk, G. F. et al. Reliability of grading of facial palsy using a video tutorial with synchronous video recording. e Laryngoscope
129, 2274–2279. https:// doi. org/ 10. 1002/ lary. 27739 (2019).
14. Jäger, J., Klein, A., Buhmann, M. & Skrandies, W. Reconstruction of electroencephalographic data using radial basis functions.
Clin. Neurophysiol. 127, 1978–1983. https:// doi. org/ 10. 1016/j. clinph. 2016. 01. 003 (2016).
15. Soong, A. C., Lind, J. C., Shaw, G. R. & Koles, Z. J. Systematic comparisons of interpolation techniques in topographic brain map-
ping. Electroencephalogr. Clin. Neurophysiol. 87, 185–195. https:// doi. org/ 10. 1016/ 0013- 4694(93) 90018-q (1993).
16. Verbeke, G. & Molenberghs, G. Linear Mixed Models for Longitudinal Data. Springer Series in Statistics (Springer, 2000).
17. Habets, L. E. et al. Enhanced low-threshold motor unit capacity during endurance tasks in patients with spinal muscular atrophy
using pyridostigmine. Clin. Neurophysiol. 154, 100–106 (2023).
18. Hess, U. et al. Reliability of surface facial electromyography. Psychophysiology 54, 12–23. https:// do i . o r g/ 10. 1111/ psyp. 12676 (2017).
19. Cui, H. et al. Comparison of facial muscle activation patterns between healthy and Bell’s palsy subjects using high-density surface
electromyography. Front. Hum. Neurosci. 14, 618985. https:// doi. org/ 10. 3389/ fnhum. 2020. 618985 (2020).
20. Demeco, A. et al. Quantitative analysis of movements in facial nerve palsy with surface electromyography and kinematic analysis.
J. Electromyogr. Kinesiol. 56, 102485. https:// doi. org/ 10. 1016/j. jelek in. 2020. 102485 (2021).
21. Büchner, T. et al. Let’s Get the FACS Straight: Reconstructing Obstructed Facial Features 727–736. https:// doi. org/ 10. 5220/ 00116
19900 003417 (2023).
22. Buchner, T., Sickert, S., Volk, G. F., Guntinas-Lichius, O. & Denzler, J. Automatic objective severity grading of peripheral facial
palsy using 3D radial curves extracted from point clouds. Stud. Health Technol. Inform. 294, 179–183. https:// doi. org/ 10. 3233/
SHTI2 20433 (2022).
23. Küntzler, T., Höing, T. T. A. & Alpers, G. W. Automatic facial expression recognition in standardized and non-standardized
emotional expressions. Front. Psychol. 12, 627561 (2021).
24. Kim, H., Küster, D., Girard, J. M. & Krumhuber, E. G. Human and machine recognition of dynamic and static facial expressions:
Prototypicality, ambiguity, and complexity. Front. Psychol. 14, 1221081 (2023).
25. O’Dwyer, N. J., Quinn, P. T., Guitar, B. E., Andrews, G. & Neilson, P. D. Procedures for verication of electrode placement in EMG
studies of orofacial and mandibular muscles. J. Speech Hear. Res. 24, 273–288. https:// doi. org/ 10. 1044/ jshr. 2402. 273 (1981).
26. Jung, J. K. & Im, Y. G. Can the subject reliably reproduce maximum voluntary contraction of temporalis and masseter muscles in
surface EMG? Cranio. https:// doi. org/ 10. 1080/ 08869 634. 2022. 21422 34 (2022).
27. Vejbrink Kildal, V. et al. Anatomical features in lower-lip depressor muscles for optimization of myectomies in marginal mandibular
nerve palsy. J. Craniofac. Surg. 32(6), 2230–2232. https:// doi. org/ 10. 1097/ SCS. 00000 00000 007622 (2021).
28. Hur, M. S. et al. Morphology of the mentalis muscle and its relationship with the orbicularis oris and incisivus labii inferioris
muscles. J. Craniofac. Surg. 24, 602–604. https:// doi. org/ 10. 1097/ SCS. 0b013 e3182 67bcc5 (2013).
29. Hur, M. S. Anatomical relationships of the procerus with the nasal ala and the nasal muscles: Transverse part of the nasalis and
levator labii superioris alaeque nasi. Surg. Radiol. Anat. 39, 865–869 (2017).
Author contributions
OGL, GFV, CA, YH: conceptualization. OGL and CA: first draft preparation. PF, BL, CBH, DBD, GFV: data acquisition. PF, RG and CA: data analysis. OGL and YH: supervision. All authors contributed to the article and approved the final version.
Funding
Open Access funding enabled and organized by Projekt DEAL. Orlando Guntinas-Lichius acknowledges support by the Deutsche Forschungsgemeinschaft (DFG), Grant No. GU-463/12-1. Yael Hanein acknowledges support by the Israel Science Foundation (ISF), Grant No. 538/22, and the European Research Council (ERC), Grant Outer-Ret, No. 101053186.
Competing interests
Yael Hanein declares a financial interest in X-trodes Ltd., which holds the licensing rights of the EMG skin technology cited in this paper. This does not alter her adherence to scientific and publication policies on sharing data and materials. All other authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Additional information
Supplementary Information The online version contains supplementary material available at https://doi.org/10.1038/s41598-024-70205-z.
Correspondence and requests for materials should be addressed to O.G.-L.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

© The Author(s) 2024
... Facial sEMG measurements using wired electrodes are limited, typically confined to a laboratory environment, and the occurrence of motion artifacts, which restricts their effectiveness in natural settings 14 . Additionally, the precise positioning of many sEMG electrodes is timeconsuming and is therefore not suitable for clinical routine 6,15 . Screen-printed electrode arrays on 2/24 soft support offer an alternative to the cumbersome gelled sEMG electrodes 16 . ...
... Screen-printed electrode arrays on 2/24 soft support offer an alternative to the cumbersome gelled sEMG electrodes 16 . These electrodes exhibit ease of operation, fast placement, convenience to the patient, and, as recently established, high-quality data comparable to gelled electrodes in facial EMG applications 15,17 . ...
... Improving the interface with easy-to-use and stable electrode technology 15 is an important step toward advancing sEMG technology, but significant challenges remain across all EMG electrode types. One of the most demanding aspects is the precise extraction and interpretation of meaningful data from sEMG signals. ...
Preprint
Full-text available
Facial muscles are unique in their attachment to the skin, dense innervation, and complex co-activation patterns, enabling fine motor control in various physiological tasks. Facial surface Electromyography (sEMG) is a valuable tool for assessing muscle function, yet traditional setups remain restrictive, requiring meticulous electrode placement and limiting mobility due to susceptibility to mechanical artifacts. Additionally, sEMG signal extraction is hindered by noise and cross-talk from adjacent muscles. Owing to these limitations, associating facial muscle activity with facial expressions has been challenging. Here, we leverage a novel 16-channel conformal sEMG system to extract meaningful electrophysiological data. By applying denoising and source separation techniques, we separated data from 32 healthy participants into independent sources and clustered them based on spatial distribution to create a facial muscle Atlas. Furthermore, we established a functional mapping between these clusters and specific muscle units, providing a comprehensive framework for understanding facial muscle activation patterns. Using this foundation, we demonstrated a participant-specific deep-learning model capable of predicting facial expressions from sEMG signals. This novel approach opens new avenues for facial muscle monitoring, with potential applications in rehabilitation in the medicine and psychological fields, where a precise understanding of facial muscle functions is crucial.
... At the moment we are evaluating if the EMG recordings of the biofeedback training itself, so far only used for the real-time visualization during the training, also could be used for facial muscle hypertonicity evaluation 33 . Recently, we have shown that facial HR-sEMG can also be recorded with a smart skin technology consisting of wireless dry 16-electrodes applied as self-adhesive foils 34 . Therefore, it is planned to apply these innovative EMG foils also for the analysis of the effects of the presented biofeedback training. ...
Article
Full-text available
Facial aberrant reinnervation after unilateral facial paralysis is characterized by facial synkinesis and global facial muscle hypertonicity. Therefore, therapy effort is directed on improved facial symmetry by reducing facial synkinesis and the elevated muscle tone. There are no established methods to confirm these aims objectively. Therefore the aim of the present study was to verify if high-resolution surface electromyography (HR-sEMG) mapping of the entire face during standardized facial movements is one such sought-after method. Bilateral HR-sEMG facial mapping was performed in 36 patients (81% women; age range: 24–70 years) with a postparalytic facial nerve syndrome. Participants performed a standard set of standardized facial movement tasks before start (T0) and after nine days of training (T9). A linear mixed-effects model was used to evaluate differences between the facial movement tasks in-between the synkinetic side and the contralateral side at T0 and T9. The overall facial muscle activity was higher on the synkinetic side compared to the contralateral side at T0 (p < 0.001) and also at T9, but with reduced difference between sides (p ≤ 0.002). The overall muscle activity decreased on the synkinetic side and on the contralateral side (both p < 0.001). These effects were also verifiable for almost every investigated muscle. HR-sEMG facial mapping proved its suitability as an objective method to confirm facial feedback training effects: A combined visual and EMG-based facial biofeedback training seemed to reduce the facial muscle activity on both facial sides, but markedly more effective on the synkinetic side.
... However, the mimic musculature consists of >15 muscles, so interference phenomena and inaccuracies may occur due to our electrode setup. More precise statements about muscle function might be possible using a high-density EMG [48][49][50][51]. At the same time, the many electrodes may contribute to reduced mobility in the facial area. ...
Article
Full-text available
Highlights What are the main findings? Facial surface EMG can assess facial palsy severity. Biofeedback in facial palsy can be facilitated by appropriate EMG parameters. What are the implications of the main findings? Motion classification (movement vs. rest) by sEMG is feasible even in severe cases of facial palsy. The results constitute the foundation for further studies on biofeedback algorithms needed for EMG biofeedback in facial palsy. Abstract Facial palsy (FP) significantly impacts patients’ quality of life. The accurate classification of FP severity is crucial for personalized treatment planning. Additionally, electromyographic (EMG)-based biofeedback shows promising results in improving recovery outcomes. This prospective study aims to identify EMG time series features that can both classify FP and facilitate biofeedback. Therefore, it investigated surface EMG in FP patients and healthy controls during three different facial movements. Repeated-measures ANOVAs (rmANOVA) were conducted to examine the effects of MOTION (move/rest), SIDE (healthy/lesioned) and the House–Brackmann score (HB), across 20 distinct EMG parameters. Correlation analysis was performed between HB and the asymmetry index of EMG parameters, complemented by Fisher score calculations to assess feature relevance in distinguishing between HB levels. Overall, 55 subjects (51.2 ± 14.73 years, 35 female) were included in the study. RmANOVAs revealed a highly significant effect of MOTION across almost all movement types (p < 0.001). Integrating the findings from rmANOVA, the correlation analysis and Fisher score analysis, at least 5/20 EMG parameters were determined to be robust indicators for assessing the degree of paresis and guiding biofeedback. This study demonstrates that EMG can reliably determine severity and guide effective biofeedback in FP, and in severe cases. Our findings support the integration of EMG into personalized rehabilitation strategies. However, further studies are mandatory to improve recovery outcomes.
Preprint
Full-text available
The relationship between muscle activity and resulting facial expressions is crucial for various fields, including psychology, medicine, and entertainment. The synchronous recording of facial mimicry and muscular activity via surface electromyography (sEMG) provides a unique window into these complex dynamics. Unfortunately, existing methods for facial analysis cannot handle electrode occlusion, rendering them ineffective. Even with occlusion-free reference images of the same person, variations in expression intensity and execution are unmatchable. Our electromyography-informed facial expression reconstruction (EIFER) approach is a novel method to restore faces under sEMG occlusion faithfully in an adversarial manner. We decouple facial geometry and visual appearance (e.g., skin texture, lighting, electrodes) by combining a 3D Morphable Model (3DMM) with neural unpaired image-to-image translation via reference recordings. Then, EIFER learns a bidirectional mapping between 3DMM expression parameters and muscle activity, establishing correspondence between the two domains. We validate the effectiveness of our approach through experiments on a dataset of synchronized sEMG recordings and facial mimicry, demonstrating faithful geometry and appearance reconstruction. Further, we synthesize expressions based on muscle activity and how observed expressions can predict dynamic muscle activity. Consequently, EIFER introduces a new paradigm for facial electromyography, which could be extended to other forms of multi-modal face recordings.
Article
Full-text available
High-resolution facial surface electromyography (HR-sEMG) is suited to discriminate between different facial movements. Whether HR-sEMG also allows a discrimination among the six basic emotions of facial expression is unclear. 36 healthy participants (53% female, 18–67 years) were included for four sessions. Electromyograms were recorded from both sides of the face using a muscle-position oriented electrode application (Fridlund scheme) and by a landmark-oriented, muscle unrelated symmetrical electrode arrangement (Kuramoto scheme) simultaneously on the face. In each session, participants expressed the six basic emotions in response to standardized facial images expressing the corresponding emotions. This was repeated once on the same day. Both sessions were repeated two weeks later to assess repetition effects. HR-sEMG characteristics showed systematic regional distribution patterns of emotional muscle activation for both schemes with very low interindividual variability. Statistical discrimination between the different HR-sEMG patterns was good for both schemes for most but not all basic emotions (ranging from p > 0.05 to mostly p < 0.001) when using HR-sEMG of the entire face. When using information only from the lower face, the Kuramoto scheme allowed a more reliable discrimination of all six emotions (all p < 0.001). A landmark-oriented HR-sEMG recording allows specific discrimination of facial muscle activity patterns during basic emotional expressions.
Article
A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which this dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and a machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. This benefit disappeared in the context of target-emotion images, which were recognised similarly well as (or even better than) videos, and were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power in machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was found to be superior to humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
Article
Objectives Surface electromyography (sEMG) is a standard method in psycho-physiological research to evaluate emotional expressions and in clinical settings to analyze facial muscle function. High-resolution sEMG shows the best results in discriminating between different facial expressions. Nevertheless, the test-retest reliability of high-resolution facial sEMG has not yet been analyzed in detail, although good reliability is a necessary prerequisite for its repeated clinical application. Methods Thirty-six healthy adult participants (53% female, 18–67 years) were included. Electromyograms were recorded from both sides of the face using an arrangement of electrodes oriented by the underlying topography of the facial muscles (Fridlund scheme) and simultaneously by a geometric and symmetrical arrangement on the face (Kuramoto scheme). In one session, participants performed three trials of a standard set of different facial expression tasks. Two sessions were performed on one day, and both sessions were repeated two weeks later. Intraclass correlation coefficient (ICC) and coefficient of variation statistics were used to analyze intra-session, intra-day, and between-day reliability. Results Fridlund scheme, mean ICCs per electrode position: intra-session: excellent (0.935–0.994), intra-day: moderate to good (0.674–0.881), between-day: poor to moderate (0.095–0.730). Mean ICCs per facial expression: intra-session: excellent (0.933–0.991), intra-day: moderate to good (0.674–0.903), between-day: poor to moderate (0.385–0.679). Kuramoto scheme, mean ICCs per electrode position: intra-session: excellent (0.957–0.970), intra-day: good (0.751–0.908), between-day: moderate (0.643–0.742). Mean ICCs per facial expression: intra-session: excellent (0.927–0.991), intra-day: good to excellent (0.762–0.973), between-day: poor to good (0.235–0.868). The intra-session reliability of both schemes was equal. Compared to the Fridlund scheme, the ICCs for intra-day and between-day reliability were always better for the Kuramoto scheme. Conclusion For repeated facial sEMG measurements of facial expressions, we recommend the Kuramoto scheme.
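As an illustration of the reliability statistics named in the abstract, the sketch below computes intraclass correlation coefficients from repeated trials with the pingouin package. The long-format layout, the toy values, and the choice of ICC form are assumptions; the abstract does not specify them.

```python
import pandas as pd
import pingouin as pg  # provides intraclass_corr()

# Hypothetical long-format data: one row per participant x trial, holding the
# sEMG amplitude for a single electrode position and facial expression task.
df = pd.DataFrame({
    "participant": ["P01", "P01", "P01", "P02", "P02", "P02", "P03", "P03", "P03"],
    "trial":       ["t1", "t2", "t3"] * 3,
    "amplitude":   [42.1, 44.0, 41.5, 30.2, 29.8, 31.0, 55.3, 52.9, 56.1],
})

# Intra-session reliability across the three trials of one session.
icc = pg.intraclass_corr(data=df, targets="participant",
                         raters="trial", ratings="amplitude")
# The table reports several ICC forms (ICC1..ICC3k); which form the study
# used is not stated in the abstract, so the selection here is an assumption.
print(icc[["Type", "ICC", "CI95%"]])
```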
Article
Objectives Surface electromyography (sEMG) is a standard tool in clinical routine and in clinical or psychosocial experiments, including speech research and orthodontics, to measure the activity of selected facial muscles and to objectify facial movements during specific exercises or experiments with emotional expressions. Such muscle-specific approaches neglect that the facial muscles act more as an interconnected network than as single muscles for specific movements. What is missing is an optimal sEMG setting allowing a synchronous measurement of the activity of all facial muscles as a whole. Methods A total of 36 healthy adult participants (53% women, 18–67 years) were included. Electromyograms were recorded from both sides of the face using an arrangement of electrodes oriented by the underlying topography of the facial muscles (Fridlund scheme) and simultaneously by a geometric and symmetrical arrangement on the face (Kuramoto scheme). The participants performed a standard set of different facial movement tasks. Linear mixed-effects models with adjustment for multiple comparisons were used to evaluate differences between the facial movement tasks, separately for both applied schemes. Data analysis utilized sEMG amplitudes as well as their maximum-normalized values to account for amplitude differences between the different facial movements. Results Surface electromyography activation characteristics showed systematic regional distribution patterns of facial muscle activation for both schemes with very low interindividual variability. The statistical discrimination between the different sEMG patterns was good for both schemes (significant comparisons for sEMG amplitudes: 87.3% for both schemes; for normalized values: 90.9% for the Fridlund scheme and 94.5% for the Kuramoto scheme), with the Kuramoto scheme performing considerably better. Conclusion Facial movement tasks evoke specific patterns in the complex network of facial muscles rather than activating single muscles. A geometric and symmetrical sEMG recording from the entire face seems to allow a more specific detection of facial muscle activity patterns during facial movement tasks. Such sEMG patterns should be explored in further clinical and psychological experiments.
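For readers unfamiliar with the statistical approach, a minimal Python sketch of a linear mixed-effects model on sEMG amplitudes follows, using statsmodels. The file name, column names, and the random-intercept specification are assumptions; the abstract does not detail the exact model structure.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x task x electrode,
# mirroring the abstract's description (sEMG amplitude as outcome measure).
df = pd.read_csv("facial_semg_long.csv")  # assumed columns: participant, task, electrode, amplitude

# Random intercept per participant; facial movement task as fixed effect.
# The fixed/random structure used in the paper is not given in the abstract,
# so this specification is only illustrative.
model = smf.mixedlm("amplitude ~ C(task)", data=df, groups=df["participant"])
result = model.fit()
print(result.summary())

# Maximum-normalized amplitudes (per participant and electrode), as mentioned
# in the abstract, could be derived like this before refitting the model:
df["amplitude_norm"] = df.groupby(["participant", "electrode"])["amplitude"] \
                         .transform(lambda x: x / x.max())
```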
Chapter
Peripheral facial palsy is an illness in which a one-sided, ipsilateral paralysis of the facial muscles occurs due to nerve damage. Medical experts use visual severity grading methods to estimate this damage. Our algorithm-based method provides an objective grading using 3D point clouds. We extract facial radial curves from static 3D recordings to measure volumetric differences between both sides of the face. We analyze five patients with chronic complete peripheral facial palsy to evaluate our method by comparing changes over several recording sessions. We show that our proposed method allows an objective assessment of facial palsy.
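A rough illustration of quantifying facial asymmetry from a 3D point cloud is sketched below: the cloud is mirrored across the mid-sagittal plane and nearest-neighbour distances serve as a simplified proxy, not the radial-curve volume measure the chapter actually describes. The data and alignment assumptions are entirely hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical face scan as an (N, 3) point cloud, roughly aligned so that
# x = 0 is the mid-sagittal plane (placeholder random geometry).
points = np.random.rand(10000, 3) * [0.2, 0.25, 0.15] - [0.1, 0.0, 0.0]

mirrored = points * np.array([-1.0, 1.0, 1.0])   # reflect across the sagittal plane
tree = cKDTree(points)
dist, _ = tree.query(mirrored)                    # nearest-neighbour distance per mirrored point

# A larger mean mirror distance indicates stronger left-right asymmetry; the
# authors quantify this more precisely via radial curves and volume differences.
asymmetry_proxy = dist.mean()
```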
Article
Computer vision (CV) is widely used in the investigation of facial expressions, with applications ranging from psychological evaluation to neurology, to name just two examples. CV for identifying facial expressions may suffer from several shortcomings: it provides only indirect information about muscle activation, it is insensitive to activations that do not involve visible deformations, such as jaw clenching, and it relies on high-resolution and unobstructed visuals. High-density surface electromyography (sEMG) recording with soft electrode arrays is an alternative approach that provides direct information about muscle activation, even from freely behaving humans. In this investigation, we compare CV and sEMG analysis of facial muscle activation. We used independent component analysis (ICA) and multiple linear regression (MLR) to quantify the similarity and disparity between the two approaches for posed muscle activations. The comparison reveals similarity in event detection, but discrepancies and inconsistencies in source identification. Specifically, the correspondence between sEMG and action unit (AU)-based analyses, the most widely used basis of CV muscle activation prediction, appears to vary between participants and sessions. We also present a comparison between AU and sEMG data of spontaneous smiles, highlighting the differences between the two approaches. The data presented in this paper suggest that AU-based analysis has a limited ability to reliably compare between different sessions and individuals, and highlight the advantages of high-resolution sEMG for facial expression analysis.
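To make the analysis pipeline more concrete, the following sketch pairs ICA-derived sEMG sources with action-unit traces via multiple linear regression using scikit-learn. The array shapes, component count, and placeholder data are assumptions, and the authors' actual preprocessing and metrics may differ.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LinearRegression

# Hypothetical synchronized recordings: sEMG envelopes (samples x channels) and
# CV-derived action-unit intensities (samples x AUs), already aligned in time.
semg = np.random.randn(5000, 16)           # placeholder for 16-channel facial sEMG
action_units = np.random.randn(5000, 12)   # placeholder for 12 AU intensity traces

# Unmix the sEMG into independent components (putative muscle sources).
ica = FastICA(n_components=10, random_state=0)
sources = ica.fit_transform(semg)

# Multiple linear regression: how well do the sEMG sources explain each AU?
r2_per_au = []
for k in range(action_units.shape[1]):
    reg = LinearRegression().fit(sources, action_units[:, k])
    r2_per_au.append(reg.score(sources, action_units[:, k]))
# A high R^2 would indicate correspondence between sEMG sources and that AU.
```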
Article
Complex facial muscle movements are essential for many motoric and emotional functions. The facial muscles are unique in the musculoskeletal system because they are interwoven, so that the contraction of one muscle influences the contractility characteristics of other mimic muscles. The facial muscles therefore act more as a whole than as single muscles. The standard for detecting these complex interactions in clinical and psychosocial experiments is surface electromyography (sEMG). What is missing is an atlas showing which facial muscles are activated during specific tasks. Based on high-resolution sEMG data of 10 facial muscles of both sides of the face, recorded simultaneously during 29 different facial muscle tasks, an atlas visualizing voluntary facial muscle activation was developed. For each task, the mean normalized EMG amplitudes of the examined facial muscles were visualized by colors, spread between the lowest and highest EMG activity. Gray shades represent no to very low EMG activity, light and dark brown shades represent low to medium EMG activity, and red shades represent high to very high EMG activity relative to each task. The present atlas should become a helpful tool for designing sEMG experiments, not only for clinical trials and psychological experiments, but also for speech therapy and orofacial rehabilitation studies.
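A minimal sketch of the color-coding idea follows: per-task mean amplitudes are normalized between their lowest and highest values and mapped onto a sequential colormap. The muscle names, amplitude values, and the 'copper' palette are illustrative stand-ins for the atlas's actual gray-brown-red scheme.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize

# Hypothetical mean EMG amplitudes (µV) of 10 facial muscles for one task.
muscles = ["Frontalis", "Corrugator", "Orbicularis oculi", "Nasalis",
           "Zygomaticus", "Orbicularis oris", "Risorius",
           "Depressor anguli oris", "Mentalis", "Masseter"]
mean_amp = np.array([5.0, 3.2, 12.4, 1.1, 20.5, 8.7, 6.3, 4.4, 2.0, 15.8])

# Spread the colors between the lowest and highest activity of this task,
# as described in the abstract, using a stand-in sequential colormap.
norm = Normalize(vmin=mean_amp.min(), vmax=mean_amp.max())
colors = plt.cm.copper(norm(mean_amp))

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(range(len(muscles)), mean_amp, color=colors)
ax.set_xticks(range(len(muscles)))
ax.set_xticklabels(muscles, rotation=60, ha="right")
ax.set_ylabel("Mean amplitude (µV)")
plt.tight_layout()
```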
Article
Emotional facial expressions can inform researchers about an individual's emotional state. Recent technological advances open up new avenues to automatic Facial Expression Recognition (FER). Based on machine learning, such technology can tremendously increase the amount of processed data. FER is now easily accessible and has been validated for the classification of standardized prototypical facial expressions. However, its applicability to more naturalistic facial expressions remains uncertain. Hence, we test and compare the performance of three different FER systems (Azure Face API, Microsoft; Face++, Megvii Technology; FaceReader, Noldus Information Technology) with human emotion recognition (A) for standardized posed facial expressions (from prototypical inventories) and (B) for non-standardized acted facial expressions (extracted from emotional movie scenes). For the standardized images, all three systems classify basic emotions accurately (FaceReader is most accurate) and are mostly on par with human raters. For the non-standardized stimuli, performance drops remarkably for all three systems, but Azure still performs similarly to humans. In addition, all systems and humans alike tend to misclassify some of the non-standardized emotional facial expressions as neutral. In sum, automated facial expression recognition can be an attractive alternative to human emotion recognition for standardized and non-standardized emotional facial expressions. However, we also found limitations in accuracy for specific facial expressions; clearly there is a need for thorough empirical evaluation to guide future developments in computer vision of emotional facial expressions.
Article
Objective: To investigate the electrophysiological basis of the pyridostigmine enhancement of endurance performance documented earlier in patients with spinal muscular atrophy (SMA). Methods: We recorded surface electromyography (sEMG) in four upper-extremity muscles of 31 patients with SMA types 2 and 3 performing endurance shuttle tests (ESTs) and maximal voluntary contraction (MVC) measurements during a randomized, double-blind, cross-over, phase II trial. Linear mixed-effects models (LMMs) were used to assess the effect of pyridostigmine on (i) time courses of median frequencies and of root mean square (RMS) amplitudes of sEMG signals and (ii) maximal RMS amplitudes during MVC measurements. These sEMG changes over time indicate levels of peripheral muscle fatigue and recruitment of new motor units, respectively. Results: In comparison to a placebo, patients with SMA using pyridostigmine had fourfold smaller decreases in frequency and twofold smaller increases in amplitudes of sEMG signals in some muscles, recorded during ESTs (p < 0.05). We found no effect of pyridostigmine on MVC RMS amplitudes. Conclusions: sEMG parameters indicate enhanced low-threshold (LT) motor unit (MU) function in upper-extremity muscles of patients with SMA treated with pyridostigmine. This may underlie their improved endurance. Significance: Our results suggest that enhancing LT MU function may constitute a therapeutic strategy to reduce fatigability in patients with SMA.
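For context, the sketch below computes the two sEMG time-course parameters mentioned in the abstract, windowed RMS amplitude and median frequency, from a raw signal. The window length, sampling rate, and synthetic data are assumptions, not the study's actual processing settings.

```python
import numpy as np
from scipy.signal import welch

def windowed_semg_features(emg, fs, win_s=1.0):
    """Compute RMS amplitude and median frequency per non-overlapping window.
    A generic illustration of the parameters described in the abstract; the
    study's exact window lengths and filtering are not specified there."""
    win = int(win_s * fs)
    rms, mdf = [], []
    for start in range(0, len(emg) - win + 1, win):
        seg = emg[start:start + win]
        rms.append(np.sqrt(np.mean(seg ** 2)))
        freqs, psd = welch(seg, fs=fs, nperseg=min(256, win))
        cum = np.cumsum(psd)
        mdf.append(freqs[np.searchsorted(cum, cum[-1] / 2)])
    return np.array(rms), np.array(mdf)

# Example with a synthetic 2 kHz signal; decreasing median frequency and
# increasing RMS over time are the classic signs of peripheral fatigue.
fs = 2000
emg = np.random.randn(fs * 60)  # placeholder for a 60 s recording
rms, mdf = windowed_semg_features(emg, fs)
```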
Article
Objective: To evaluate the repeatability of the surface EMG variables of myoelectric signals from the masseter and temporalis muscles according to three methods to induce maximum voluntary contraction (MVC) in healthy adults. Methods: Thirty healthy young subjects performed the following three MVC tasks three times each in three sessions on the same day without replacing surface electrodes: clenching the teeth (MVCIC) and biting down on two cotton rolls bilaterally with the posterior teeth (MVCBP) or first molars (MVCB6). Results: The intra-class correlation coefficient (ICC) of the amplitudes during the three MVC tasks ranged from 65 to 79%. The ICCs of the spectral variables ranged from 78 to 86%. The ICCs of the asymmetry index of the masseter ranged from 77 to 86%, and those of the activity index ranged from 68 to 90%. Conclusion: The surface EMG measurements according to the three MVC methods exhibited good to excellent reproducibility.