published: 26 July 2012
doi: 10.3389/fnhum.2012.00214
Facial mimicry and the mirror neuron system:
simultaneous acquisition of facial electromyography
and functional magnetic resonance imaging
Katja U. Likowski, Andreas Mühlberger, Antje B. M. Gerdes, Matthias J. Wieser, Paul Pauli and Peter Weyers*
Department of Psychology, University of Würzburg, Germany
Edited by:
John J. Foxe, Albert Einstein College of Medicine, USA

Reviewed by:
Matthew R. Longo, University of London, UK
Yin Wang, The University of Nottingham, UK

*Correspondence:
Peter Weyers, Department of Psychology, Julius-Maximilians-University Würzburg, Marcusstr. 9-11, 97070 Würzburg, Germany.
e-mail: weyers@psychologie.

Present address:
Department of Psychology, University of Mannheim, Mannheim, Germany.
Numerous studies have shown that humans automatically react with congruent facial
reactions, i.e., facial mimicry, when seeing a vis-à-vis’ facial expressions. The current
experiment is the first investigating the neuronal structures responsible for differences
in the occurrence of such facial mimicry reactions by simultaneously measuring BOLD
and facial EMG in an MRI scanner. Therefore, 20 female students viewed emotional
facial expressions (happy, sad, and angry) of male and female avatar characters. During
picture presentation, the BOLD signal as well as M. zygomaticus major and M. corrugator
supercilii activity were recorded simultaneously. Results show prototypical patterns of
facial mimicry after correction for MR-related artifacts: enhanced M. zygomaticus major
activity in response to happy and enhanced M. corrugator supercilii activity in response
to sad and angry expressions. Regression analyses show that these congruent facial
reactions correlate significantly with activations in the IFG, SMA, and cerebellum. Stronger
zygomaticus reactions to happy faces were further associated with increased activities in
the caudate, MTG, and PCC. Corrugator reactions to angry expressions were further
correlated with the hippocampus, insula, and STS. Results are discussed in relation to
core and extended models of the mirror neuron system (MNS).
Keywords: mimicry, EMG, fMRI, mirror neuron system
Humans tend to react with congruent facial expressions when
looking at an emotional face (Dimberg, 1982). They react, for
example, with enhanced activity of the M. zygomaticus major (the
muscle responsible for smiling) when seeing a happy expression
of a vis-à-vis’ person or with an increase in M. corrugator supercilii (the muscle involved in frowning) activity in response to a
sad face. Such facial mimicry reactions occur spontaneously and
rapidly, within 300–400 ms (Dimberg and Thunberg, 1998),
and even in minimal social contexts (Dimberg, 1982; Likowski
et al., 2008). They appear to be automatic and unconscious,
because they occur without awareness or conscious control and
cannot be completely suppressed (Dimberg and Lundqvist, 1990;
Dimberg et al., 2002); they even occur in response to subliminally
presented emotional expressions (Dimberg et al., 2000). However,
to date there is no experimental evidence identifying the neuronal structures involved in the occurrence of such automatic, spontaneous facial mimicry reactions. The present study is a first approach to filling this gap by simultaneously acquiring facial electromyography (EMG) and functional magnetic resonance imaging (fMRI).
According to current literature, the neuronal base of (facial)
mimicry is presumably the “mirror neuron system” (MNS)
(Blakemore and Frith, 2005; Iacoboni and Dapretto, 2006;
Niedenthal, 2007). The discovery of mirror neurons dates from
studies in the macaque where Giacomo Rizzolatti and colleagues
came across a system of cortical neurons in area F5 (premotor
cortex in the macaque) and PF [part of the inferior parietal lobule
(IPL)] that responded not only when the monkey performed an
action, but also when the monkey watched the experimenter per-
forming the same action (di Pellegrino et al., 1992; Gallese et al.,
2002). They named this system of neurons the MNS because
it appeared that the observed action was reflected or internally
simulated within the monkey’s own motor system.
There is now evidence that an equivalent system exists in
humans. According to a review by Iacoboni and Dapretto (2006),
the human MNS should comprise the ventral premotor cortex
(vPMC, i.e., the human homolog of the monkey F5 region),
the inferior frontal gyrus (IFG) and the IPL. These regions correspond well to the macaque’s MNS. Furthermore, mirror neuron activity has
been detected in the superior temporal sulcus (STS) (Iacoboni
and Dapretto, 2006) which is seen as the main visual input to
the human MNS. However, recent studies reveal a slightly more
complex picture of the brain areas that show shared activity dur-
ing observation and execution of the same behavior. In an fMRI
study with unsmoothed single subject data, Gazzola and Keysers
(2009) examined shared voxels that show increased BOLD activ-
ity both during observing and executing an action and found a
wide range of areas containing such shared voxels. Those were
classical mirroring regions like the vPMC (BA6/44) and the IPL,
but also areas beside the MNS like the dorsal premotor cortex
(dPMC), supplementary motor area (SMA), middle cingulate
cortex (MCC), somatosensory cortex (BA2/3), superior parietal
lobule (SPL), middle temporal gyrus (MTG) and the cerebellum.
Frontiers in Human Neuroscience July 2012 | Volume 6 | Article 214 |1
Additionally, Mukamel et al. (2010) reported mirror activities in
further brain regions, namely the hippocampus and the parahip-
pocampal gyrus. Yet, Molenberghs et al. (2012) concluded in their
broad review of 125 human MNS studies that consistent activa-
tions could be found in the classical regions like the IFG, IPL,
SPL, and vPMC. They termed these regions the “core network”.
However, they also identified activations in other areas depend-
ing on the respective modality of the task and stimuli, e.g., for
emotional facial expressions enhanced activity in regions known
to be involved in emotional processing like the amygdala, insula,
and cingulate gyrus.
There are several studies supporting the assumption that the
human MNS is involved in facial mimicry. Accordingly, there
is evidence for activation in Brodmann area 44 when partici-
pants deliberately imitate other people’s facial expressions (Carr
et al., 2003). van der Gaag et al. (2007) further showed common activations in the IFG and IPL (both termed “classical”
MNS sites) as well as the STS, MTG, insula, amygdala, SMA,
and somatosensory cortex (called the “extended” MNS) during
both the observation and execution (i.e., conscious imitation) of
emotional facial expressions. Further studies demonstrated similar
relationships between the conscious imitation of facial expres-
sions and activity of parts of the MNS (Leslie et al., 2004; Dapretto
et al., 2006; Lee et al., 2006).
Whereas all these studies examined conscious imitation of
facial expressions, other authors are interested in the relation-
ship between the MNS and unconscious facial mimicry. In a TMS
study, Enticott et al. (2008) showed that accuracy in facial
emotion recognition was significantly associated with increased
motor-evoked potentials during perception of the respective facial
expressions. Because facial mimicry is supposed to be related to
emotion recognition (Niedenthal et al., 2001; Oberman et al.,
2007) the authors interpret this enhanced activation of the MNS
as connected to an internal simulation of the observed expres-
sion comparable to facial mimicry. On the other hand, Jabbi and
Keysers (2008) interpret similar results in a different fashion. They
found a causal connection of a prominent part of the MNS, i.e.,
the IFG, with a region encompassing the anterior insula and the
frontal operculum which is known to be responsible for the expe-
rience and sharing of emotions like disgust. The authors conclude
that this finding reflects a fast and covert motor simulation of
perceived facial expressions by the MNS and that this covert sim-
ulation might be sufficient to trigger emotional sharing without
the need for overt facial mimicry.
These results, however, provide only indirect evidence for or
against a relation between the MNS and unconscious mimicry.
So far, there is only one study directly examining the neuronal
correlates of unconscious and spontaneous facial reactions to
facial expressions. Studies examining conscious mimicry usually
instruct their participants to imitate a seen facial expression delib-
erately and compare reactions in that condition with those from
a passive viewing condition. However, in such a passive viewing
condition participants should also show mimicry, i.e., unconscious facial mimicry. Hence, Schilbach et al. (2008) assessed
spontaneous facial muscular reactions via EMG and blood oxygen
level dependent (BOLD) responses to dynamic facial expres-
sions of virtual characters via fMRI in two separate experiments.
Participants in both of their experiments were instructed to just
passively view the presented expressions. They found enhanced
activity of the precentral cortex, precuneus, hippocampus, and
cingulate gyrus in the time window in which non-conscious facial
mimicry occurred. Unfortunately, Schilbach et al. (2008) did not assess muscular activity and BOLD response in the same participants and at the same point in time. Thus, there is up to now no conclusive empirical evidence about the neuronal structures involved in automatic, spontaneous mimicry.
Therefore, the present study is a first approach to investi-
gate whether the MNS is indeed responsible for differences in
unconscious and spontaneous facial mimicry reactions. Following
the studies by Gazzola and Keysers (2009), Molenberghs et al.
(2012), Mukamel et al. (2010), Schilbach et al. (2008), and van
der Gaag et al. (2007), we constructed a single MNS region of interest (ROI) for the current experiment consisting of the following parts of the MNS: IFG, vPMC, IPL, SMA, cingulate cortex,
SPL, MTG, cerebellum, somatosensory cortex, STS, hippocam-
pus, parahippocampal gyrus, precentral gyrus, precuneus, insula,
amygdala, caudate, and putamen. Activity in this region will be
related to participants’ congruent facial muscular reactions to
examine which parts of the MNS show significant co-activations
with the respective facial mimicry.
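For illustration, combining labeled atlas regions into one binary ROI mask (as done here with pre-defined masks, see the small volume correction below) reduces to a simple label lookup. The following sketch is a generic stand-in, not the WFU PickAtlas implementation, and the region codes in the toy example are invented:

```python
import numpy as np

def build_roi_mask(atlas_labels, region_ids):
    """Combine several labeled atlas regions into one binary ROI mask.

    atlas_labels : 3D integer array with one anatomical label per voxel
                   (AAL-style region codes).
    region_ids   : iterable of label values that belong to the ROI.
    """
    return np.isin(atlas_labels, list(region_ids))

# Toy example: a 4 x 4 x 4 "atlas" with three regions (codes 1, 2, 3);
# the combined ROI covers regions 1 and 3 (codes invented for the demo).
atlas = np.zeros((4, 4, 4), dtype=int)
atlas[0] = 1
atlas[1] = 2
atlas[2] = 3
mask = build_roi_mask(atlas, {1, 3})
print(mask.sum())  # 32 voxels: two 4 x 4 slabs of 16 voxels each
```

In practice each of the 18 listed regions contributes one set of atlas codes, and the union of the resulting masks forms the single MNS ROI.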
This question shall be answered via the simultaneous measurement of facial muscular activity via EMG and the BOLD response via fMRI. To our knowledge, until now no study with
such a design has been published. In a first approach, Heller
et al. (2011) measured M. corrugator supercilii activity in response
to affective pictures between interleaved scan acquisitions; that
means that they analyzed muscle activity only for time periods
in which no echoplanar imaging (EPI) sequences were collected
because EPI collection produces intense electromagnetic noise.
However, with this method it is only possible to measure the
neuronal activity before and after the EMG recordings but not
in exactly the same time window in which the facial reactions
occur. Furthermore, with such a sequential recording BOLD and
EMG are measured in two different contexts. Especially the noise
that differs between EPI and non-EPI sequences but also other
influences like repeated presentations or the quality of the pre-
ceding stimulus are significant differences between the BOLD
and the EMG recording phases that hamper a valid detection of
connections between brain activations and muscular reactions.
Therefore, in the present study we will measure muscular activity and BOLD simultaneously, i.e., during the collection of EPI sequences.

Participants

Twenty-three right-handed female participants were investigated.
Only female subjects were tested because women show more pro-
nounced, but not qualitatively different mimicry effects than male
subjects (Dimberg and Lundqvist, 1990). Informed consent was
obtained from all subjects prior to participation and is archived
by the authors. All participants received a 12 € allowance. Three
participants had to be excluded from the analysis due to incom-
plete recordings or insufficient quality of the MRI data. Therefore,
analyses were performed for 20 participants, aged between 20 and
30 years (M = 23.50, SD = 3.05). The experimental protocol was
approved by the institution’s ethics committee and conforms to
the Declaration of Helsinki.
Facial stimuli
As facial stimuli, emotional facial expressions of avatars were used.
Avatars (i.e., virtual persons or graphic substitutes for real per-
sons) provide a useful tool for research in emotion and social
interactions (Blascovich et al., 2002), because they allow better
control over the facial expression and its dynamics, e.g., its inten-
sity and temporal course, than pictures of humans (Krumhuber
and Kappas, 2005). Furthermore, due to the possibility to use the
same prototypical faces for all types of characters there is no need
to control for differences in liking and attractiveness between
the conditions and a reduced amount of error variance can be
assumed. How successfully avatars can be used as a research tool
for studying interactions has been demonstrated by Bailenson
and Yee (2005). Subjects rated a digital chameleon, i.e., an avatar
which mimics behavior, more favorably even though they were
not aware of the mimicry. Thus, an avatar’s mimicry created liking
comparable to real individuals (Chartrand and Bargh, 1999).
Stimuli were created with Poser software (Curious Labs, Santa
Cruz, CA) and the software extension offered by Spencer-Smith
et al. (2001) to manipulate action units separately according to the
facial action coding system (Ekman and Friesen, 1978). Notably,
Spencer-Smith et al. (2001) showed that ratings of quality and
intensity of the avatar emotional expressions were comparable
to those of human expressions from the Pictures of Facial Affect
(Ekman and Friesen, 1976).
The stimuli were presented on a light gray background
via MRI-compatible goggles (VisuaStim; Magnetic Resonance
Technologies, Northridge, CA). Four facial expressions were cre-
ated from a prototypic female and a prototypic male face: a
neutral, a happy, a sad and an angry expression (for details see
Spencer-Smith et al., 2001). Each male and female emotional
expression was then combined with three types of hairstyles
(blond, brown, and black hair), resulting in twenty-four stimuli
(for examples see Figure 1).
Facial EMG
Activity of the M. zygomaticus major (the muscle involved in
smiling) and the M. corrugator supercilii (the muscle respon-
sible for frowning) was recorded on the left side of the face
using bipolar placements of MRI-compatible electrodes (MES
Medizinelektronik GmbH, Munich, Germany) according to the
guidelines established by Fridlund and Cacioppo (1986). In order
to cover the recording of muscular activity participants were
told that skin conductance would be recorded (see e.g., Dimberg
et al., 2000). The EMG raw signal was measured with an MRI-
compatible BrainAmp ExG MR amplifier (Brain Products Inc.,
Gilching, Germany), digitalized by a 16-bit analogue-to-digital
converter, and stored on a personal computer with a sampling
frequency of 5000 Hz. The EMG data were post-processed offline
using Vision Analyzer software (Version 2.01, Brain Products
Inc., Gilching, Germany). EMG data recorded in the MR scan-
ner is contaminated with scan-pulse artifacts, originating from
the switching of the radio-frequency gradients. To remove these
artifacts the software applies a modified version of the aver-
age artifact subtraction method (AAS) described by Allen et al.
(2000). This MRI-artifact correction has originally been devel-
oped for combined EEG/fMRI recordings (for applications see
e.g., Jann et al., 2008; Musso et al., 2010) and can now also be applied to EMG data. Thereby, a gradient artifact template
is subtracted from the EMG using a baseline corrected average
of all MR-intervals. Data were then down-sampled to 1000 Hz.
Following gradient artifact correction raw data were rectified and
filtered with a 30 Hz low cutoff filter, a 500 Hz high cutoff fil-
ter, a 50 Hz notch filter, and a 125 ms moving average filter. The
EMG scores are expressed as change in activity from the pre-
stimulus level, defined as the mean activity during the last second
before stimulus onset. Trials with an EMG activity above 8 μV
during the baseline period and above 30 μV during the stimulus presentation were excluded (less than 5%). Before statistical analysis, EMG data were collapsed over the 12 trials with the same
emotional expression, and reactions were averaged over the 4 s
of stimulus exposure. An example snapshot of the raw and the
filtered zygomaticus and corrugator EMG data can be seen in
Figure 2.
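The EMG processing chain described above can be sketched in simplified form. This is an illustration of average artifact subtraction and the subsequent filtering and scoring steps, not the Vision Analyzer implementation; the 500 Hz high cutoff is omitted here because it coincides with the Nyquist frequency after down-sampling to 1000 Hz:

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate, iirnotch

FS_RAW, FS_OUT = 5000, 1000  # Hz, as in the recording setup

def aas_correct(emg, interval):
    """Average artifact subtraction (after Allen et al., 2000, simplified):
    subtract a baseline-corrected mean gradient-artifact template,
    averaged over all MR intervals of `interval` samples."""
    n = len(emg) // interval
    epochs = emg[:n * interval].reshape(n, interval).astype(float)
    template = (epochs - epochs.mean(axis=1, keepdims=True)).mean(axis=0)
    return (epochs - template).ravel()

def filter_emg(emg):
    """Down-sample, rectify, and filter as described in the text."""
    emg = decimate(emg, FS_RAW // FS_OUT)             # 5000 -> 1000 Hz
    emg = np.abs(emg)                                 # rectification
    b, a = butter(4, 30, btype="highpass", fs=FS_OUT) # 30 Hz low cutoff
    emg = filtfilt(b, a, emg)
    b, a = iirnotch(50, 30, fs=FS_OUT)                # 50 Hz notch
    emg = filtfilt(b, a, emg)
    win = int(0.125 * FS_OUT)                         # 125 ms moving average
    return np.convolve(emg, np.ones(win) / win, mode="same")

def change_score(emg, onset, fs=FS_OUT, baseline=1.0, duration=4.0):
    """EMG change from the 1 s pre-stimulus baseline, averaged over
    the 4 s of stimulus exposure."""
    base = emg[int(onset - baseline * fs):onset].mean()
    return emg[onset:int(onset + duration * fs)].mean() - base
```

The resulting change scores correspond to the trial-wise EMG measures that were then collapsed over the 12 trials per expression.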
FIGURE 1 | Examples of avatars with different emotional facial expressions (happy, neutral, sad, angry).
FIGURE 2 | Representative snapshot of raw zygomaticus and corrugator EMG data acquired simultaneously with fMRI. (A) Top panel is raw, unfiltered EMG data. (B) Bottom panel shows filtered EMG data.
Image acquisition followed the standard procedure in our lab
(Gerdes et al., 2010; Mühlberger et al., 2011): Functional and
structural MRI was performed with a Siemens 1.5 T MRI whole
body scanner (SIEMENS Avanto) using a standard 12-channel
head coil and an integrated head holder to reduce head movement. Functional images were obtained using a T2*-weighted single-shot gradient EPI sequence (TR: 2500 ms, TE: 30 ms, 90° flip angle, FOV: 200 mm, matrix: 64 × 64, voxel size: 3.1 × 3.1 × 5 mm³). Each EPI volume contained 25 axial slices (thickness 5 mm, 1 mm gap), acquired in interleaved order, covering the whole brain. The orientation of the axial slices was parallel to the AC–PC line. Each session contained 475 functional
images. The first eight volumes of each session were discarded
to allow for T1 equilibration. In addition, a high-resolution T1-weighted magnetization-prepared rapid gradient-echo imaging (MP-RAGE) 3D MRI sequence was obtained from each subject (TR: 2250 ms, TE: 3.93 ms, 8° flip angle, FOV: 256 mm, matrix: 256 × 256, voxel size: 1 × 1 × 1 mm³).
Data were analyzed by using Statistical Parametric Mapping soft-
ware (SPM8; Wellcome Department of Imaging Neuroscience,
London, UK) implemented in Matlab R2010a (Mathworks Inc.,
Sherborn, MA, USA). Functional images were slice-time cor-
rected and realignment (b-spline interpolation) was performed
(Ashburner and Friston, 2003). To allow localization of functional
activation on the subjects’ structural MRIs, T1-scans were coreg-
istered to each subject’s mean image of the realigned functional
images. Coregistered T1 images were then segmented (Ashburner
and Friston, 2005) and in the next step, EPI images were spatially
normalized into the standard Montreal Neurological Institute
(MNI) space using the normalization parameters obtained from
the segmentation procedure (voxel size 2 × 2 × 2 mm³) and spatially smoothed with an 8 mm full-width-at-half-maximum (FWHM) Gaussian kernel. Each experimental condition (happy,
neutral, sad, and angry) and the fixation periods were modeled
by a delta function at stimulus onset convolved with a canonical
hemodynamic response function. Parameter estimates were sub-
sequently calculated for each voxel using weighted least squares
to provide maximum likelihood estimates based on the non-
sphericity assumption of the data in order to get identical and
independently distributed error terms. Realignment parameters
for each session were included to account for residual movement
related variance. Parameter estimation was corrected for temporal
autocorrelations using a first-order autoregressive model.
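The first-level model described above can be illustrated in simplified form: a delta function at the stimulus onsets is convolved with a canonical double-gamma HRF (the common SPM default parameters are assumed here), and parameters are then estimated per voxel, with ordinary least squares standing in for SPM8's whitened weighted least squares:

```python
import numpy as np
from scipy.stats import gamma

TR = 2.5        # s, repetition time of the EPI sequence
N_SCANS = 475   # functional volumes per session

def canonical_hrf(tr, length=32.0):
    """Double-gamma HRF (common SPM default parameters), sampled at the TR."""
    t = np.arange(0.0, length, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def design_column(onsets_s, tr=TR, n_scans=N_SCANS):
    """Delta function at the stimulus onsets (in s) convolved with the HRF."""
    sticks = np.zeros(n_scans)
    sticks[(np.asarray(onsets_s) / tr).astype(int)] = 1.0
    return np.convolve(sticks, canonical_hrf(tr))[:n_scans]

def fit_glm(y, X):
    """Ordinary least squares parameter estimates for one voxel time course y."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

In the full model, one such column per condition (plus the fixation periods and the six realignment parameters) enters the design matrix, and contrasts such as “happy > fixation cross” are formed as differences of the estimated betas.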
For each subject, the following t-contrasts were computed: “happy > fixation cross”, “sad > fixation cross”, “angry > fixation cross”, “happy + sad + angry > fixation cross”, “happy > neutral”, “sad > neutral” and “angry > neutral”. We did not analyze
the contrast “neutral > fixation cross” because no facial mimicry
reactions are expected in response to neutral faces and thus no
neural correlates of facial mimicry can be computed. For a ran-
dom effect analysis, the individual contrast images (first-level)
were used in a second-level analysis. FMRI data were analyzed
specifically for the ROI (MNS-ROI, see above). To investigate the
brain activity in relation to the facial muscular reactions, we per-
formed six regression analyses with estimated BOLD responses
of individual first-level contrast images (“happy > fixation cross”, “happy > neutral”, “sad > fixation cross”, “sad > neutral”, “angry > fixation cross”, “angry > neutral”) as dependent variable and
the according congruent facial reactions (zygomaticus to happy
expressions, corrugator to sad expressions, corrugator to angry
expressions) as predictors.
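Conceptually, each of these regression analyses is a per-voxel linear model across subjects, with the first-level contrast estimates as dependent variable and the congruent EMG reaction as predictor. A minimal sketch (array names hypothetical):

```python
import numpy as np

def voxelwise_regression(contrast_values, emg_scores):
    """t-values of the EMG slope from a per-voxel linear regression.

    contrast_values : (n_subjects, n_voxels) first-level contrast estimates
    emg_scores      : (n_subjects,) congruent facial reactions, e.g.,
                      zygomaticus change scores to happy expressions
    """
    n = len(emg_scores)
    X = np.column_stack([np.ones(n), emg_scores])  # intercept + EMG predictor
    beta, ss_res, *_ = np.linalg.lstsq(X, contrast_values, rcond=None)
    sigma2 = ss_res / (n - 2)                      # residual variance per voxel
    slope_var = np.linalg.inv(X.T @ X)[1, 1]       # design-dependent factor
    return beta[1] / np.sqrt(sigma2 * slope_var)
```

Voxels within the MNS ROI whose t-value survives the corrected threshold are then reported as co-activations with facial mimicry.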
The WFU Pickatlas software (Version 2.4, Wake Forest
University, School of Medicine, NC) was used to conduct the
small volume correction with pre-defined masks in MNI-space
(Tzourio-Mazoyer et al., 2002; Maldjian et al., 2003, 2004). For
the ROI analysis, alpha was set to p = 0.05 on the voxel level, corrected for multiple comparisons (family-wise error, FWE), with meaningful clusters exceeding 5 significant voxels.
Procedure

After arriving at the laboratory, participants were informed about
the procedure of the experiment and were asked to give informed
consent. They were told that the experiment was designed to
study the avatars’ suitability for a future computer game to cover
the true purpose of the experiment in order to avoid deliberate
manipulation of the facial reactions. The EMG electrodes were
then attached and participants were placed in the MRI scanner.
Following this, the functional MRI session started. Each of the four
expressions was repeated 24 times, i.e., a total of 96 facial stim-
uli were presented in a randomized order. Faces were displayed
for 4000 ms after a fixation-cross had been presented for 2000 ms
to ensure that participants were focusing on the center of the
screen. The inter-trial interval varied randomly between 8750 and
11,250 ms. Participants were instructed to simply view the pic-
tures without any further task. After the functional MRI the struc-
tural MRI (MP-RAGE) was recorded. Then, participants were
taken out of the scanner and electrodes were detached. Finally
participants completed a questionnaire regarding demographic
data, were debriefed, paid and thanked.
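The trial structure described above can be sketched as a simple schedule generator; this is a hypothetical reconstruction for illustration, not the original presentation script:

```python
import numpy as np

def build_trial_schedule(seed=0):
    """Randomized trial list: 4 expressions x 24 repetitions; each trial is
    a 2000 ms fixation cross, a 4000 ms face, and a jittered inter-trial
    interval drawn uniformly between 8750 and 11,250 ms."""
    rng = np.random.default_rng(seed)
    trials = np.repeat(["happy", "neutral", "sad", "angry"], 24)
    rng.shuffle(trials)                                   # randomized order
    itis = rng.uniform(8750.0, 11250.0, size=trials.size) # ms
    return [{"expression": str(expr), "fixation_ms": 2000,
             "face_ms": 4000, "iti_ms": float(round(iti))}
            for expr, iti in zip(trials, itis)]
```

Calling `build_trial_schedule()` yields the 96 trials in randomized order with the timing parameters given in the text.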
A repeated measures analysis of variance with the within-subject factors muscle (M. zygomaticus major vs. M. corrugator supercilii) and emotion (happy vs. neutral vs. sad vs. angry) was conducted. A main effect of emotion, F(3, 17) = 4.17, p = 0.02, η²p = 0.20, and a significant Muscle × Emotion interaction, F(3, 17) = 9.38, p < 0.01, η²p = 0.33, emerged. The main effect of muscle did not reach significance, p > 0.36. To further specify the Muscle × Emotion interaction, separate follow-up ANOVAs for the M. zygomaticus major and the M. corrugator supercilii were conducted.
M. zygomaticus major
As predicted, activity in M. zygomaticus major was larger to happy
compared to neutral, sad, and angry faces (see Figure 3). This was
verified by a significant emotion effect, [F(3, 17) = 3.91, p = 0.04, η²p = 0.176]. Following t-tests revealed a significant difference
FIGURE 3 | Mean EMG change from baseline in µV for M. zygomaticus major in response to happy, neutral and sad faces. Error bars indicate standard errors of the means.
between M. zygomaticus major reactions to happy faces (M = 0.17) as compared to neutral (M = 0.02), t(19) = 2.64, p = 0.02, sad (M = −0.02), t(19) = 2.09, p = 0.05, and angry expressions (M = 0.01), t(19) = 3.57, p < 0.01. No other significant differences were observed, all ps > 0.41.
M. corrugator supercilii
As predicted, activity in M. corrugator supercilii was larger to
sad and angry faces as compared to neutral and positive faces
(see Figure 4). This was verified by a significant emotion effect, [F(3, 17) = 7.58, p < 0.01, η²p = 0.28]. Following t-tests revealed a significant difference between M. corrugator supercilii reactions to sad faces (M = 0.32) as compared to happy (M = −0.31), t(19) = 3.12, p < 0.01, and neutral expressions (M = 0.05), t(19) = 2.56, p = 0.02. In a similar vein, reactions to angry faces (M = 0.50) differed from reactions to happy, t(19) = 2.91, p < 0.01, and neutral faces, t(19) = 2.41, p = 0.03. Furthermore, M. corrugator supercilii reactions in response to happy expressions differed from reactions to neutral faces, t(19) = 2.53, p = 0.02. Reactions to sad and angry faces did not differ.
Additionally, one-sample t-tests against zero revealed that the M. zygomaticus major reaction to happy faces was indeed an increase in activity, t(19) = 2.13, p = 0.04. Furthermore, the M. corrugator supercilii reaction to happy expressions was a significant decrease in activity, t(19) = 2.33, p = 0.03, whereas reactions to sad and angry faces both were significant activity increases, t(19) = 2.35, p = 0.03 and t(19) = 2.19, p = 0.04. Therefore, all these reactions can be seen as congruent facial reactions. All other reactions did not differ significantly from zero.

ROI analyses were performed for the contrasts comparing the
brain activation during viewing of emotional expressions with
the activation during the fixation crosses, i.e., “expression > fixation cross”. These analyses revealed for all expression contrasts (“happy > fixation cross”, “sad > fixation cross”, “angry > fixation cross”) significant activations (FWE-corrected, p < 0.05,
FIGURE 4 | Mean EMG change from baseline in µV for M. corrugator supercilii in response to happy, neutral, and sad faces. Error bars indicate standard errors of the means.
minimum cluster size of k = 5 voxels) in numerous classical
(core) as well as extended parts of the MNS. Those were IFG,
IPL, MTG, STS, precentral gyrus, cerebellum, hippocampus,
amygdala, caudate, putamen, insula, and posterior cingulate cortex (PCC). Additionally, the contrast “happy > fixation cross” revealed activations in the MCC, the parahippocampal gyrus, the precuneus and the SMA. The contrast “sad > fixation cross” revealed further significant activations in the precuneus. ROI analyses for the contrast “happy + sad + angry > fixation cross” as well as all contrasts comparing the emotional expressions with activation during the neutral expression (“happy > neutral”, “sad > neutral” and “angry > neutral”) did not reveal any significant clusters (FWE-corrected, p < 0.05, minimum cluster size of k = 5 voxels).
Regression analyses
Regression analyses with the contrasts “expression > fixation cross” as dependent and the respective congruent facial reactions, measured simultaneously via EMG, as predictor variable were computed to investigate which brain activations were related to the occurrence of facial mimicry. The corresponding ROI regression analysis with the BOLD contrast “happy > fixation cross” as dependent variable and zygomaticus reactions to happy expressions as predictor revealed significant co-activations in the caudate, cerebellum, IFG, PCC, SMA, and MTG (see Figure 5). ROI regression analysis with the BOLD contrast “sad > fixation cross” as dependent and corrugator reactions to sad expressions as predictor variable revealed no significant co-activations. ROI regression analysis with the BOLD contrast “angry > fixation cross” as dependent variable and the corrugator reactions to angry expressions as predictor variable revealed significant co-activations in the cerebellum, IFG, hippocampus, insula, SMA, and STS (see Figure 6).
Finally, the three ROI regression analyses with BOLD contrasts “emotional expression > neutral expression” (“happy > neutral”, “sad > neutral”, “angry > neutral”) as dependent and the according congruent facial reactions as predictors revealed no significant co-activations.
The present experiment is a first approach to revealing, in a direct and experimental fashion, the neuronal structures responsible for differences in automatic and spontaneous facial mimicry reactions. In a first step it was shown that a broad network of regions with mirroring properties is active during the perception of emotional facial expressions. This network included
ception of emotional facial expressions. This network included
for all expressions the IFG, IPL, MTG, STS, precentral gyrus,
cerebellum, hippocampus, amygdala, caudate, putamen, insula,
and PCC as well as for happy expressions the MCC, the parahip-
pocampal gyrus, the precuneus and the SMA, and for sad expres-
sions additionally the precuneus. These findings replicate earlier
studies showing an involvement of both classical and “extended”
mirror neuron regions in the observation and execution of (facial)
movements (e.g., van der Gaag et al., 2007; Molenberghs et al., 2012).
More importantly, in a second step we explored which of
these brain regions show a direct relation with the individual
FIGURE 5 | Statistical parametric maps for the ROI regression analyses with BOLD contrast “happy > fixation cross” as dependent variable and zygomaticus reactions to happy expressions as predictor. FWE-corrected, alpha = 0.05, k ≥ 5 voxels. Coordinates x, y, and z are given in MNI space. Color bars represent the T-values. (A) Significant co-activation in the right caudate (x = 16, y = 12, z = 16; t = 4.55; k = 6 voxels). (B) Significant co-activation in the left cerebellum (x = −14, y = −52, z = −42; t = 4.58; k = 29 voxels). (C) Significant co-activation in the right inferior frontal gyrus (x = 40, y = 38, z = 2; t = 5.82; k = 18 voxels). (D) Significant co-activation in the left posterior cingulate cortex (x = −14, y = −60, z = 14; t = 5.03; k = 6 voxels). (E) Significant co-activation in the right supplementary motor area (x = 14, y = 8, z = 70; t = 6.27; k = 6 voxels). (F) Significant co-activation in the right middle temporal cortex (x = 60, y = −58, z = 2; t = 5.44; k = 5 voxels).
strength of facial mimicry reactions by regressing the BOLD
data on the simultaneously measured facial EMG reactions.
The EMG measurement proved to deliver reliable and signifi-
cant data comparable to earlier studies on attitude effects on
facial mimicry (Likowski et al., 2008). It was found that both
zygomaticus reactions to happy expressions and corrugator reac-
tions to angry faces correlate significantly with activations in
the right IFG, right SMA, and left cerebellum. Stronger zygo-
maticus reactions to happy faces were further associated with
an increase in activity in the right caudate, the right MTG as
well as the left PCC. Corrugator reactions to angry expres-
sions were also correlated with the right hippocampus, the right
insula, and the right STS. This shows that, although a wide range of regions assumed to belong to the core and the extended MNS is active during the observation of emotional facial expressions, only a small number actually seems to be related to the observed strength of facial mimicry. The correlated regions are
on the one hand regions concerned with the perception and
execution of facial movements and their action representations.
For example, the STS codes the visual perception, the MTG is
responsible for the sensory representation (Gazzola and Keysers,
2009), the IFG is responsible for coding the goal of the action
(Gallese et al., 1996), and the SMA is concerned with the execu-
tion of the movement (Cunnington et al., 2005). On the other
hand, we also observed associations of mimicry and regions
involved in emotional processing. We found co-activations in
the insula which connects the regions for action representa-
tion with the limbic system (Carr et al., 2003) and the caudate
FIGURE 6 | Statistical parametric maps for the ROI regression analyses with the BOLD contrast “angry > fixation cross” as dependent variable and corrugator reactions to angry expressions as predictor. FWE-corrected, alpha = 0.05, k ≥ 5 voxels. Coordinates x, y, and z are given in MNI space. Color bars represent the T-values. (A) Significant co-activation in the left cerebellum (x = −10, y = −48, z = −32; t = 6.24; k = 43 voxels). (B) Significant co-activation in the right inferior frontal gyrus (x = 42, y = 40, z = 0; t = 6.25; k = 25 voxels). (C) Significant co-activation in the right hippocampus (x = 30, y = −34, z = −6; t = 5.91; k = 8 voxels). (D) Significant co-activation in the right insula (x = 42, y = 8, z = 2; t = 6.54; k = 19 voxels). (E) Significant co-activation in the right supplementary motor area (x = 14, y = 6, z = 70; t = 5.26; k = 5 voxels). (F) Significant co-activation in the right superior temporal sulcus (x = 58, y = −32, z = 12; t = 5.65; k = 35 voxels).
and the cingulate cortex, which are involved in processing positive and negative emotional content (Mobbs et al., 2003; Vogt, 2005).
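The core analysis step, regressing each voxel's BOLD contrast values on the participants' EMG mimicry scores, can be sketched as follows. This is a minimal NumPy illustration with simulated data, not the authors' SPM pipeline; the subject count matches the study, but all variable names, effect sizes, and data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 1000  # 20 participants, toy set of ROI voxels

# Predictor: one mimicry score per participant, e.g. baseline-corrected
# M. zygomaticus activity to happy faces (simulated here).
emg = rng.normal(loc=1.0, scale=0.5, size=n_subjects)

# Dependent variable: per-subject contrast values "happy > fixation cross",
# one per voxel (simulated; the first 50 voxels covary with the EMG score).
bold = rng.normal(size=(n_subjects, n_voxels))
bold[:, :50] += 0.8 * emg[:, None]

# Voxelwise simple regression: BOLD = b0 + b1 * EMG + error
X = np.column_stack([np.ones(n_subjects), emg])   # design matrix
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)   # shape (2, n_voxels)
resid = bold - X @ beta
dof = n_subjects - 2
mse = (resid ** 2).sum(axis=0) / dof
se_b1 = np.sqrt(mse * np.linalg.inv(X.T @ X)[1, 1])
t_values = beta[1] / se_b1                        # one t-statistic per voxel
```

In the actual analysis, the resulting t-map would then be thresholded (FWE-corrected, alpha = 0.05, k ≥ 5 voxels) within the reported ROIs, as in Figures 5 and 6.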
These results fit nicely with assumptions of the MNS. It is
widely assumed that the function of the MNS is to decode
and to understand other people’s actions (Carr et al., 2003;
Rizzolatti and Craighero, 2004; Iacoboni and Dapretto, 2006; but
see Decety, 2010; Hickok and Hauser, 2010 for a discussion).
Accordingly, Carr et al. (2003) suggest that the activation of areas
concerned with action representation and emotional content
helps to resonate with, simulate, and thereby recognize the emotional
expression and to empathize with the sender. This assump-
tion overlaps with theories on the purpose of facial mimicry.
According to embodiment theories congruent facial reactions
are part of the reenactment of the experience of another per-
son’s state (Niedenthal, 2007). Specifically, embodiment theories
assume that during an initial emotional experience all the sen-
sory, affective and motor neural systems are activated together.
This experience leads to interconnections between the involved
groups of neurons. Later on, when one is just thinking about the
event or perceiving a related emotional stimulus, the activated
neurons in one system spread their activity through the inter-
connections that were active during the original experience to all
the other systems. Thereby the whole original state or at least the
most salient parts of the network can be reactivated (Niedenthal,
2007; Oberman et al., 2007; Niedenthal et al., 2009). Embodiment
theories state that looking at an emotional facial expression
means reliving past experience associated with that kind of face.
Thus, perceiving an angry face can lead to tension in the muscles
used to strike, a rise in blood pressure, or the innervation of facial muscles involved in frowning (Niedenthal, 2007). Accordingly,
congruent facial reactions reflect an internal simulation of the
perceived emotional expression. The suggested purpose of such simulation is, as for mirror neurons, to understand the actor's emotion (Wallbott, 1991; Niedenthal et al., 2001; Atkinson and Adolphs, 2005).
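The interconnection-and-reactivation mechanism described above can be made concrete with a toy Hebbian (Hopfield-style) network. This is purely an illustration of the principle, not a model taken from the embodiment literature; the grouping of units into "sensory", "affective", and "motor" is invented for the example.

```python
import numpy as np

# Toy "embodied" state: sensory, affective, and motor units that were active
# together during the original emotional experience (+1 active, -1 inactive).
pattern = np.array([1, 1, -1,   1, -1, 1,   -1, 1, 1])  # sensory | affective | motor

# Hebbian learning: units that fire together become interconnected.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Later: only the sensory units are driven (e.g., seeing a related face);
# affective and motor units start in a neutral/unknown state.
cue = np.zeros(9)
cue[:3] = pattern[:3]

# Activity spreads through the learned interconnections until stable.
state = cue.copy()
for _ in range(5):
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))  # prints True
```

Driving only the sensory units lets activity spread through the Hebbian weights until the affective and motor units settle into their stored values; the full original state is completed from a partial cue, which is the reactivation the embodiment account describes.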
Contrary to expectations, no correlations of MNS activities
and facial mimicry were found in response to sad expressions.
The reason for that is unclear. We observed proper mimicry reac-
tions in the corrugator muscle, comparable to those to angry
expressions. Also the number of significant clusters and their
respective sizes were comparable for all emotional expressions.
Perhaps the low arousal of sad facial expressions (see, e.g., Russell and Bullock, 1985) compared with other negative stimuli hampered the detection of co-activations in this case. However, this is pure speculation and should be investigated in further research.
The contrasts “emotional expression > neutral expression” as
well as the regression analyses with these contrasts revealed no
significant clusters in the reported ROIs. We attribute this to the
finding that many of the regions involved in processing the emo-
tional expressions (happy, sad, angry) are also activated during
perception of the neutral expressions (as revealed by the contrast
“neutral > fixation”). Such overlapping clusters probably reflect
activations of general face processing and might be responsible
for the lower contrast effects and thereby also for lower variances
which presumably prevented our regressions from showing valid
and significant effects. One might now argue that the overlap in
activations in response to emotional as well as neutral expres-
sions suggests that we just observed general and unspecific face
processing regions. Importantly, we can show that this is not the
case. The fact that our regression results are only significant for
the congruent pairings of BOLD and muscular activation but not
for incongruent pairings (e.g., BOLD responses to happy expressions
and corrugator activity to sad expressions) clearly shows that we
observed specific relations of regions with mirror properties and
facial muscular reactions. Furthermore, we can conclude from the non-significant contrast “happy + sad + angry > fixation cross” that the effects of the three separate contrasts “happy > fixation cross”, “sad > fixation cross”, and “angry > fixation cross” appear to be rather specific regarding the locations of the relevant clusters.
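The specificity argument amounts to comparing congruent with incongruent BOLD-EMG pairings across participants. A hypothetical sketch with simulated per-subject scores (all names, effect sizes, and data are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20  # participants

# Simulated per-subject scores: the IFG response to happy faces is built to
# track zygomaticus mimicry of happy faces, while corrugator mimicry of sad
# faces is independent noise.
zyg_happy = rng.normal(size=n)
cor_sad = rng.normal(size=n)
ifg_to_happy = 0.9 * zyg_happy + rng.normal(scale=0.3, size=n)

r_congruent = np.corrcoef(ifg_to_happy, zyg_happy)[0, 1]  # matching pairing
r_incongruent = np.corrcoef(ifg_to_happy, cor_sad)[0, 1]  # mismatched pairing

# Only the congruent pairing shows a substantial correlation.
print(abs(r_congruent) > abs(r_incongruent))
```

A pattern of strong congruent but weak incongruent correlations, as in this toy case, is what rules out the "general face processing" objection.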
Taken together, the results of this experiment are the first to
show successful simultaneous recording of facial EMG and func-
tional MRI. Thus, it was possible to examine which specific parts
of the MNS were associated with differences in the occurrence
of facial mimicry, i.e., the strength of congruent facial muscu-
lar reactions in response to emotional facial expressions. It was
found that mimicry reactions correlated significantly with promi-
nent parts of the classic MNS as well as with areas responsible
for emotional processing. These results and the methods for simultaneous measurement introduced here may provide a promising starting point for further investigations on moderators and
mediators of facial mimicry.
ACKNOWLEDGMENTS
This research was supported by the German Research Foundation
(DFG Research Group “Emotion and Behavior” FOR605, DFG
WE2930/2-2). The publication was funded by the German
Research Foundation (DFG) and the University of Würzburg
within the funding program Open Access Publishing. We are
grateful to the editor John J. Foxe and the two reviewers Matthew
R. Longo and Yin Wang for their fruitful comments on earlier
drafts of this paper.
REFERENCES
Allen, P. J., Josephs, O., and Turner, R. (2000). A method for removing imaging artifact from continuous EEG recorded during functional MRI. Neuroimage 12, 230–239.
Ashburner, J., and Friston, K. J. (2003). “Rigid body registration,” in Human Brain Function, eds S. Zeki, J. T. Ashburner, W. D. Penny, R. S. J. Frackowiak, K. J. Friston, C. D. Frith, R. J. Dolan, and C. J. Price (Oxford, UK: Academic Press), 635–653.
Ashburner, J., and Friston, K. J. (2005). Unified segmentation. Neuroimage 26, 839–851.
Atkinson, A. P., and Adolphs, R. (2005). “Visual emotion perception: mechanisms and processes,” in Emotion and Consciousness, eds L. F. Barrett, P. M. Niedenthal, and P. Winkielman (New York, NY: Guilford Press).
Bailenson, J. N., and Yee, N. (2005).
Digital chameleons: automatic
assimilation of nonverbal gestures
in immersive virtual environments.
Psychol. Sci. 16, 814–819.
Blakemore, S. J., and Frith, C. (2005).
The role of motor contagion
in the prediction of action.
Neuropsychologia 43, 260–267.
Blascovich, J., Loomis, J., Beall, A. C., and Bailenson, J. N. (2002). Immersive virtual environment technology as a methodological tool for social psychology. Psychol. Inq. 13, 103–124.
Carr, L., Iacoboni, M., Dubeau, M. C., Mazziotta, J. C., and Lenzi, G. L. (2003). Neural mechanisms of empathy in humans: a relay from neural systems for imitation to limbic areas. Proc. Natl. Acad. Sci. U.S.A. 100, 5497–5502.
Chartrand, T. L., and Bargh, J. A.
(1999). The chameleon effect: the
perception-behavior link and social
interaction. J. Pers. Soc. Psychol. 76, 893–910.
Cunnington, R., Windischberger, C.,
and Moser, E. (2005). Premovement
activity of the pre-supplementary
motor area and the readiness for
action: studies of time-resolved
event-related functional MRI. Hum.
Mov. Sci. 24, 644–656.
Dapretto, M., Davies, M. S., Pfeifer,
J. H., Scott, A. A., Sigman, M.,
Bookheimer, S. Y., and Iacoboni,
M. (2006). Understanding emotions
in others: mirror neuron dysfunc-
tion in children with autism spec-
trum disorders. Nat. Neurosci. 9, 28–30.
Decety, J. (2010). The neurodevelop-
ment of empathy in humans. Dev.
Neuro sci. 32, 257–267.
di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., and Rizzolatti, G. (1992). Understanding motor events: a neurophysiological study. Exp. Brain Res. 91, 176–180.
Dimberg, U. (1982). Facial reactions to
facial expressions. Psychophysiology
19, 643–647.
Dimberg, U., and Lundqvist, L.-O.
(1990). Gender differences in facial
reactions to facial expressions. Biol.
Psychol. 30, 151–159.
Dimberg, U., and Thunberg, M. (1998).
Rapid facial reactions to emotion
facial expressions. Scand. J. Psychol.
39, 39–46.
Dimberg, U., Thunberg, M., and
Elmehed, K. (2000). Unconscious
facial reactions to emotional facial
expressions. Psychol. Sci. 11, 86–89.
Dimberg, U., Thunberg, M., and
Grunedal, S. (2002). Facial reactions
to emotional stimuli: automatically
controlled emotional responses.
Cogn. Emotion 16, 449–472.
Ekman, P., and Friesen, W. V. (1976).
Pictures of Facial Affect. Palo Alto,
CA: Consulting Psychologists Press.
Ekman, P., and Friesen, W. V.
(1978). The Facial Action Coding
System. Palo Alto, CA: Consulting
Psychologists Press.
Enticott, P. G., Johnston, P. J., Herring, S. E., Hoy, K. E., and Fitzgerald, P. B. (2008). Mirror neuron activation is associated with facial emotion processing. Neuropsychologia 46, 2851–2854.
Fridlund, A. J., and Cacioppo, J. T. (1986). Guidelines for human electromyographic research. Psychophysiology 23, 567–589.
Gallese, V., Fadiga, L., Fogassi, L., and
Rizzolatti, G. (2002). “Action repre-
sentation and the inferior parietal
lobule,” in Attention and Performance XIX: Common Mechanisms in
Perception and Action, eds W. Prinz
and B. Hommel (Oxford University
Press), 247–266.
Gallese, V., Fadiga, L., Fogassi, L., and
Rizzolatti, G. (1996). Action recog-
nition in the premotor cortex. Brain
119, 593–609.
Gazzola, V., and Keysers, C. (2009).
The observation and execu-
tion of actions share motor and
somatosensory voxels in all tested
subjects: Single-subject analyses
of unsmoothed fMRI data. Cereb.
Cortex 19, 1239–1255.
Gerdes, A. B. M., Wieser, M. J., Mühlberger, A., Weyers, P., Alpers, G. W., Plichta, M. M., Breuer, F., and Pauli, P. (2010). Brain activations to emotional pictures are differentially associated with valence and arousal ratings. Front. Hum. Neurosci. 4:175. doi: 10.3389/fnhum.2010.00175
Heller, A. S., Greischar, L. L., Honor, A., Anderle, M. J., and Davidson, R. J. (2011). Simultaneous acquisition of corrugator electromyography and functional magnetic resonance imaging: a new method for objectively measuring affect and neural activity concurrently. Neuroimage 58, 930–934.
Hickok, G., and Hauser, M. (2010).
(Mis)understanding mirror
neurons. Curr. Biol. 20, R593-R594.
Iacoboni, M., and Dapretto, M. (2006).
The mirror neuron system and the
consequences of its dysfunction.
Nat. Rev. Neurosci. 7, 942–951.
Jabbi, M., and Keysers, C. (2008).
Inferior frontal gyrus activity
triggers anterior insula response
to emotional facial expressions.
Emotion 8, 775–780.
Jann, K., Wiest, R., Hauf, M., Meyer, K., Boesch, C., Mathis, J., Schroth, G., Dierks, T., and Koenig, T. (2008). BOLD correlates of continuously fluctuating epileptic activity isolated by independent component analysis. Neuroimage 42, 635–648.
Krumhuber, E., and Kappas, A. (2005).
Moving smiles: the role of dynamic
components for the perception
of the genuineness of smiles. J.
Nonverbal Behav. 29, 3–24.
Lee, T.-W., Josephs, O., Dolan, R. J., and
Critchley, H. D. (2006). Imitating
expressions: Emotion-specific neu-
ral substrates in facial mimicry. Soc.
Cogn. Affect. Neurosci. 1, 122–135.
Leslie, K. R., Johnson-Frey, S. H., and Grafton, S. T. (2004). Functional imaging of face and hand imitation: towards a motor theory of empathy. Neuroimage 21, 601–607.
Likowski, K. U., Mühlberger, A., Seibt, B., Pauli, P., and Weyers, P. (2008). Modulation of facial mimicry by attitudes. J. Exp. Soc. Psychol. 44, 1065–1072.
Maldjian, J. A., Laurienti, P. J., and Burdette, J. H. (2004). Precentral gyrus discrepancy in electronic versions of the Talairach atlas. Neuroimage 21, 450–455.
Maldjian, J. A., Laurienti, P. J., Kraft, R. A., and Burdette, J. H. (2003). An automated method for neuroanatomic and cytoarchitectonic atlas-based interrogation of fMRI data sets. Neuroimage 19, 1233–1239.
Mobbs, D., Greicius, M. D., Abdel-Azim, E., Menon, V., and Reiss, A. L. (2003). Humor modulates the mesolimbic reward centers. Neuron 40, 1041–1048.
Molenberghs, P., Cunnington, R., and Mattingley, J. B. (2012). Brain regions with mirror properties: a meta-analysis of 125 human fMRI studies. Neurosci. Biobehav. Rev. 36, 341–349.
Mühlberger, A., Wieser, M. J., Gerdes, A. B. M., Frey, M. C. M., Weyers, P., and Pauli, P. (2011). Stop looking angry and smile, please: start and stop of the very same facial expression differentially activate threat- and reward-related brain networks. Soc. Cogn. Affect. Neurosci. 6, 321–329.
Mukamel, R., Ekstrom, A. D., Kaplan,
J., Iacoboni, M., and Fried, I. (2010).
Single-neuron responses in humans
during execution and observa-
tion of actions. Curr. Biol. 20, 750–756.
Musso, F., Brinkmeyer, J., Mobascher,
A., Warbrick, T., and Winterer,
G. (2010). Spontaneous brain
activity and EEG microstates. A
novel EEG/fMRI analysis approach
to explore resting-state networks.
Neuroimage 52, 1149–1161.
Niedenthal, P. M. (2007). Embodying
emotion. Science 316, 1002–1005.
Niedenthal, P. M., Brauer, M.,
Halberstadt, J. B., and Innes-
Ker, A. H. (2001). When did her
smile drop? Facial mimicry and the
influences of emotional state on the
detection of change in emotional
expression. Cogn. Emotion 15,
Niedenthal, P. M., Winkielman, P.,
Mondillon, L., and Vermeulen, N.
(2009). Embodiment of emotion
concepts. J. Pers. Soc. Psychol. 96,
Oberman, L. M., Winkielman, P., and
Ramachandran, V. S. (2007). Face
to face: Blocking facial mimicry
can selectively impair recognition
of emotional expressions. Soc.
Neurosci. 2, 167–178.
Rizzolatti, G., and Craighero, L. (2004).
The mirror-neuron system. Annu.
Rev. Neurosci. 27, 169–192.
Russell, J. A., and Bullock, M. (1985).
Multidimensional scaling of
emotional facial expressions:
similarity from preschoolers to
adults. J. Pers. Soc. Psychol. 48,
Schilbach, L., Eickhoff, S. B., Mojzisch,
A., and Vogeley, K. (2008). What’s in
a smile? Neural correlates of facial
embodiment during social interac-
tion. Soc. Neurosci. 3, 37–50.
Spencer-Smith, J., Wild, H., Innes-Ker, A. H. (2001). Making faces: creating three-dimensional parameterized models of facial expression. Behav. Res. Methods Instrum. Comput. 33, 115–123.
Tzourio-Mazoyer, N., Landeau, B.,
Papathanassiou, D., Crivello, F.,
Etard, O., Delcroix, N., Mazoyer, B.,
and Joliot, M. (2002). Automated
anatomical labeling of activations in
SPM using a macroscopic anatom-
ical parcellation of the MNI MRI
single-subject brain. Neuroimage
15, 273–289.
van der Gaag, C., Minderaa, R. B., and
Keysers, C. (2007). Facial expres-
sions: what the mirror neuron sys-
tem can and cannot tell us. Soc.
Neurosci. 2, 179–222.
Vogt, B. A. (2005). Pain and emotion
interactions in subregions of the
cingulate gyrus. Nat. Rev. Neurosci.
6, 533–544.
Wallbott, H. G. (1991). Recognition of
emotion from facial expression via
imitation? Some indirect evidence
for an old theory. Br. J. Soc. Psychol.
30, 207–219.
Conflict of Interest Statement: The
authors declare that the research
was conducted in the absence of any
commercial or financial relationships
that could be construed as a potential
conflict of interest.
Received: 24 April 2012; accepted: 02 July 2012; published online: 26 July 2012.
Citation: Likowski KU, Mühlberger A, Gerdes ABM, Wieser MJ, Pauli P and Weyers P (2012) Facial mimicry and the mirror neuron system: simultaneous acquisition of facial electromyography and functional magnetic resonance imaging. Front. Hum. Neurosci. 6:214. doi: 10.3389/fnhum.2012.00214
Copyright © 2012 Likowski, Mühlberger, Gerdes, Wieser, Pauli and Weyers. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.
... However, an absence of mimicry has sometimes been observed whilst using pictures of faces [15,[22][23]. Considering the nature of the emotion, the congruence between the stimulus emotion and the one expressed by contagion is robust for anger and joy [2,[9][10][12][13][17][18]21,[24][25][26]. Data for sadness and surprise are scarcer [14], or even diverging for fear and disgust [11,[14][15][17][18]. Several variables may impact the presence of mimicry as measured by EMG (e.g. ...
... These studies have several limitations, however. Even though the influence of the emotion emitter's sex has been reported, most studies only include women [9,18,24,26,[40][41][42][43]. The stimuli are often very selective, conveyed by a single material such as images, sounds or videos, and focus on a limited number of emotions. ...
... Most of them are emotionally intense, presented in a static manner [2,9,[12][13]14,25,44] and selected among the "images of facial affect" [45]. In an attempt to overcome some of these drawbacks, some studies used more artificial stimuli, such as avatars [22,26,42] or morphed images [10,20,23,46]. The effect of the task to be accomplished has seldom been considered [47][48][49], the stimuli being often processed in a passive manner, without any specific instructions. ...
... This basic matching mechanism underlie perception of disgust in self and others (Wicker et al., 2003), as well as pain (Timmers et al., 2018), laughter and joy (Caruana et al., 2017). Thus, perceiving others' facial expressions activates motor and somatosensory areas involved in the execution of the same facial behavior (Schilbach et al., 2008;Likowski et al., 2012). ...
... Interestingly, quite robust evidence suggests that the MNS is causally involved in phenomena of facial mimicry and emotional contagion (Hogeveen et al., 2015;Kraaijenvanger et al., 2017;Paz et al., 2022). This has been shown in the last decades through studies that have inquired simultaneously into the activity of the brain with more than one neuroscientific tool (Likowski et al., 2012). The simultaneous use of different neuroscience techniques with different direction of bias is often employed to disambiguate controversial results about causal questions (Tramacere, 2021). ...
Full-text available
In philosophical and psychological accounts alike, it has been claimed that mirror gazing is like looking at ourselves as others. Social neuroscience and social psychology offer support for this view by showing that we use similar brain and cognitive mechanisms during perception of both others’ and our own face. I analyse these premises to investigate the factors affecting the perception of one’s own mirror image. I analyse mechanisms and processes involved in face perception, mimicry, and emotion recognition, and defend the following argument: because perception of others’ face is affected by our feelings toward them, it is likely that feelings toward ourselves affect our responses to the mirror image. One implication is that negative self-feelings can affect mirror gazing instantiating a vicious cycle where the negative emotional response reflects a previously acquired attitude toward oneself. I conclude by discussing implications of this view for psychology and social studies.
... The emotional expression would be imitated and experienced to some or other extent by the perceiver. For example, the perception of a smile on another person's face might trigger an image of the situation generating that emotion, which includes the affective experience, the physiological response, and the activation of the facial muscles (e.g., Achaibou et al., 2008;Likowski et al., 2012). As noted, this phenomenon is referred to as "facial mimicry" (Hess & Fischer, 2014). ...
Sometimes we advise others persons on the decisions they should make, and we accept risks that would be modulated by cognitive and emotional variables. In order to analyze the role of the expressed emotion in this type of interactions, an experiment was conducted that consisted of manipulating the type of emotion (facial expression: happiness vs. sadness) and the type of advice (health vs. financial) to test its impact on risk-taking and the confidence in the response. The subjects accepted less risk when the facial expression was sadness (vs. happiness) in the financial situations. The findings are discussed as part of the reciprocity process in social interaction, where emotional information could play an important modulating role.
... These neural mechanisms allow us to understand other people, and, therefore, play a crucial role in social behavior (Gallese, 2003). Consequently, the discovery of mirror neurons offers a promising physiological explanation for social abilities of humans needed in everyday life, such as feeling empathy for others (Gallese, 2003), observational learning (Petrosini et al., 2003), or (unconsciously) mimicking others (Likowski et al., 2012). ...
Full-text available
Virtual reality allows users to experience a sense of ownership of a virtual body-a phenomenon commonly known as the body ownership illusion. Researchers and designers aim at inducing a body ownership illusion and creating embodied experiences using avatars-virtual characters that represent the user in the digital world. In accordance with the real world where humans own a body and interact via the body with the environment, avatars thereby enable users to interact with virtual worlds in a natural and intuitive fashion. Interestingly, previous work revealed that the appearance of an avatar can change the behavior, attitude, and perception of the embodying user. For example, research found that users who embodied attractive or tall avatars behaved more confidently in a virtual environment than those who embodied less attractive or smaller avatars. Alluding to the versatility of the Greek God Proteus who was said to be able to change his shape at will, this phenomenon was termed the Proteus effect. For designers and researchers of virtual reality applications, the Proteus effect is therefore an interesting and promising phenomenon to positively affect users during interaction in virtual environments. They can benefit from the limitless design space provided by virtual reality and create avatars with certain features that improve the users' interaction and performance in virtual environments. To utilize this phenomenon, it is crucial to understand how to design such avatars and their characteristics to create more effective virtual reality applications and enhanced experiences. Hence, this work explores the Proteus effect and the underlying mechanisms with the aim to learn about avatar embodiment and the design of effective avatars. 
This dissertation presents the results of five user studies focusing on the body ownership of avatars, and how certain characteristics can be harnessed to make users perform better in virtual environments than they would in casual embodiments. Hence, we explore methods for inducing a sensation of body ownership of avatars and learn about perceptual and physiological consequences for the real body. Furthermore, we investigate whether and how an avatar's realism and altered body structures affect the experience. This knowledge is then used to induce body ownership of avatars with features connected with high performance in physical and cognitive tasks. Hence, we aim at enhancing the users' performance in physically and cognitively demanding tasks in virtual reality. We found that muscular and athletic avatars can increase physical performance during exertion in virtual reality. We also found that an Einstein avatar can increase the cognitive performance of another user sharing the same virtual environment. This thesis concludes with design guidelines and implications for the utilization of the Proteus effect in the context of human-computer interaction and virtual reality.
... mehreren Interakti-onspartner_innen, wobei Signale oftmals schon nach 0,3-0,4 Sekunden nach deren Auftreten gespiegelt werden. Die vielfach erforschte positive Wirkung von Mimikry auf die frühkindliche Entwicklung, auf zwischenmenschliche Bindungen, auf die Persönlichkeitsentwicklung und auf das Verständnis und Einfühlungsvermögen für das Gegenüber hat in der Wissenschaft das Interesse an diesem nonverbalen Phänomen geweckt (Ashenfelter et al. 2009;Holler 2011;Isabella und Belsky 1991;Likowski et al. 2012;Ramseyer und Tschacher 2008). Auch in der Emotionsforschung fand das Phänomen der Mimikry Einzug. ...
Full-text available
Zusammenfassung In der vorliegenden naturalistisch angelegten Studie wurde das körperliche Zusammenspiel von Therapieteilnehmer_innen im Kontext von systemischen Psychotherapiesitzungen im Mehrpersonensetting untersucht. Ziel der Studie war die Beantwortung der Frage, ob die subjektive Einschätzung der Beziehungsqualität der Therapieteilnehmer_innen mit der Anzahl, Dauer und Intensität von beobachtbarem (synchronem) nonverbalen Verhalten der anwesenden Personen korreliert. Dafür wurden fünf Therapiesitzungen im Mehrpersonensetting auf Video aufgezeichnet. Die zuvor von einer Interpret_innengemeinschaft ausgewählten Schlüsselszenen wurden in einem manuellen Annotationstool auf fünf körpersprachliche Kategorien (Mimik, Augenkontakt, Kopfbewegungen, Gestik und Körperhaltung) hin untersucht. Statistische Analysen zeigen, dass mehrere Variablen nonverbalen Verhaltens – und hier insbesondere die erforschten mimisch-affektiven Verhaltensweisen – mit der subjektiven Einschätzung der Stärke der therapeutischen Allianz korrelieren. Wie bereits in vorrangegangenen wissenschaftlichen Arbeiten beschrieben, legen die Ergebnisse nahe, dass die nonverbale Kommunikation einen wichtigen Aspekt der therapeutischen Allianz ausmacht. Darüber hinaus zeigt die vorliegende Studie aber auch, dass das Phänomen der Mimikry auch zwischen drei und mehr Therapieteilnehmer_innen auftritt und in Zusammenhang mit der Stärke der therapeutischen Allianz steht. Die vorliegende Studie beleuchtet die Rolle der Mimikry aus systemischer Perspektive und erläutert die körpersprachliche Mitgestaltung auf Beziehungsebene in der Psychotherapie.
Facial electromyography (EMG) allows to detect and quantify overt as well as subtle covert contractions of striatal facial muscles. Subjective and partly implicit affective experiences, such as the hedonic pleasure felt when consuming an exquisite meal, can thus be revealed and objectively quantified. Further, facial EMG is a convenient tool for translational research, as it can be combined with other research techniques, and therefore help to unravel brain mechanisms underlying the processing of different types of rewards. In this chapter we aim to provide step-by-step guidelines for the acquisition of facial EMG in food research in humans, using noninvasive surface electrodes. Implementations of facial EMG in behavioral settings and in combination with functional magnetic resonance imaging are discussed.Key wordsFacial EMGFacial expressionHedonic reactionFood anticipationFood consumption
Emotional facial expressions are primary media for human emotional communication. However, the psychological and neural mechanisms underpinning the processing of such facial expressions remain unclear. This article reviews the findings of our psychological and neuroscientific studies, which demonstrated the following: (1) that the emotional processing of facial expressions is accomplished unconsciously and is associated with amygdala activity at about 100 ms ; (2) that the perception of emotional facial expressions is more rapid than that of neutral expressions and is associated with enhanced activity in the visual cortices at about 200 ms ; and (3) that facial expressions automatically elicit facial mimicry and that this motor processing is related to activity in the inferior frontal gyrus at approximately 300 ms. These data suggest that few hundred ms needed to process processing of emotional facial expressions involve multiple psychological dimensions, including feeling, seeing, and mimicking, as well as widespread neural activities in the amygdala, visual cortices, and inferior frontal gyrus.
In everyday life we actively react to the emotion expressions of others, responding by showing matching, or sometimes contrasting, expressions. Emotional mimicry has important social functions such as signalling affiliative intent and fostering rapport and is considered one of the cornerstones of successful interactions. This book provides a multidisciplinary overview of research into emotional mimicry and empathy and explores when, how and why emotional mimicry occurs. Focusing on recent developments in the field, the chapters cover a variety of approaches and research questions, such as the role of literature in empathy and emotional mimicry, the most important brain areas involved in the mimicry of emotions, the effects of specific psychopathologies on mimicry, why smiling may be a special case in mimicry, whether we can also mimic vocal emotional expressions, individual differences in mimicry and the role of social contexts in mimicry.
Conference Paper
Spontaneous muscular activities can be studied by simultaneous recordings of surface electromyography (sEMG) and diffusion-weighted magnetic resonance imaging (DW-MRI). For reliable assessment of the spontaneous activity rate in sEMG data during active MR imaging, it is necessary to have a decent gradient artifact (GA) correction algorithm enabling the detection of small spontaneous activities with an amplitude of few microvolts. In this work, a neural network with weak label annotations during the training process is utilized for enhanced correction of GA residuals in the sEMG recordings. Based on sEMG signal decomposition and class-activation maps from the neural network classification, the amount of GA residuals is iteratively decreased in the sEMG signal. This leads to a reduction of the false-positive rate in automated spontaneous activity detection. Quality of GA residual correction is therefore estimated by using a specialized second neural network model. Clinical relevance- This work establishes an improved GA residual correction for simultaneously recorded sEMG data during MRI to enhance the ability for small spontaneous activity detection.
In this chapter, after clarifying which definition of emotion we follow, we examine, starting from Darwin and evolutionary psychology, the main mechanisms of emotion recognition from a behavioral and cerebral point of view: emotional contagion and cognitive empathy. The link between these skills and social cognition is discussed. Drawing on comparative studies in animals, studies of cerebellar lesions in animals and humans, neurostimulation studies, and studies of neuropsychiatric pathologies involving alterations of cerebellar networks, we explore the possible involvement of the cerebellum in these mechanisms and investigate its possible causal role. The evidence, although mainly correlational, is numerous and robust enough to affirm a significant involvement of the cerebellum in social cognition and in the recognition of negative emotions, especially fear.
Historically, at least 3 methodological problems have dogged experimental social psychology: the experimental control-mundane realism trade-off, lack of replication, and unrepresentative sampling. We argue that immersive virtual environment technology (IVET) can help ameliorate, if not solve, these methodological problems and, thus, holds promise as a new social psychological research tool. In this article, we first present an overview of IVET and review IVET-based research within psychology and other fields. Next, we propose a general model of social influence within immersive virtual environments and present some preliminary findings regarding its utility for social psychology. Finally, we present a new paradigm for experimental social psychology that may enable researchers to unravel the very fabric of social interaction.
We recorded electrical activity from 532 neurons in the rostral part of inferior area 6 (area F5) of two macaque monkeys. Previous data had shown that neurons of this area discharge during goal-directed hand and mouth movements. We describe here the properties of a newly discovered set of F5 neurons ("mirror neurons", n = 92), all of which became active both when the monkey performed a given action and when it observed a similar action performed by the experimenter. To be visually triggered, mirror neurons required an interaction between the agent of the action and its object; the sight of the agent alone or of the object alone (three-dimensional objects, food) was ineffective. The hand and the mouth were by far the most effective agents. The actions most represented among those activating mirror neurons were grasping, manipulating, and placing. In most mirror neurons (92%) there was a clear relation between the visual action they responded to and the motor response they coded. In approximately 30% of mirror neurons the congruence was very strict, and the effective observed and executed actions corresponded both in terms of the general action (e.g., grasping) and in terms of the way in which that action was executed (e.g., precision grip). We conclude by proposing that mirror neurons form a system for matching observation and execution of motor actions. We discuss the possible role of this system in action recognition and, given the proposed homology between F5 and human Broca's region, we posit that a matching system similar to that of mirror neurons exists in humans and could be involved in the recognition of actions as well as phonetic gestures.
The chameleon effect refers to nonconscious mimicry of the postures, mannerisms, facial expressions, and other behaviors of one's interaction partners, such that one's behavior passively and unintentionally changes to match that of others in one's current social environment. The authors suggest that the mechanism involved is the perception-behavior link, the recently documented finding (e.g., J. A. Bargh, M. Chen, & L. Burrows, 1996) that the mere perception of another's behavior automatically increases the likelihood of engaging in that behavior oneself. Experiment 1 showed that the motor behavior of participants unintentionally matched that of strangers with whom they worked on a task. Experiment 2 had confederates mimic the posture and movements of participants and showed that mimicry facilitates the smoothness of interactions and increases liking between interaction partners. Experiment 3 showed that dispositionally empathic individuals exhibit the chameleon effect to a greater extent than do other people.
Based on a model in which the facial muscles can be both automatically/involuntarily controlled and voluntarily controlled by conscious processes, we explore whether spontaneously evoked facial reactions can be evaluated in terms of criteria for what characterises an automatic process. In three experiments, subjects were instructed not to react with their facial muscles, or to react as quickly as possible by wrinkling the eyebrows (frowning) or elevating the cheeks (smiling), when exposed to pictures of negative or positive emotional stimuli, while EMG activity was measured from the corrugator supercilii and zygomatic major muscle regions. Consistent with the proposition that facial reactions are automatically controlled, the results showed that the corrugator muscle reaction was facilitated to negative stimuli and the zygomatic muscle reaction was facilitated to positive stimuli. The results further showed that, despite being required not to react with their facial muscles at all, subjects could not avoid producing a facial reaction that corresponded to the negative and positive stimuli.
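Facilitation effects like these are typically quantified by comparing rectified EMG activity after stimulus onset against a pre-stimulus baseline, separately for each muscle channel. A minimal sketch of that computation, with illustrative names and analysis windows (not the authors' actual pipeline):

```python
import numpy as np

def emg_response(raw, baseline_slice, response_slice):
    """Quantify a facial EMG reaction for one trial and one channel
    (e.g., corrugator supercilii or zygomatic major).

    raw            -- 1-D EMG trace for the trial
    baseline_slice -- sample slice before stimulus onset
    response_slice -- sample slice after stimulus onset
    Returns the mean rectified amplitude change (response - baseline);
    positive values indicate a facilitated muscle reaction.
    """
    rectified = np.abs(raw)  # full-wave rectification
    baseline = rectified[baseline_slice].mean()
    response = rectified[response_slice].mean()
    return response - baseline
```

Averaging these baseline-corrected scores per stimulus category (e.g., zygomatic responses to positive vs. negative pictures) then yields the congruence pattern the abstract describes.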