From the Eye to the Heart: Eye Contact Triggers
Emotion Simulation
Magdalena Rychlowska
Department of Psychology
Clermont Université, France
34 av Carnot
63037 Clermont-Ferrand
Leah Zinner
Department of Psychology
Oglethorpe University,
4484 Peachtree Rd NE, Atlanta,
GA, 30319
Serban C. Musca
Université Rennes 2, France
Place Recteur Henri Le Moal
35043 Rennes Cedex
Paula M. Niedenthal
Department of Psychology
University of Wisconsin-Madison
1202 W Johnson St
Madison, WI, 53706-1611
ABSTRACT
Smiles are complex facial expressions that carry multiple
meanings. Recent literature suggests that deep processing of
smiles via embodied simulation can be triggered by achieved
eye contact. Three studies supported this prediction. In Study
1, participants rated the emotional impact of portraits, which
varied in eye contact and smiling. Smiling portraits that
achieved eye contact were more emotionally impactful than
smiling portraits that did not achieve eye contact. In Study 2,
participants saw photographs of smiles in which eye contact
was manipulated. The same smile of the same individual
caused more positive emotion and higher ratings of authenticity
when eye contact was achieved than when it was not. In Study
3, participants’ facial EMG was recorded. Activity over the
zygomatic major (i.e. smile) muscle was greater when
participants observed smiles that achieved eye contact
compared to smiles that did not. These results support the role
of eye contact as a trigger of embodied simulation. Implications
for human-machine interactions are discussed.
ACM Classification Keywords
H.1.2 Models and Principles: User/Machine Systems - Human
factors; H.5.2 Information Interfaces and Presentation:
User-centered design; H.5.3 Information Interfaces and
Presentation - Synchronous
General Terms
Experimentation, Human Factors
Keywords
Eye contact, smile, facial expression, embodied simulation
There is a road from the eye to the heart that does not go
through the intellect.
-G. K. Chesterton
1. INTRODUCTION
Understanding the subtle meaning of facial expression is a daily
challenge, and the smile might be the most challenging of
expressions. While it is true that prototypical smiles are
universally recognized as signs of joy [11, 15, 22], suggesting
that this expression is easily interpreted, other research [1, 13]
attests to its complexity.
How do people understand a smile? This question is addressed
in the Simulation of Smiles Model (SIMS), recently proposed
by Niedenthal, Mermillod, Maringer, and Hess [30]. The
present research was conducted in order to test a specific
hypothesis generated by the SIMS, namely that eye contact is a
sufficient trigger for embodied simulation of smiles.
1.1 The Simulation of Smiles (SIMS) Model
The SIMS model integrates social psychological research with
recent findings in neuroscience in order to propose how the
specific meaning of a smile is arrived at. According to the
SIMS, three operations can be used to process smiles:
perceptual analysis (matching the smile to representations of
prototypical smiles), top-down application of beliefs and
stereotypes, and embodied simulation.
Embodied simulation refers to the partial reenactment of a
corresponding state in the motor, somatosensory, affective, and
reward systems. This reenactment represents the meaning of the
expression to the perceiver [17, 10, 29], as if he or she were in
the place of the smiling person. The perception of a smile is
therefore accompanied by the bodily and affective states
associated with the production of this facial expression. In
addition to affective state, an important part of the embodied
simulation of a smile is facial mimicry. We define facial
mimicry as the visible or non-visible use of facial musculature
by an observer to imitate another person's facial expression.
The important role of facial mimicry was suggested by the
findings of Stel & van Knippenberg [37]. They showed that
inhibiting facial mimicry decreased the speed of judging facial
displays as expressing positive or negative emotion. In another
study, Maringer et al. [26] showed that inhibition of facial
mimicry impaired the distinction between genuine and
nongenuine smiles. A recent study by Neal and Chartrand [28]
further bolsters this conclusion, showing that amplifying facial
mimicry improves one’s ability to read others’ facial emotions.
Although parts of embodied simulation, such as facial mimicry,
appear to be helpful in forming an accurate understanding of
facial expression, what is less clear are the conditions under
which embodied simulation occurs. According to the SIMS
model, a sufficient though not necessary trigger for embodied
simulation is the achievement of eye contact with the individual
displaying the expression.

Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies
are not made or distributed for profit or commercial advantage and that
copies bear this notice and the full citation on the first page. To copy
otherwise, or republish, to post on servers or to redistribute to lists,
requires prior specific permission and/or a fee.
Gaze-In’12, October 26, 2012, Santa Monica, California, USA.
Copyright 2012 ACM 978-1-4503-1516-6 …$15.00.
1.2 Eye Contact as a Trigger to Simulation
Both developmental research [14, 19, 25] and work on
intimacy [21, 34] provide hints of the role of eye contact in
embodied simulation of emotion. This role is more explicitly
indicated by the findings of Bavelas, Black, Lemery, and
Mullett [6] on the perception of pain expressions. There, a
confederate faked the experience of pain and expressed the pain
facially. Further, he made eye contact with some of the
participants but not others. Eye contact significantly affected
participants’ reactions: they mimicked the confederate’s
expressions most clearly when eye contact with the confederate
was made. Relatedly, Schrammel and colleagues [35] showed
that participants’ zygomatic major muscle activity was stronger
when viewing happy faces than neutral faces, and, most
importantly, facial expression had an effect only under
conditions of eye contact. These results suggest a close link
between eye contact and facial mimicry.
In the present three studies, our aim was to test the SIMS
model’s specific hypothesis that eye contact is a trigger of
embodied simulation of the smile. The first study relied on
existing portraiture paintings. We selected portraits of subjects
who achieved different degrees of eye contact with the viewer,
and who expressed smiles. Participants saw each portrait twice.
On one exposure the participant viewed the full portrait; on the
other exposure the eyes of the portrait subject were obscured.
The indicator of embodied simulation was the participant’s
rating of the emotional impact of the painting. Since embodied
simulation is related to affective change, the more a smile is
embodied in the self, the more the viewer should report an
emotional response to the portrait. If the eye-contact-as-trigger
hypothesis is correct, then the emotional impact of the portrait
should be significantly greater when the eyes are unmasked
versus masked, and this should be particularly true if the viewer
achieves eye contact with the portrait on the unmasked trial. In
contrast, if participants were using a perceptual analysis for
decoding the smile, then seeing the eyes per se would be
important, but level of eye contact would be irrelevant to
personal feelings of emotion.
2. STUDY 1
2.1 Method
2.1.1 Participants
Undergraduates (101 female, 13 male) from two medium-size
universities participated in exchange for course credit. Data
from 6 participants were discarded because they were
incomplete or because they failed to follow instructions.
2.1.2 Stimuli
Paintings were selected from art archive internet sites by a
research assistant who was blind to the hypotheses. Criteria that
guided the selection of potential target portraits included that
the portrait showed a frontal and not profile view, and that the
eyes were clearly visible. Neither portraits of celebrities nor
very famous portraits were included in the final set. The 16
target portraits were selected based on a pilot study involving
39 undergraduate students (27 female, 12 male) from a
medium-sized university. Participants saw 32 smiling portraits
and rated the extent to which they were certain that the subject
of the portrait was actually smiling. Responses were made on
scales from 0 (not at all sure) to 100 (very sure). The 16
portraits selected as targets were those for which the average
ratings of certainty that the displayed expression was a smile
were the highest (M = 73.22, SD = 13.07). Among the 16
targets, the level of eye contact varied substantially (see
examples in Figure 1).
The final stimulus set consisted of 72 paintings from the 16th
through 20th centuries: 56 distractors and 16 target portraits. The
distractors (portraits, landscapes, and still life works) were
included to minimize demand characteristics.
A mask (pattern: small checkerboard, colors: 98, 92, 56 and
181, 188, 146 RGB) obscured the eyes for one presentation of
all 32 portraits (i.e., both target and distractor portraits; Fig. 1,
bottom panel). Four mask sizes (128 by 22 pixels, 158 by 22
pixels, 189 by 45 pixels and 242 by 60 pixels) were used,
depending on the face area proportions. Masks did not
systematically cover any particular portion of the eye area but
always obscured eye gaze, and they were applied randomly to
the landscape and still life paintings.
2.1.3 Procedure
Participants were tested in pairs, but worked independently at
individual computer stations. They were seated approximately
0.5 m from the screen (20", display resolution: 1280 x 768).
The experiment was programmed in E-Prime Version 1.2
(1996-2006 Psychology Software Tools).
Each of the 72 paintings was presented twice (once masked and
once unmasked) in a random order, with the constraint that one
exposure occurred in the first, and the other in the second half
of the trials. Stimuli were displayed on a black background.
The inter-trial interval was 800 ms, during which participants
saw a black screen.
Stimuli are available on-line at :
Figure 1. Portraits achieving eye contact (left) and not
achieving eye contact (right), in unmasked (top row) and
masked (bottom row) conditions.
For masked and unmasked presentations, target portraits were
accompanied by the question, presented simultaneously at the
bottom of the screen, “How emotional is the impact of the
painting?” Participants responded by positioning a cursor on a
bar ranging from 0 (no emotion) to 100 (a lot of emotion).
Positive emotion was not mentioned in the question in order to
minimize demand characteristics. For half of the distractors, a
filler question appeared and the other half was presented
without a question.
In the second part of the experiment, participants saw the 16
target portraits again. This time they rated the amount of
perceived eye contact (“How much eye contact does the subject
establish with you as the viewer?”) using the scale described
above (cursor bar ranging from 0, no eye contact to 100, a lot of
eye contact). At the end of the session the experimenter
debriefed the participants and probed for suspicion.
2.1.4 Results
We first divided the target portraits into two groups, based on a
median split of the eye contact ratings averaged across subjects:
portraits achieving eye contact and portraits not achieving eye
contact.
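The median split just described can be sketched as follows. The per-portrait ratings below are invented for illustration; the study's actual per-portrait means are not reported here.

```python
from statistics import median

# Hypothetical mean eye-contact ratings (0-100) for the 16 target
# portraits, averaged across participants. Invented numbers, not
# the study's data.
portrait_ratings = [12, 18, 25, 31, 38, 42, 47, 51,
                    55, 60, 66, 71, 78, 83, 89, 94]

cut = median(portrait_ratings)

# Portraits above the median form the "eye contact achieved" group,
# the rest the "eye contact not achieved" group.
achieved = [r for r in portrait_ratings if r > cut]
not_achieved = [r for r in portrait_ratings if r <= cut]
```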
Ratings of emotional impact were then submitted to a 2 (mask:
masked vs. unmasked) x 2 (eye contact: achieved or not
achieved) repeated-measures ANOVA. Unsurprisingly, there
was a main effect of mask, F(1,107) = 92.05, p < .001, such that
emotional impact was higher for unmasked (M = 54.02, SD =
16.83) than for masked portraits (M = 42.97, SD = 15.64, d =
0.93). Emotional impact also varied as a function of eye
contact, F(1,107) = 117.80, p < .001, such that portraits that
achieved eye contact had more emotional impact on the
observer than portraits that did not achieve eye contact (M =
53.63, SD = 15.84, M = 43.36, SD = 15.93, d = 1.04).
Most importantly, as predicted, mask interacted with eye contact,
F(1,107) = 17.76, p < .001, such that the difference between the
emotional impact of masked and unmasked trials was higher for
portraits achieving eye contact (M = 13.09, SD = 12.57) than for
smiles that did not achieve eye contact (M = 9.00, SD = 13.39, d
= 0.41).
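The interaction reported above amounts to comparing the unmasked-minus-masked difference across the two eye-contact groups. A minimal sketch with invented ratings for a single participant (not the study's data):

```python
from statistics import mean

# Hypothetical emotional-impact ratings (0-100) for one participant,
# four target portraits per cell of the 2 (mask) x 2 (eye contact)
# design. Invented numbers, not the study's data.
ratings = {
    ("unmasked", "eye contact"): [62, 58, 65, 60],
    ("masked", "eye contact"): [48, 45, 50, 47],
    ("unmasked", "no eye contact"): [50, 47, 52, 49],
    ("masked", "no eye contact"): [42, 40, 44, 41],
}

cell_means = {cell: mean(vals) for cell, vals in ratings.items()}

# Unmasking effect (unmasked minus masked) within each eye-contact level
unmask_effect = {
    gaze: cell_means[("unmasked", gaze)] - cell_means[("masked", gaze)]
    for gaze in ("eye contact", "no eye contact")
}

# The mask x eye contact interaction corresponds to this
# difference-of-differences being positive
interaction = unmask_effect["eye contact"] - unmask_effect["no eye contact"]
```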
The dichotomization of continuous variables is a controversial
practice that decreases statistical power [7]. We therefore
reanalyzed the data using eye contact as a continuous variable.
Since participants rated the emotional impact of each of the 16
target portraits twice, impact ratings could not be considered
independent. Therefore, we used hierarchical modeling (HLM
software, version 6.06) [26] with portraits as the level-1 units
and participants as level-2 units. There were a total of 1728
observations. The intercept was allowed to vary randomly.
Mask and eye contact were specified as predictors.
Analysis of the main effects revealed the expected effect of
mask, t(107) = 9.93, p < .001, such that the emotional impact of
unmasked portraits was higher than the impact of masked
portraits. Also, emotional impact significantly increased with
eye contact, t(1726) = 11.18, p < .001. Most importantly, mask
interacted with eye contact, t(1726) = 4.43, p < .001, such that
the difference between masked and unmasked trials was
greatest for portraits achieving high levels of eye contact.
2.1.5 Discussion
Our results are consistent with the hypothesis that eye contact
triggers embodied simulation of smiles, estimated by the
reported emotional impact of portraiture painting. This impact
was greater when the subject’s eyes were visible, versus when
masked. More importantly, the difference was significantly
greater when eye contact was achieved. Facial mimicry and
the production of a corresponding emotional state are two
components of embodied simulation. Our finding complements
other results in the literature that demonstrate eye contact is
associated with greater facial mimicry [6, 35].
A limitation of Study 1 was that although we experimentally
manipulated whether or not the eyes were visible, we did not
manipulate eye contact. Further, we used only one indicator of
simulation: emotional impact. In Study 2 we tried to address
these limitations by manipulating eye contact and using a
different measure of embodied simulation, namely, ratings of
positivity and genuineness of smiles. We were inspired by past
research showing that smiles judged as genuine are related to
greater facial mimicry and positive feelings in the perceiver [12,
36]. If eye contact is a trigger of embodied simulation, ratings
of positivity and genuineness of smiles should be higher under
conditions of achieved eye contact.
3. STUDY 2
3.1 Method
3.1.1 Participants
41 undergraduates (40 females, 1 male) from a medium-sized
university took part in exchange for course credit. Data from 4
participants were discarded from further analyses due to their
failure to follow instructions.
3.1.2 Materials
72 photographs of smiles were developed for the study. 12
models (6 female, 6 male) were photographed by a professional
photographer in the presence of an expert on facial expression
of emotion. The expert used standard instructions [12] for
eliciting Duchenne and non-Duchenne smiles. Each model was
photographed smiling with three levels of eye contact: direct
gaze (high eye contact), left averted and right averted gaze (see
Figure 2).
3.1.3 Procedure
Participants were tested in pairs, but worked independently.
They were exposed to each of the 72 photographs (screen size:
20", display resolution: 1280 x 768, picture size: 380 by 475
pixels) for 1500 ms. Their task was to rate the degree to
which they perceived the smile to be genuine on a scale ranging
from 0 (not genuine at all) to 100 (very genuine), and the
degree to which they perceived the smile to be positive on a
scale ranging from 0 (not at all positive) to 100 (very positive).
Stimuli are available on-line at :
Figure 2. Smile with achieved eye contact and gaze
averted to the left/right
3.1.4 Results
Two one-way ANOVAs were conducted with gaze (eye contact
or averted) as the independent variable, and genuineness and
positivity as the dependent variables. There was a main effect
of gaze on ratings of genuineness such that smiles with eye
contact were judged as more genuine (M = 60.99, SD = 11.21)
than smiles with averted gaze (M = 58.93, SD = 10.08), t(36) =
2.47, p = .018, d = 0.42. This was also true for positivity: smiles
that achieved eye contact were rated as significantly more
positive (M = 64.29, SD = 11.68) than smiles with averted gaze
(M = 60.54, SD = 10.31), t(36) = 4.76, p < .001, d = 0.81.
Mediational analyses indicated that the effect of eye contact on
genuineness disappeared when controlling for positivity,
F(1,34) = 1.73, p > .1. However, the effect of eye contact on
positivity was still significant over and above the differences in
ratings of genuineness, F(1,34) = 16.19, p < .001. This is
consistent with complete mediation, such that the increased
perceived genuineness of smiles that make eye contact was
largely determined by the increased feelings of positive emotion
generated by such smiles.
3.1.5 Discussion
The present study used an experimental manipulation of eye
contact and found that eye contact was related to higher ratings
of both positivity and genuineness, for both Duchenne and non-
Duchenne smiles. In light of past findings on the extent to
which “genuine” smiles produce physiological, bodily, and
experiential signs of positive affect, we suggest that the present
positivity ratings can be one valid indicator of emotional
simulation. In our experiment ratings of positivity fully
mediated the relationship between eye contact and perceived
genuineness. This result suggests that judgments of the
genuineness of smiles may not be based only on perceptual
features of the smile, but also on the affective experience of the
perceiver.
A limitation of these two studies is that only self-reported
indicators of embodied simulation - emotional impact and
ratings of positivity - were used. The aim of Study 3 was to
address this limitation by adding a measure of facial mimicry.
Participants’ EMG activity was recorded while they were
observing smiles in which eye contact was manipulated. If eye
contact is a sufficient trigger of embodied simulation, smiles
should be mimicked more when eye contact is achieved than
when it is not.
4. STUDY 3
4.1 Method
4.1.1 Participants
A total of 27 female undergraduate students from a medium-
size university participated in the experiment. They were
recruited on campus and received 10 € compensation.
4.1.2 Materials
Experimental stimuli were prepared according to the parameters
described in Study 2. This time, participants saw photographs of
6 models (3 female, 3 male) displaying facial expressions
(neutral or smiling) with two levels of eye contact (eye contact
achieved, and averted gaze, i.e., no eye contact), for a total of 24
facial stimuli.
4.1.3 Procedure
Stimuli are available on-line at :
Participants were tested individually. Facial stimuli were
presented on a computer screen (screen size: 17", display
resolution: 1024 x 768, picture size: 760 by 950 pixels) for 8 s.
Each stimulus appeared three times in a random order, with the
constraint that two photographs of the same face never occurred
in succession. The inter-trial interval was 500 ms. Presentations
began with a screen prompting participants to press the space
bar when ready. Participants were told to imagine real
interactions with the models in the photographs.
Activity of the zygomatic major (ZM) muscle was recorded on
the left side of the face, according to the established guidelines
[16] and using bipolar 10 mm Ag/AgCl surface-electrodes
filled with SignaGel (Parker Laboratories Inc.). As a pretext for
the placement of electrodes used to record ZM activity,
participants were told that their brain waves would be recorded
and a dummy electrode was also placed in the center of the
forehead.
The EMG raw signal was measured with the 16 Channel Bio
Amp amplifier (ADInstruments, Inc.), digitized by a 16 bit
analogue-to-digital converter (PowerLab 16/30, ADInstruments,
Inc.), and stored with a sampling rate of 1000 Hz. Data were
filtered with a 10-Hz high-pass filter, a 400-Hz low-pass filter,
and a 50-Hz notch filter.
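The filter chain described above can be sketched as follows. The Butterworth family and the 4th filter order are assumptions (the paper reports only the cutoff frequencies), and the input signal is synthetic.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 1000  # sampling rate in Hz, as reported in the paper

def preprocess_emg(raw):
    """10-400 Hz band-pass plus 50 Hz notch, applied with zero-phase
    (forward-backward) filtering. Filter type and order are assumptions,
    not taken from the paper."""
    b, a = butter(4, [10, 400], btype="bandpass", fs=FS)
    x = filtfilt(b, a, raw)
    b_notch, a_notch = iirnotch(50, Q=30, fs=FS)  # mains interference
    return filtfilt(b_notch, a_notch, x)

# Synthetic 2-s "recording": a 100 Hz EMG-like component plus 50 Hz hum
t = np.arange(2 * FS) / FS
raw = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
clean = preprocess_emg(raw)
```

The notch removes the 50 Hz component while the 100 Hz component, well inside the pass band, survives largely untouched.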
Next, participants saw the 24 photographs once again and rated
the degree to which they perceived the facial expression to be
positive on a scale ranging from 0 (not at all positive) to 100
(very positive), identical to the procedure used in Study 2. At
the end of the session participants completed a questionnaire
that tested their understanding of the task and probed for
suspicion. These post-experiment responses indicated that the
cover story was persuasive.
4.1.4 Results
EMG Activity
The scores of interest were expressed as the difference between
the mean activity in the 500-1500 ms window after stimulus
onset and the mean activity during the last 500 ms before
stimulus onset (baseline). EMG data were subjected to 2 (facial
expression: neutral, smile) x 2 (gaze: direct vs. averted)
analyses of variance (ANOVA), with both expression and gaze
as within subject factors.
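With the 1000 Hz sampling rate, milliseconds map directly onto sample indices, so the change score can be sketched as below (assuming the score is the post-onset window minus the pre-onset baseline, consistent with the "mean change" plotted in Figure 3); the toy trial is invented.

```python
from statistics import mean

FS = 1000  # Hz; at this rate one sample spans 1 ms

def emg_change_score(trial, onset):
    """Mean activity 500-1500 ms after stimulus onset minus the mean
    activity during the last 500 ms before onset (baseline)."""
    baseline = mean(trial[onset - 500:onset])
    response = mean(trial[onset + 500:onset + 1500])
    return response - baseline

# Toy rectified-EMG trial: 500 ms of baseline at 10 (arbitrary units),
# then a sustained rise to 40 after onset. Invented numbers.
trial = [10.0] * 500 + [40.0] * 1500
score = emg_change_score(trial, onset=500)  # -> 30.0
```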
Analysis of the main effects showed a significant main effect of
expression such that ZM activity was higher for smiles than for
neutral expression, F(1,26) = 11.89, p = .002. The interaction
between expression and gaze was not significant F(1,26) =
2.32, p > .1, but post-hoc comparisons showed that smiling
photographs achieving eye contact elicited higher ZM activity
(M = 49.89 mV, SD = 64.78) than photographs with averted
gaze (M = 32.11 mV, SD = 52.50), t(26) = 2.54, p = .017, d =
0.52 (see Figure 3). This difference was not significant for
neutral photographs (M(eye contact) = 6.04 mV, SD = 33.28;
M(averted) = 3.63 mV, SD = 42.46), t(26) = 0.47, p > .5, d = 0.10.
Ratings of Positivity
Positivity scores were subjected to 2 x 2 analyses of variance
with facial expression and gaze as within subject factors. A
significant main effect of facial expression was found, F(1,26)
= 547.47, p < .001. Not surprisingly, smiles (M = 83.43, SD =
9.30) were rated as significantly more positive than neutral
facial expressions (M = 24.61, SD = 12.84), t(26) = 23.40, p
<.001, d = 4.62. Again, the expression-gaze interaction was not
significant, F(1,26) = 0.36, p > .5, but post-hoc comparisons
showed that ratings of positivity were significantly higher for
smiling photographs achieving eye contact (M = 84.93, SD =
8.48) than for smiling photographs with averted gaze (M =
81.93, SD = 11.03), p = .020, d = 0.51. This difference was not
significant for neutral photographs (M(eye contact) = 25.52,
SD = 12.80; M(averted) = 23.70, SD = 13.76), t(26) = 1.38,
p > .1, d = 0.27.
4.1.5 Discussion
This study used a psychophysiological indicator of embodied
simulation to supplement the self-reported measures used in
Studies 1 and 2. We found that smiles provoked greater
zygomatic major activity under conditions of eye contact
compared to averted gaze. These results are in line with the
findings of Bavelas et al. [6], where facial expressions of pain
elicited greater mimicry under conditions of eye contact than
when eye contact was not achieved. Also, Schrammel et al. [35]
showed that smiles of animated virtual characters had an effect
on participants’ zygomatic activity only if the character directly
turned towards the observer (and thus, when eye contact was
achieved). At first pass these results seem contradictory to those
obtained by Mojzisch, Schilbach, Helmert, Pannasch,
Velichkovsky, & Vogeley [27], where participants smiled both
in response to characters who made eye contact and those who
were turned away. Note, however, that in this study mean
zygomatic activity was higher (though not significantly) in
conditions where virtual characters gazed directly at participants,
compared to when characters were turned away. It should also be
mentioned that only males participated in the research of
Mojzisch et al. [27], whereas earlier EMG findings [9] suggest
that females show a more pronounced facial mimicry effect
than males.
In Study 3, facial expression did not interact significantly with
gaze, in contrast to the findings of Schrammel et al.
[35]. This may be due to the type of stimuli used in the
two studies. Note that Schrammel and colleagues used dynamic
sequences presenting virtual characters, while in our study
participants observed photographs of real persons. Moreover,
we specifically manipulated eye contact, while Schrammel et al.
[35] varied the character’s body orientation. The lack of a
significant interaction may also be due to insufficient
statistical power. The impact of eye contact on facial mimicry
and possible moderations should be investigated in further
studies involving more participants.
5. GENERAL DISCUSSION
The present studies were motivated by the prediction [30] that
eye contact is a sufficient trigger of embodied simulation of
smiles. We used two types of stimuli, portraiture paintings and
portrait photography, and three measures of embodied
simulation: emotional impact, smile positivity, and facial EMG.
In the first study, achieved eye contact elicited more emotion
than non-achieved eye contact. The second study showed that
eye contact increased the perceived positivity and genuineness
of smiles. Finally, the third study demonstrated that eye contact
is associated with greater imitation of smiles than averted gaze.
Although our dependent measures are only parts of a complex
phenomenon of embodied simulation, findings from these three
studies support our prediction and highlight the importance of
eye contact in the judgment of smiles. Moreover, these effects
of mutual gaze may extend to other facial and bodily expressions.
Achieved eye contact is a powerful social signal. When
perceiving direct gaze, people allocate their attentional
resources to the interaction and engage in intensive processing
of their interaction partners’ faces [18]. Eye contact has also
been proposed to be a signal of approach motivation. For
example, Adams and Kleck [2, 3] found that eye contact
increased the recognition accuracy and perceived intensity of
so-called approach-oriented emotions (i.e., anger and
happiness). Such findings are neither completely consistent
with, nor contradictory to, the present account. We argue
however that the effects of eye contact extend beyond mere
attention and information, and involve emotional experience
along with imitation of the interaction partner.
We believe that deeper understanding of eye contact can inform
the design of trustworthy and persuasive robots, helping to
solve one of the fundamental questions in building social
robots: when is imitation appropriate [8]? Existing research
indicates that mimicry can act like "social glue", fostering
prosocial attitudes and cooperation [5, 38, 20]. Consequently,
results of the reported three studies suggest that a robot
producing or imitating human facial expressions under
conditions of eye contact should elicit higher emotional
responses than a robot that does not achieve eye contact. This is
indeed possible, but the situation is more complex than it seems:
recent studies have shown not only that people tend to mimic
more likable interaction partners [23], but also that being
imitated by an outgroup member can have negative
consequences and decrease likability [24]. Thus, gaze behavior
should vary as a function of the type of robot, with more
likable robots achieving more eye contact. On the other hand,
referential gaze and head alignment with the object of interest
would be more effective in educational contexts [4].
Another important question is whether the eye contact of strongly
humanlike robots, along with displayed smiles, will elicit
mimicry and positive emotion, or rather feelings of eeriness and
discomfort. These questions deserve experimental
investigation. We believe that the present research can help in
designing robots and agents that “invite" motivated, personal
processing of facial expressions [31, 32]. This embodied
processing of smiles, frowns or other grimaces can make their
impact more visceral and more persuasive.
Figure 3. Mean change of zygomatic activity as a
function of facial expression and gaze.

ACKNOWLEDGMENTS
The authors would like to thank Pierre Chausse and Cyril
Bernard for their competent programming, and Sophie
Monceau, Alexandra Buonanotte, Elena Dujour, and Marie
Dejardin for their work as experimenters.
REFERENCES
[1] Abe, J.A., Beetham, M., and Izard, C. 2002. What do smiles
mean? An analysis in terms of differential emotions theory.
In An empirical reflection on the smile, M.H. Abel, Ed.
Edwin Mellen Press, Lewiston, NY, 83-110.
[2] Adams, R. and Kleck, R. E. 2003. Perceived gaze direction
and the processing of facial displays of emotion. Psychol.
Sci. 14, 644-647.
DOI= 10.1046/j.0956-7976.2003.psci_1479.x
[3] Adams, R. and Kleck, R. E. 2005. The effects of direct and
averted gaze on the perception of facially communicated
emotion. Emotion. 5, 3-11. DOI= 10.1037/1528-3542.5.1.3
[4] Andrist, S., Pejsa, T., Mutlu B., and Gleicher, M. 2012.
Designing Effective Gaze Mechanisms for Virtual Agents.
In Proceedings of the 30th ACM/SigCHI Conference on
Human Factors in Computing Systems (Austin, TX). CHI
’12. ACM, New York, NY, 705-714.
[5] Bailenson, J. N. and Yee, N. 2005. Digital chameleons.
Psychol. Sci. 16, 814-819. DOI=10.1111/j.1467-
[6] Bavelas, J. B., Black, A., Lemery, C. R., and Mullett, J.
1986. "I show how you feel": Motor mimicry as a
communicative act. J. Pers. Soc. Psychol. 50, 322-329.
[7] Brauer, M. 2002. L’analyse des variables indépendantes
continues et catégorielles: alternatives à la dichotomisation
[The analysis of continuous and categorical independent
variables: alternatives to dichotomization]. An. Ps. 102(3),
449-484. DOI= 10.3406/psy.2002.29602
[8] Breazeal, C., and Scassellati, B. 2002. Robots that imitate
humans. Trends Cogn. Sci. 6, 481-487.
[9] Dimberg, U. and Lundqvist, L.-O. 1990. Gender differences
in facial reactions to facial expressions. Biol. Psychol. 30,
[10] Decety, J. and Sommerville, J. A. 2003. Shared
representations between self and other: A social cognitive
neuroscience view. Trends Cogn. Sci. 7, 527-533.
[11] Ekman, P. 1994. Strong evidence for universals in facial
expression: A reply to Russell's mistaken critique. Psychol.
Bull. 115, 268-287. DOI=10.1037/0033-2909.115.2.268
[12] Ekman, P. and Davidson, R. 1993. Voluntary smiling
changes regional brain activity. Psychol. Sci. 4, 342-345.
[13] Ekman, P. and Friesen, W. V. 1982. Felt, false and
miserable smiles. J. Nonverbal Behav. 6, 238-252.
[14] Farroni T., Csibra G., Simion, F., and Johnson, M. H. 2002.
Eye contact detection in humans from birth. P. Natl. Acad.
Sci. USA. 99, 9602-9605.
[15] Frank, M. and Stennett, J. 2001. The forced-choice
paradigm and the perception of facial expression of
emotion. J. Pers. Soc. Psychol. 80, 75-85.
[16] Fridlund, A. J. and Cacioppo, J. T. 1986. Guidelines for
human electromyographic research. Psychophysiology. 23,
567-89. DOI=10.1111/j.1469-8986.1986.tb00676.x
[17] Gallese, V. 2003. The roots of empathy: The shared
manifold hypothesis and the neural basis of
intersubjectivity. Psychopathology. 36, 171-180.
[18] George, N. and Conty, L. 2008. Facing the gaze of others.
Neurophysiol. Clin. 38, 197-207.
[19] Hains, S. M. J. and Muir, D. W. 1996. Infant sensitivity to
adult eye direction. Child Dev. 67, 1940-1951.
[20] Heyes, C. in press. What can imitation do for cooperation?
In Signalling, Commitment and Emotion, B. Calcott, R.
Joyce & K. Sterelny, Eds. MIT Press, Cambridge, MA.
[21] Iizuka, Y. 1992. Eye contact in dating couples and
unacquainted couples. Percept. Motor Skills. 75, 457-461.
[22] Izard, C. 1971. The face of emotion. Appleton-Century-
Crofts, New York, NY.
[23] Likowski, K.U., Mühlberger, A., Seibt, B., Pauli, P., and
Weyers, P. 2008. Modulation of facial mimicry by attitudes.
J. Exp. Soc. Psychol. 44, 1065-1072.
[24] Likowski, K.U., Schubert, T.W., Fleischmann, B.,
Landgraf, J., and Volk, A. Submitted. Positive effects of
mimicry are limited to the ingroup.
[25] Lohaus, A., Keller, H., and Voelker, S. 2001. Relationships
between eye contact, maternal sensitivity, and infant crying.
Int. J. Behav. Dev. 25, 542-548.
[26] Maringer, M., Krumhuber, E. G., Fischer, A. H., and
Niedenthal, P. M. 2011. Beyond smile dynamics: mimicry
and beliefs in judgments of smiles. Emotion. 11, 181-187.
[27] Mojzisch, A., Schilbach, L., Helmert, J., Pannasch, S.,
Velichkovsky, B. M., and Vogeley, K. 2006. The effects of
self-involvement on attention, arousal, and facial expression
during social interaction with virtual others: A
psychophysiological study. Soc. Neurosci. 1, 184-195.
DOI= 10.1080/17470910600985621
[28] Neal, D. and Chartrand, T. 2011. Embodied Emotion
Perception: Amplifying and Dampening Facial Feedback
Modulates Emotion Perception Accuracy. Soc. Psychol.
Person. Sci. DOI=10.1177/1948550611406138
[29] Niedenthal, P.M. 2007. Embodying Emotion. Science. 316,
1002-1005. DOI=10.1126/science.1136930
[30] Niedenthal, P. M., Mermillod, M., Maringer, M., and Hess,
U. 2010. The Simulation of Smiles (SIMS) Model:
Embodied simulation and the meaning of facial expression.
Behav. Brain Sci. 33, 417-480.
[31] Pitcher, D., Garrido, L., Walsh, V., and Duchaine, B. 2008.
TMS disrupts the perception and embodiment of facial
expressions. J. Neurosci. 28, 8929-8933.
[32] Pourtois, G., Sander, D., Andres, M., Grandjean, D.,
Reveret, L., Olivier, E., and Vuilleumier, P. 2004.
Dissociable roles of the human somatosensory and superior
temporal cortices for processing social face signals. Eur. J.
Neurosci. 20, 3507-3515.
[33] Raudenbush, S. W., Bryk, A., Cheong, Y. F., and Congdon,
R. 2004. HLM 6: Hierarchical linear and nonlinear
modeling. Scientific Software International, Chicago.
[34] Russo, N. 1975. Eye contact, interpersonal distance, and the
equilibrium theory. J. Pers. Soc. Psychol. 31, 497-502.
[35] Schrammel, F., Pannasch, S., Graupner, S.-T., Mojzisch, A.,
and Velichkovsky, B. M. 2009. Virtual friend or threat? The
effects of facial expression and gaze interaction on
psychophysiological responses and emotional experience.
Psychophysiology. 46, 922-931.
[36] Soussignan, R. 2002. Duchenne smile, emotional
experience, and autonomic reactivity: A test of the facial
feedback hypothesis. Emotion. 2, 52-74.
DOI= 10.1037/1528-3542.2.1.52
[37] Stel, M. and van Knippenberg, A. 2008. The role of facial
mimicry in the recognition of affect. Psychol. Sci. 19, 984-
985. DOI=10.1111/j.1467-9280.2008.02188.x
[38] van Baaren, R., Janssen, L., Chartrand, T.L., and
Dijksterhuis, A. 2009. Where is the love? The social aspects
of mimicry. Phil. Trans. R. Soc. B. 364, 2381-2389.
[39] Wang, Y., Newport, R., and Hamilton, A. F. 2011. Eye
contact enhances mimicry of intransitive hand movements.
Biol. Lett. 7, 7-10. DOI=10.1098/rsbl.2010.0279