Distinct facial expressions represent pain and pleasure across cultures

Chaona Chen (a,b), Carlos Crivelli (c), Oliver G. B. Garrod (b), Philippe G. Schyns (a,b), José-Miguel Fernández-Dols (d), and Rachael E. Jack (a,b,1)

(a) School of Psychology, College of Science and Engineering, University of Glasgow, Glasgow G12 8QB, Scotland, United Kingdom; (b) Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, Scotland, United Kingdom; (c) Institute for Psychological Science, School of Applied Social Sciences, De Montfort University, Leicester LE1 9BH, United Kingdom; and (d) Departamento de Psicología Social y Metodología, Facultad de Psicología, Universidad Autónoma de Madrid, 28049 Madrid, Spain
Edited by Susan T. Fiske, Princeton University, Princeton, NJ, and approved September 4, 2018 (received for review May 9, 2018)
Real-world studies show that the facial expressions produced during pain and orgasm, two different and intense affective experiences, are virtually indistinguishable. However, this finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for social interaction. Consequently, debate continues as to whether the facial expressions of these extreme positive and negative affective states serve a communicative function. Here, we address this debate from a novel angle by modeling the mental representations of dynamic facial expressions of pain and orgasm in 40 observers in each of two cultures (Western, East Asian) using a data-driven method. Using a complementary approach of machine learning, an information-theoretic analysis, and a human perceptual discrimination task, we show that mental representations of pain and orgasm are physically and perceptually distinct in each culture. Cross-cultural comparisons also revealed that pain is represented by similar face movements across cultures, whereas orgasm showed distinct cultural accents. Together, our data show that mental representations of the facial expressions of pain and orgasm are distinct, which questions their nondiagnosticity and instead suggests they could be used for communicative purposes. Our results also highlight the potential role of cultural and perceptual factors in shaping the mental representation of these facial expressions. We discuss new research directions to further explore their relationship to the production of facial expressions.
pain | orgasm | facial expressions | culture | data-driven methods
Studies of real-world scenarios show that people experiencing intense negative or positive affect (for example, pain or orgasm) spontaneously produce facial expressions that are very similar (1–4). This finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for human social communication and interaction, including the socially relevant states of extreme positive and negative affect (5–7). Consequently, the extent to which such intense states can be accurately inferred from facial expressions remains a central debate in the cognitive sciences that involves input from psychological, ethological, pragmatic, and information-theoretic approaches (1, 3, 8–13).

Here, we address this debate from a novel angle. Using a data-driven reverse-correlation approach, we model the dynamic mental representations of facial expressions of intense positive and negative affect (physical pain and sexual pleasure) in individuals from two cultures. We take this approach for two reasons. First, mental representations are built from encounters with the external environment, either directly or vicariously (e.g., learning cultural concepts), and are thus used to predict and interpret the environment (14). Understanding the content of these representations can therefore inform what an individual might have learned from their real-world interactions. Second, data-driven approaches enable a broader range of facial expressions to be tested as representative of these intense affects because they are sampled agnostically from a less constrained array than traditional theory-driven approaches (15) and without the inevitable complexities of accurately measuring facial expressions in the wild (e.g., see ref. 16).

To examine whether mental representations of facial expressions of physical pain and sexual pleasure (i.e., orgasm) are distinguishable or not, we modeled these representations in individuals from two cultures (Western and East Asian; Methods, Observers). For brevity, we now refer to these mental representations as facial expression models of "pain" and "orgasm." We then analyzed how distinguishable these facial expression models are within and across cultures using a complementary approach of machine learning, a human perceptual discrimination task, and an information-theoretic analysis. We also compared the facial expression models across cultures to identify any cross-cultural similarities and differences in the face movements that represent these extreme affective states.

To derive these facial expression models, we used a data-driven technique based on reverse correlation (17) that generates face movements agnostically, that is, with minimal assumptions about which face movements represent which messages to whom (15, 18).
Significance

Humans often use facial expressions to communicate social messages. However, observational studies report that people experiencing pain or orgasm produce facial expressions that are indistinguishable, which questions their role as an effective tool for communication. Here, we investigate this counterintuitive finding using a new data-driven approach to model the mental representations of facial expressions of pain and orgasm in individuals from two different cultures. Using complementary analyses, we show that representations of pain and orgasm are distinct in each culture. We also show that pain is represented with similar face movements across cultures, whereas orgasm shows differences. Our findings therefore inform understanding of the possible communicative role of facial expressions of pain and orgasm, and how culture could shape their representation.
Author contributions: C. Chen, C. Crivelli, O.G.B.G., P.G.S., J.-M.F.-D., and R.E.J. designed
research; C. Chen and C. Crivelli performed research; O.G.B.G. and P.G.S. contributed new
reagents/analytic tools; C. Chen and R.E.J. analyzed data; and C. Chen, C. Crivelli, P.G.S.,
J.-M.F.-D., and R.E.J. wrote the paper.
The authors declare no conflict of interest.
This article is a PNAS Direct Submission.
This open access article is distributed under Creative Commons Attribution License 4.0
(CC BY).
Data deposition: Action-unit patterns of each facial expression model and corresponding d-prime values have been deposited on Open Science Framework (available at https://osf.io/um6s5/).

1 To whom correspondence should be addressed. Email: rachael.jack@glasgow.ac.uk.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1807862115/-/DCSupplemental.
Fig. 1 illustrates this procedure using an example trial. On each trial, a dynamic face movement generator (18) randomly selected a combination of individual face movements called action units [AUs (19)] from a core set of 42 AUs (minimum 1, maximum 4, and median 3 AUs selected on each trial). In the example trial of Fig. 1, three AUs are randomly selected: brow lowerer (AU4) color-coded in blue, nose wrinkler (AU9) color-coded in green, and lip stretcher (AU20) color-coded in red. A random movement is then assigned to each AU separately using seven randomly selected values, one for each temporal parameter of onset latency, acceleration, peak amplitude, peak latency, sustainment, deceleration, and offset latency (see labels illustrating the blue curve). The randomly activated AUs are then combined and displayed on a photorealistic face identity to produce a random facial animation (duration 2.25 s). An example is shown in Fig. 1 using a sequence of four images (Movie S1 shows the facial animation generation procedure represented in Fig. 1). Observers in each culture viewed the resulting facial animation, and if the face movements matched their mental representation of a facial expression of "pain" or "orgasm," they categorized it accordingly (here, pain) and rated its intensity on a five-point scale from "very weak" to "very strong" (here, "medium"). Otherwise, if the facial animation did not match the observer's mental representation of pain or of orgasm, they selected "other." Each observer completed 3,600 such trials, resulting in a set of facial animations for pain and for orgasm. We can then build a statistical relationship between the face movements on each trial and the observer's responses. This analysis thus produces a model of the face movements that represent pain and orgasm in the mind of each observer (see SI Appendix, Modeling Dynamic Mental Representations of Facial Expressions of Pain and Orgasm for full details; see Movie S2 for an illustration).
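To make the generative procedure concrete, the following Python sketch shows one way such a random trial could be sampled. The parameter ranges, the curve shape, and all names here are illustrative assumptions, not the generator's actual implementation; the real generator (18) also enforces biological feasibility constraints that we omit.

```python
import numpy as np

rng = np.random.default_rng()
N_AUS, DURATION_S, FPS = 42, 2.25, 30  # 42 AUs, 2.25-s animation

def sample_random_animation():
    """Sketch of one stimulus trial: pick 1-4 of 42 AUs and give each a
    random temporal activation curve defined by seven parameters."""
    aus = rng.choice(N_AUS, size=rng.integers(1, 5), replace=False)
    t = np.linspace(0, DURATION_S, int(DURATION_S * FPS))
    curves = {}
    for au in aus:
        onset = rng.uniform(0.0, 0.5)                 # onset latency (s)
        peak_lat = rng.uniform(onset + 0.3, 1.5)      # peak latency (s)
        sustain = rng.uniform(0.0, 0.3)               # sustainment (s)
        offset = rng.uniform(peak_lat + sustain + 0.2, DURATION_S)  # offset latency (s)
        amp = rng.uniform(0.2, 1.0)                   # peak amplitude
        accel, decel = rng.uniform(0.5, 2.0, size=2)  # acceleration/deceleration
        rise = np.clip((t - onset) / (peak_lat - onset), 0, 1) ** accel
        fall = np.clip((offset - t) / (offset - peak_lat - sustain), 0, 1) ** decel
        curves[au] = amp * np.minimum(rise, fall)     # activation over time
    return curves  # AU index -> activation time course

trial_curves = sample_random_animation()
```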
We used this technique to model the dynamic mental representations of facial expressions of pain and orgasm in each of 40 observers per culture (see Movie S3 for examples of these models in each culture). To objectively examine the distinctiveness of these facial expression models, we used machine learning (a Bayesian classifier) and an information-theoretic analysis using the measurement of mutual information. We also asked new sets of cultural observers to discriminate each facial expression model in a perceptual discrimination task (see Methods, Physical and Perceptual Distinctiveness of the Facial Expression Models of Pain and Orgasm for full details). Our complementary analyses show that, in each culture, the facial expression models of pain and orgasm are both physically and perceptually distinct. Cross-cultural comparisons also show differences in the facial expression models of orgasm, including wide-open eyes among Westerners and smiling among East Asians. In contrast, facial expression models of pain are similar across cultures. We discuss the implications of our data-driven findings of distinct mental representations of the facial expressions of pain and orgasm with respect to the similarity of their production.
Results

Using the above data-driven method, we modeled a total of 160 dynamic mental representations of facial expressions of pain and orgasm (40 observers × 2 cultures × 2 affective states). We henceforth refer to these as "models." Each model is represented as a 1 × 42-dimensional binary vector detailing the AUs that are significantly associated with the perception of each affective state (i.e., pain or orgasm) plus seven values detailing the temporal dynamics of each significant AU (see Methods, Modeling Dynamic Mental Representations of Facial Expressions of Pain and Orgasm for full details). Fig. 2 shows the distribution of significant AUs across all 40 individual observer models for pain and orgasm in each culture separately. Each distribution is represented as a color-coded matrix (Fig. 2A) with corresponding face maps (Fig. 2B). We also compared the facial expression models with the AUs reported in studies of the production of facial expressions during physical pain (20–30) or orgasm (1–3), which showed that they are generally very similar (see SI Appendix, Comparison of the Mental Representations and Productions of Facial Expressions of Pain and Orgasm and Fig. S1 for full details). We then identified the AUs that appear most frequently across the 40 models of pain and orgasm in each culture separately using a Monte Carlo simulation method (see SI Appendix, Highly Frequent Action Units for full details). Fig. 2A indicates these highly frequent AUs with a black dot. A casual inspection of Fig. 2 suggests that, in each culture, the facial expression models of pain and orgasm are distinct. To objectively test the distinctiveness of the facial expression models in each culture, we built and tested a Bayesian model of the classification task. We also tested whether humans could perceptually discriminate these facial expression models by asking a new set of observers in each culture to perform a perceptual discrimination task.
Fig. 1. Modeling dynamic mental representations of facial expressions of pain and orgasm. Stimulus: On each experimental trial, a dynamic face movement generator (18) randomly selected a biologically feasible combination of individual facial movements called action units [AUs (19)] from a core set of 42 AUs (here, brow lowerer, AU4, color-coded in blue; nose wrinkler, AU9, in green; and lip stretcher, AU20, in red). A random movement is then assigned to each AU individually by selecting random values for each of seven temporal parameters (i.e., onset latency, acceleration, peak amplitude, peak latency, sustainment, deceleration, and offset latency; see labels illustrating the blue curve). The randomly activated AUs are then combined and displayed on a photorealistic face identity to produce a random facial animation, shown here by the sequence of four face images. The color-coded vector below shows the three AUs randomly selected on this example trial. Mental representation: The observer viewed the facial animation and, if the dynamic face movements correlated with their mental representation (i.e., prior knowledge) of a facial expression of pain or orgasm, they categorized it accordingly (here, pain) and rated its intensity on a five-point scale from very weak to very strong (here, medium). Otherwise, the observer selected other. Each observer (40 per culture, Western and East Asian, all heterosexual) categorized 3,600 such facial animations, each displayed on same-race, sex-opposite faces and presented in random order across the experiment.
Fig. 2. Distribution of action units across facial expression models of pain and orgasm and their convergence across observers within each culture. (A) Distribution of AUs across facial expression models of pain and orgasm. For each culture, Western (Left) and East Asian (Right), and pain and orgasm separately, the color-coded matrix shows the number of individual observer facial expression models (maximum 40) with each AU (see labels on the left). Warmer colors indicate more observers; cooler colors indicate fewer observers (see color bar on the right). A black dot indicates the AUs that are highly frequent (one-tailed, P < 0.05) across all individual observer models as determined using a Monte Carlo simulation method (SI Appendix, Highly Frequent Action Units). (B) Within-culture convergence of facial expression models of pain and orgasm. Color-coded face maps show for each culture and affective state the number of individual observer facial expression models with each AU using the same color coding as in A.

Bayesian Classification of Facial Expression Models of Pain and Orgasm in Each Culture. To objectively test the distinctiveness of the facial expression models of pain and orgasm in each culture, we built a Bayesian model of the discrimination task using a split-half method (SI Appendix, Bayesian Classification of Facial Expression Models). This analysis computes the average posterior probability that a pain or orgasm facial expression model (input stimulus) is classified as either pain or orgasm (output response). Fig. 3A shows the results for each culture, where dark red shows high posterior probability of classification for pain and orgasm, and blue indicates low posterior probability (see color bar to the right). Exact values are shown in each square. The diagonal squares show correct classifications; off-diagonal squares show incorrect classifications. As shown in Fig. 3A by the red-colored diagonal squares in each culture, the Bayesian model consistently discriminated the facial expression models of pain and orgasm with very little confusion (average classification performance: Western: pain M = 0.97, SD = 0.004; orgasm M = 0.98, SD = 0.002; East Asian: pain M = 0.99, SD = 0.001; orgasm M = 0.96, SD = 0.005). These results show that in each culture, mental representations of the facial expressions of pain and orgasm are consistently different.
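As an illustration of this kind of analysis, the Python sketch below classifies binary AU vectors with a split-half naive Bayes model. The Bernoulli likelihood, Laplace smoothing, and toy data are our assumptions; the paper's exact formulation is given in SI Appendix, Bayesian Classification of Facial Expression Models.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_bernoulli_nb(X, y, alpha=1.0):
    """Per-class AU occurrence probabilities with Laplace smoothing."""
    classes = np.unique(y)
    theta = np.array([(X[y == c].sum(0) + alpha) / ((y == c).sum() + 2 * alpha)
                      for c in classes])
    return theta

def posterior(x, theta):
    """Posterior over classes for one binary AU vector (flat prior)."""
    loglik = (x * np.log(theta) + (1 - x) * np.log1p(-theta)).sum(1)
    p = np.exp(loglik - loglik.max())
    return p / p.sum()

# Toy stand-ins for the 40 binary, 42-dimensional models per affective state.
X = rng.integers(0, 2, size=(80, 42))
y = np.repeat([0, 1], 40)            # 0 = pain, 1 = orgasm

# Split-half: fit on one random half of the models, test on the other.
idx = rng.permutation(80)
train, test = idx[:40], idx[40:]
theta = fit_bernoulli_nb(X[train], y[train])
avg_posterior = np.mean([posterior(x, theta) for x in X[test]], axis=0)
```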
Perceptual Discrimination of the Facial Expression Models of Pain and Orgasm in Each Culture. Having demonstrated that mental representations of facial expressions of pain and orgasm are objectively distinct in each culture, we now ask whether they are perceptually discriminable, that is, whether they convey pain and orgasm to other cultural observers. To test this, we asked a new set of observers from each culture to discriminate each facial expression model in a perceptual discrimination task (see Methods, Physical and Perceptual Distinctiveness of the Facial Expression Models of Pain and Orgasm for full details). Following the experiment, we computed the d-prime value (31, 32), a measure of perceptual discrimination, of each individual facial expression model in each culture. Specifically, d-prime shows the extent to which a specific signal can be accurately distinguished from others by considering both the hit rates (i.e., accurately reporting the presence of a signal when it is present) and the false alarm rates (i.e., erroneously reporting the presence of a signal when another is present). This approach therefore safeguards against artificially high accuracy rates that can occur based on random responding (33–35). Fig. 3B shows the results where color-coded shapes represent individual observer facial expression models plotted according to their hit rate (y axis) and false alarm rate (x axis). Magenta points represent Western models, and green points represent East Asian models; triangles represent pain, and squares represent orgasm (see legend on bottom right). Dashed lines outline the four response types based on high/low hit and false alarm rates (see labels in each quadrant). For example, a high hit and low false alarm rate (top left-hand quadrant) indicates strong perceptual discrimination of the facial expression model (e.g., many "yes" responses when correctly matched, and few "yes" responses when incorrectly matched). In contrast, a high hit rate and a high false alarm rate (top right-hand quadrant) indicates that the facial expression model is ambiguous (e.g., many "yes" responses when the facial expression model appears with either pain or orgasm). The diagonal dashed line indicates equal hit and false alarm rates. As shown by the distribution of the data points primarily in the top left-hand quadrant, the majority of the facial expression models of pain and orgasm in each culture were discriminated well by human observers with virtually no instances of ambiguity or confusion. SI Appendix, Fig. S2 shows all individual observer pain and orgasm facial expression models displayed as face maps and ranked by d-prime for each culture separately.
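For reference, the sketch below computes d-prime in the standard signal-detection form, d′ = z(hit rate) − z(false-alarm rate) (31, 32). The log-linear correction for rates of exactly 0 or 1 is a common convention, not necessarily the one used in the paper, and the example counts are invented.

```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so that rates of exactly 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g., one model pooled over observers: 180/200 hits, 20/200 false alarms
print(dprime(180, 20, 20, 180))  # ~2.54
```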
Fig. 3. Distinctiveness of the facial expression models of pain and orgasm in each culture. (A) Bayesian classification of facial expression models of pain and orgasm. Color-coded matrices show for each culture the average posterior probability that a facial expression model of pain or orgasm (input stimulus) is classified as pain or orgasm (output response). Red shows high posterior probability of classification; blue indicates low posterior probability (see color bar to the right). Exact values are shown in each square. Diagonal squares show correct classifications; off-diagonal squares show incorrect classifications. The red-colored diagonal squares show that in each culture the facial expression models of pain and orgasm are discriminated with high accuracy. (B) Perceptual discrimination of facial expression models of pain and orgasm. Each color-coded shape represents an individual observer facial expression model (n = 40 per culture) plotted according to its hit rate and its false alarm rate (i.e., d-prime value, which is a measure of perceptual discrimination). Magenta points represent Western models, and green represents East Asian models; triangles represent pain, and squares represent orgasm (see legend in the bottom right). The four quadrants (delimited by dashed lines) indicate the different response types based on hit and false alarm rates (see labels in each quadrant). The diagonal dashed line represents an equal rate of hits and false alarms. The distribution of the data points in the upper left-hand quadrant shows that the vast majority of facial expression models from both cultures are discriminated well with virtually no instances of ambiguity or confusion. (C) Comparison of facial expression models of pain and orgasm. To objectively identify which face movements (i.e., AUs) are specific to or common across pain and orgasm in each culture, we used mutual information (Methods, Comparison of the Facial Expression Models of Pain and Orgasm). For each culture, blue coloring on the face maps shows the AUs that are specific to pain or to orgasm (significantly high MI, P < 0.05), and red coloring shows those that are common across pain and orgasm (low MI; see color bar to the left). Homogeneous blue coloring shows that in each culture the facial expression models of pain and orgasm have no commonalities; AUs specific to pain and to orgasm in each culture are listed below each face map.
Revealing the Distinctive Face Movements in the Mental Representations of Pain and Orgasm. The Bayesian classifier demonstrated the objective distinctiveness of the facial expression models of pain and orgasm in each culture. The perceptual discrimination task further demonstrated that observers in each culture could use these differences in the face movements to distinguish pain and orgasm. To precisely identify which face movements (i.e., AUs) are distinct to the facial expression models of pain and orgasm (or are common to them), we measured the relationship of each AU to pain and to orgasm using an information-theoretic analysis based on mutual information (MI; see Methods, Comparison of the Facial Expression Models of Pain and Orgasm for full details). Specifically, MI measures the strength of the relationship between an AU (e.g., brow lowerer) and category labels (here, pain and orgasm). High MI values indicate a strong relationship (i.e., strong effect size), that is, the AU is more commonly associated with pain than with orgasm, or vice versa. Low MI indicates a weak relationship, that is, the AU is associated with both pain and orgasm with similar frequency. We applied this analysis only to the highly frequent AUs (SI Appendix, Highly Frequent Action Units) to ensure that the specific and common face movements identified are representative of the set of 40 models of pain and orgasm in each culture. Therefore, AUs with high MI values and that are highly frequent in either pain or orgasm are considered to be specific AUs. Common AUs have low MI values and are highly frequent in both pain and orgasm. We established statistical significance of the MI values using a Monte Carlo method (see Methods, Comparison of the Facial Expression Models of Pain and Orgasm for full details). Fig. 3C shows the results. For each culture, color-coded face maps show the AUs that are specific to pain and to orgasm (blue, significantly high MI, P < 0.05) and those that are common (red, low MI; see color bar to the left). As shown by the homogeneous blue coloring, the facial expression models of pain and orgasm have no common AUs. The AUs specific to pain and to orgasm in each culture are listed below each face.
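The sketch below illustrates this measure for one AU, computing MI in bits between a binary AU-presence vector (across the 80 models) and the pain/orgasm labels. The counts in the example are invented purely to show that an AU concentrated in one category yields high MI.

```python
import numpy as np

def mutual_information(au, label):
    """MI (bits) between two binary vectors:
    MI = sum over a, c of p(a, c) * log2(p(a, c) / (p(a) * p(c)))."""
    mi = 0.0
    for a in (0, 1):
        for c in (0, 1):
            p_ac = np.mean((au == a) & (label == c))
            if p_ac > 0:
                mi += p_ac * np.log2(p_ac / (np.mean(au == a) * np.mean(label == c)))
    return mi

# An AU present in 35/40 pain models but only 4/40 orgasm models -> high MI.
au = np.array([1] * 35 + [0] * 5 + [1] * 4 + [0] * 36)
label = np.array([0] * 40 + [1] * 40)   # 0 = pain, 1 = orgasm
print(mutual_information(au, label))    # ~0.49 bits
```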
Cross-Cultural Comparison of the Facial Expression Models of Pain and Orgasm. Using a combination of Bayesian classification, human perceptual discrimination, and an information-theoretic analysis, we showed that the facial expression models of pain and orgasm are distinct in each culture. One outstanding question is whether the different representations of pain and orgasm are the same across cultures, because identifying cultural similarities and differences in social communication is critical to understanding the diversity and fundamental basis of human interaction (36–40). To examine this, we used the same information-theoretic approach described above (i.e., MI analysis applied to highly frequent AUs) to measure the relationship between AUs and culture (see Methods, Cross-Cultural Comparison of the Facial Expression Models of Pain and Orgasm for full details). Fig. 4 shows the results. AUs that are common across cultures are indicated by red coloring (low MI values); AUs that are specific to one culture are indicated by blue coloring (high MI values; see color bar to the left). The results show that across cultures, the facial expression models of pain comprise highly similar face movements including brow lowerer (AU4), cheek raiser (AU6), nose wrinkler (AU9), upper lip raiser (AU10), and lip stretcher (AU20). In contrast, the facial expression models of orgasm comprise different face movements across cultures: Western models include upper lid raiser (AU5), jaw drop (AU26), and mouth stretch (AU27), whereas East Asian models include lip corner puller (AU12). As shown by the blue coloring, these culture-specific face movements are also combined with cross-cultural AUs including brow raiser (AUs 1 and 2) and eyes closed (AU43).
Fig. 4. Cross-cultural comparison of facial expression models of pain and orgasm. To identify any cross-cultural and culture-specific action units, we used MI to measure the relationship between AUs and culture (Methods, Cross-Cultural Comparison of the Facial Expression Models of Pain and Orgasm). Each color-coded face map shows the AUs that are common across cultures (red, low MI) or specific to one culture (blue, high MI, P < 0.05; see color bar to the left). As shown by the red coloring, pain shows several cross-cultural face movements (see AU labels below) and no culture-specific face movements. Orgasm also showed cross-cultural face movements (e.g., brow raiser, AUs 1 and 2) with culture-specific accents such as jaw drop (AU26) and mouth stretch (AU27) among Westerners, and lip corner puller (AU12) among East Asians.
Comparison of the Mental Representations and Productions of Facial Expressions of Pain and Orgasm. Here, we compare our facial expression models with the face movements reported in real-world production studies. To benchmark, we compiled 11 studies on the production of facial expressions during pain (20–30) and 3 studies for orgasm (1–3). We report results only using Western data, because sufficient production data are not yet available for East Asians (but see also refs. 1 and 41). For each production study, we extracted all AUs reported as produced during pain or orgasm, thereby producing 11 pain AU patterns and 3 orgasm AU patterns (see SI Appendix, Comparison of the Mental Representations and Productions of Facial Expressions of Pain and Orgasm and Fig. S1 for methodology and full details).
Classification of mental representations of facial expressions by real-world productions. First, we tested whether the facial expression models are accurately classified as pain or orgasm based on knowledge of their real-world production. Specifically, we measured the similarity between the facial expression models and their real-world productions using the Hamming distance, which measures the number of dimensional values that differ between two (here, binary) vectors (42). For all facial expression models (n = 80 total), we classified each as pain or orgasm based on its highest similarity (i.e., lowest Hamming distance) to the AU patterns reported in all production studies (n = 14 AU patterns total). Fig. 5A shows the results. The color-coded confusion matrix shows the number of facial expression models (y axis) classified as pain or orgasm (x axis). Diagonal squares show correct classifications; off-diagonal squares show incorrect classifications. Warmer colors indicate a higher number of facial expression models; cooler colors indicate lower numbers (see color bar to the right). Exact numbers are shown in each cell. As shown by the diagonal squares, the vast majority of both pain and orgasm facial expression models are correctly classified based on knowledge of their real-world production [χ²(1, n = 80) = 34.58, P < 0.01].
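The sketch below illustrates this nearest-neighbor rule. The random vectors merely stand in for the real 42-dimensional AU patterns; only the 11/3 label split mirrors the number of pain and orgasm production studies.

```python
import numpy as np

def classify_by_hamming(model, patterns, pattern_labels):
    """Assign a binary AU model the label of its closest production
    pattern, where distance = number of differing AU entries."""
    distances = [(model != p).sum() for p in patterns]
    return pattern_labels[int(np.argmin(distances))]

rng = np.random.default_rng(1)
patterns = rng.integers(0, 2, size=(14, 42))     # 14 production AU patterns
pattern_labels = np.array([0] * 11 + [1] * 3)    # 11 pain, 3 orgasm studies
model = rng.integers(0, 2, size=42)              # one mental-representation model
print(classify_by_hamming(model, patterns, pattern_labels))  # 0 = pain, 1 = orgasm
```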
Similarities and differences in face movements between mental representations and produced facial expressions of pain and orgasm. Next, we examined the similarities and differences in face movements between the mental representations and productions of facial expressions of pain and orgasm. We included all AUs reported in the majority of production studies or mental representations (SI Appendix, Fig. S1A). Fig. 5B presents these AUs as a four-set Venn diagram with pain (cyan) and orgasm (magenta) represented on the horizontal axis and models and production studies presented on the vertical axis. Action units shared between pain and orgasm are shown in the textured areas, whereas AUs shared between the models and productions are shown in the shaded areas. Action units that are specific to one set (e.g., produced facial expressions of pain) are shown in the untextured/unshaded areas (see key to the right). As shown by the top textured area, no AUs are shared between models of pain and orgasm, as reported earlier. In contrast, several AUs (4, 6, 10, 43, and 25 to 27) are shared between real-world productions of pain and orgasm, which confirms their reported similarity (see bottom textured area). However, a majority of production studies also report that AUs 7, 17, and 45 are displayed during pain but not orgasm (see bottom left, unshaded). Together, these data suggest that produced facial expressions of pain and orgasm have distinctive face movements but are less distinctive than mental representations.

Next, as shown by the shaded blue area, several AUs converge between the models and production of pain, that is, AUs 4, 6, 9, 10, 12, and 20, with a few AUs that are specific to the models (AU11) and productions (AUs 7, 17, 45). Although there are fewer orgasm production data (n = 3) to build a reliable comparison with the models, AUs 43 and 25 to 27 are convergent, whereas AUs 2 and 5 are only present in the models. We return to these similarities and differences between models and productions in Discussion.
Fig. 5. Comparison of mental representations and productions of facial expressions of pain and orgasm. (A) Classification of facial expression models of pain and orgasm. The color-coded matrix shows the number of facial expression models (n = 40 total) classified as pain or orgasm based on their highest similarity to real-world productions of facial expressions of pain and orgasm (n = 14 produced facial expressions in total). Diagonal squares show correct classifications; off-diagonal squares show incorrect classifications. Warmer colors indicate higher numbers of models; cooler colors indicate lower numbers (see color bar to the right). Exact numbers are shown in each cell. As shown by the diagonal squares, the vast majority of pain and orgasm facial expression models are correctly classified based on the face movements reported in real-world production studies. (B) Comparison of facial expression models of pain and orgasm with real-world productions. The four-set Venn diagram shows the AUs that converge or diverge across the four sets of AUs reported in facial expression models and produced facial expressions of pain (cyan) and orgasm (magenta). Textured areas show the AUs that are shared between pain and orgasm; shaded areas show the AUs that are shared between models and productions of facial expressions; untextured/unshaded areas show the AUs that are specific to one of the sets (see key to the right).
Discussion

Here, we examined whether facial expressions of the extreme positive and negative affective states of physical pain and sexual pleasure form distinct representations in the minds of observers in two cultures. Using a data-driven technique, we mathematically modeled the mental representations of dynamic facial expressions of physical pain and orgasm in individuals from Western and East Asian cultures. We then used a complementary approach of machine learning, a human perceptual discrimination task, and an information-theoretic analysis to show that, in each culture, mental representations of the facial expressions of pain and orgasm are distinct. Furthermore, a cross-cultural analysis showed that mental representations of pain share similar face movements across cultures including brow lowering, cheek raising, nose wrinkling, and mouth stretching. In contrast, mental representations of orgasm comprised culture-specific face movements (Westerners included wide-open eyes and a vertically stretched mouth, whereas East Asians included smiling), which were combined with cross-cultural face movements such as brow raising and closed eyes. Together, these data show that mental representations of the extreme positive and negative affective states of physical pain and orgasm are distinct in the two cultures. We now discuss the implications of these results in relation to evidence from real-world production studies that show that people experiencing physical pain and orgasm produce similar facial expressions.
Implications of Distinct Mental Representations of Facial Expressions of Pain and Orgasm. Our results from modeling the mental representations of facial expressions of pain and orgasm show that they are distinct. Specifically, we show in both cultures that mental representations of pain and orgasm comprise opposing face movements: whereas pain is characterized by those that contract the face inward (e.g., brow lowering, nose wrinkling, and cheek raising), orgasm is represented by face movements that expand the face outward (e.g., brow raising in both cultures; mouth opening and eyelid raising among Westerners). Such contrasting face movements are therefore prime candidates for communicating these different affective states to others (43) and for influencing their behavior, for example, eliciting help in the case of pain or indicating completion of a sexual act in orgasm. Disentangling which face movements serve social communication and which are primarily a physiological response requires further understanding of how social contexts (e.g., dyads) influence facial behaviors in different cultures. In either case, our data show that distinct facial expressions can be used to convey the extreme affective states of pain and orgasm in both cultures. Although not studied here, transient changes in facial coloration such as blushing and pallor could comprise a key component of the facial behavior produced during pain and orgasm and thus contribute to the perception of these intense affective states in others (e.g., see refs. 44 and 45). We anticipate that such questions will soon be addressed in future research.
Mental Representations Versus Real-World Production of Facial Expressions. We show that mental representations of pain and orgasm share many face movements with their real-world productions, suggesting that mental representations are statistically related to real-world displays. This is further supported by the models being recognized as pain and orgasm by a separate group of observers in each culture. However, our finding that mental representations of facial expressions of pain and orgasm are distinct contrasts with real-world studies of the production of these facial expressions, which report that they are similar. Specifically, productions of pain and orgasm share several face movements such as brow lowering, cheek and lip raising, eye closing, and mouth opening, and differ on others such as wincing, chin raising, and blinking. This suggests that although produced facial expressions of pain and orgasm show distinctive features, mental representations are even more distinctive than their real-world displays. This discrepancy could arise from specific divergences between mental representations and real-world displays. For example, our comparison analysis suggests that mental representations comprise a subset of the most diagnostic face movements; for example, facial expression models of pain include most AUs reported in produced facial expressions, such as brow lowering, nose wrinkling, and horizontal lip stretching, but not AUs such as chin raising or wincing. Such efficient encoding of the most diagnostic and visually salient face movements could facilitate memory storage and perceptual categorization in the real world. Relatedly, mental representations could also represent supernormal stimuli where certain features of real-world displays are exaggerated, and which could draw more attention to these features in the environment as a result (46, 47).

Divergence could also arise due to the influence of other concepts such as idealized behaviors, that is, those that have a high value within a culture. For example, our results show that East Asian mental representations of facial expressions of orgasm include smiling, whereas Western models show a wide-open mouth. These cultural differences correspond to current theories of ideal affect (48) that propose that Westerners value high arousal positive states such as excitement and enthusiasm, which are often associated with wide-open eye and mouth movements (2, 3, 40), whereas East Asians tend to value low arousal positive states, which are often associated with closed-mouth smiles (49). As discussed in current theories of ideal affect, cultural ideals influence the behaviors of individuals within that culture; that is, Westerners are expected to display positive states as high arousal (e.g., excited), whereas East Asians are expected to display positive states as low arousal (e.g., calm). Therefore, it is likely that Westerners and East Asians display different facial expressions in line with the expectations of their culture. Indeed, we show that Western mental representations of orgasm share AUs with produced facial expressions (e.g., AUs 43 and 25 to 27), which by extension suggests that East Asians might produce these facial expressions during orgasm. We anticipate that such questions could be addressed when sufficient East Asian production data become available. Similarly, mental representations could also reflect the influence of social motives, values, semantic knowledge, or pragmatic competence (6, 11, 12), which could also shape real-world displays.
A further source of divergence between mental representations and real-world displays could be variance in experimental conditions. For example, the mental representations reported here comprise a dynamic facial expression displayed over a relatively short time period (2.25 s), whereas some production studies capture face movements displayed over longer periods or during a single snapshot. Such variance in recording methods could therefore capture different segments of a dynamically evolving facial display or a series of different facial displays that represent different stages of experiencing pain or sexual pleasure (see SI Appendix, Fig. S1 for study details).

In all cases discussed above, a better understanding of the nature and origin of the divergences and/or biases in mental representations requires detailed comparisons with the "ground truth," that is, knowledge of the true variety of facial expressions of pain and orgasm that are displayed in the real world, including variability across time and within different cultures and social contexts (50). We anticipate that such data will become more available with the increasing development of technologies that can be used to systematically record and decode face movements in the wild.
Conclusions

We found that mental representations of facial expressions of the extreme negative and positive states of physical pain and orgasm are distinct in two different cultures. Our results therefore question the nondiagnosticity of these facial expressions and suggest that they could serve as effective tools for social communication and interaction (4, 13). Our results also address existing questions of whether culture influences how
facial expressions are represented and used to communicate
basic social messages (51, 52).
Finally, understanding the ontology of facial expressions, that is, the form of face movement patterns, is a substantial question due to the complexity of the social world and the multiple variables that could influence communication (53). Our data highlight the relevance of controlling potential (and known) variables when examining the form and function of signals, such as the nature of the social context and the communication channel such as viewing distance (54). We anticipate that the development of new methods that can precisely control these potential variables and measure their contribution will allow better navigation of the complex social world and provide a richer, more accurate account of social communication.
Methods

Observers. To model the mental representations of facial expressions of physical pain and orgasm in each culture, we recruited a total of 80 observers (40 Westerners, white European, 20 females, mean age 22 y, SD = 2.68 y; 40 East Asians, Chinese, 20 females, mean age 23 y, SD = 1.80 y). For the perceptual discrimination task, we recruited a new group of 104 observers (52 Western, white European, 26 females, mean age 22 y, SD = 2.73 y; 52 East Asians, Chinese, 26 females, mean age 23 y, SD = 1.54 y). To control for the possibility that the observers' mental representations or interpretation of these facial expressions could have been influenced by cross-cultural interactions, we recruited observers with minimal exposure to and engagement with other cultures (55) as assessed by questionnaire (SI Appendix, Screening Questionnaire). We also recruited observers who were sexually active (as per self-report) and identified as heterosexual as assessed by the Kinsey scale (56) (SI Appendix, Kinsey Scale). All East Asian observers had arrived in the United Kingdom for the first time with an average UK residence of 3 mo at the time of testing (SD = 1.9 mo) and had a minimum International English Testing System score of 6.0 (competent user). All observers had normal or corrected-to-normal vision and were free from any emotion-related atypicalities (autism spectrum disorder, depression, anxiety), learning difficulties (e.g., dyslexia), synesthesia, and disorders of face perception (e.g., prosopagnosia) as per self-report. We obtained each observer's written informed consent before testing and paid each observer £6 per h for their participation. The University of Glasgow, College of Science and Engineering Ethics Committee authorized the experimental protocol (reference ID 300140074).
Modeling Dynamic Mental Representations of Facial Expressions of Pain and Orgasm. All observers completed the facial animation categorization task as illustrated in Fig. 1. We instructed observers to categorize each facial animation according to physical pain, defined as "the sharp sensory pain when receiving, for example, an electroshock, keeping a limb sunken in icy water, or back pain," or orgasm, defined as "the brief and intense experience during the sexual response cycle after the first arousal phase and the sustained sexual excitement phase." We provided participants with a figure illustrating the different phases of the sexual response cycle, that is, excitement, plateau, orgasm, and resolution. To compute models of the dynamic facial expressions of pain and orgasm for each individual observer, we used an established model-fitting procedure (18). First, we performed a Pearson correlation between two binary vectors: The first vector detailed the presence or absence of each AU on each trial; the second detailed the response of the observer (pain = 0, orgasm = 1). For all significant correlations (two-tailed, P < 0.05), we assigned a value of 1 (0 otherwise), thus producing a 1 × 42-dimensional binary vector detailing the composition of AUs that are significantly associated with the perception of each affective state for that observer. To model the dynamic components of each significant AU, we performed a linear regression between the second binary response variable and the seven temporal parameters of each significantly correlated AU, as detailed on each trial. To calculate the intensity gradients of each of the facial expression models, we fitted a linear regression model to the temporal parameters of each significantly correlated AU and the observer's intensity ratings. To make the resulting dynamic face movements into movies for later use as stimuli, we then combined the significantly correlated AUs with their temporal activation parameters, using only the "high-intensity" ratings, as these comprise the most salient signals (see Movie S2 for an illustration of the procedure). Our approach therefore delivers the precise dynamic facial expressions that elicit the perception of pain and orgasm in each individual observer in each culture.
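A minimal sketch of the AU-selection step, assuming toy data: for each AU, it correlates per-trial AU presence with the binary pain/orgasm responses and keeps AUs with a two-tailed P < 0.05 (with purely random data, roughly 5% of AUs pass by chance). The temporal-parameter and intensity regressions are omitted.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)

# Toy data for one observer ("other" trials excluded beforehand):
n_trials, n_aus = 1200, 42
au_present = rng.integers(0, 2, size=(n_trials, n_aus))  # AU on/off per trial
response = rng.integers(0, 2, size=n_trials)             # 0 = pain, 1 = orgasm

# Binary 1 x 42 vector of AUs significantly associated with the responses.
model = np.zeros(n_aus, dtype=int)
for au in range(n_aus):
    r, p = pearsonr(au_present[:, au], response)  # Pearson on binaries
    if p < 0.05:                                  # two-tailed significance
        model[au] = 1
```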
Physical and Perceptual Distinctiveness of the Facial Expression Models of Pain and Orgasm. For each culture and sex of observer (total 104 observers; 2 cultures × 2 sexes × 26 observers), we displayed a set of 40 same-culture, same-sex facial expression models of pain and orgasm (20 models × 2 pain/orgasm) on 10 new same-race, sex-opposite face identities (20 white European, 10 females, mean age 22 y, SD = 3.49 y; 19 Chinese, 1 Japanese, mean age 24 y, SD = 2.14 y). For example, for the new group of Western male observers, we displayed the 40 facial expression models of pain and orgasm derived from Western male observers in Modeling Dynamic Mental Representations of Facial Expressions of Pain and Orgasm on 10 white female faces. Thus, for each culture and sex of new observers, we generated 400 facial expression stimuli (20 same-culture, same-sex facial expression models × 2 pain/orgasm × 10 same-race, sex-opposite face identities).

On each experimental trial, observers first viewed a word (pain or orgasm) displayed on-screen for 1.5 s and followed directly by either a correctly or incorrectly matched facial expression displayed once for 2.25 s. We asked observers to indicate whether or not the preceding word accurately described the facial expression by pressing yes or no keys on a keyboard and to respond as accurately as possible. We assigned yes and no keys to separate hands for each observer and counterbalanced key assignments across observers. Half of the trials comprised correct word–facial expression matches and included all 400 facial expression stimuli, with the other half of the trials comprising incorrect word–facial expression matches. Each observer therefore completed 800 trials (400 facial expression stimuli × correct/incorrect matches) presented in random order across the experiment. We used the same stimulus presentation display as used in Modeling Dynamic Mental Representations of Facial Expressions of Pain and Orgasm. Each observer completed the experiment over four 20-min sessions with a short break (5 min) in-between sessions. On average, observers completed the experiment in 1.25 h (SD = 0.25 h) in 1 d. Following the experiment, we computed the d-prime of each individual facial expression model in each culture by pooling the responses from all observers who completed the perceptual discrimination task.
Comparison of the Facial Expression Models of Pain and Orgasm. To identify the AUs that are specific to or common across the facial expression models of pain and orgasm in each culture, we computed the mutual information for each highly frequent AU and established statistical significance using a Monte Carlo approach. Specifically, for each highly frequent AU, we produced a random distribution of MI values by randomly shuffling the affective state assignment (i.e., pain or orgasm) of each individual facial expression model 1,000 times, computing the MI for each AU at each iteration, and then taking the maximum MI value across all AUs. We then used the distribution of maximum MI values to identify the AUs with an MI value in the 95th percentile of the distribution (57).
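A minimal sketch of this permutation scheme, assuming toy data and a compact MI helper; taking the maximum MI across AUs at each shuffle is what makes the 95th-percentile cutoff control for multiple comparisons across AUs.

```python
import numpy as np

rng = np.random.default_rng(3)

def mi_bits(x, y):
    """MI (bits) between two binary vectors via their 2x2 joint table."""
    p = np.histogram2d(x, y, bins=2)[0] / len(x)
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (px * py))
    return np.nansum(terms)

def max_mi_null(au_matrix, labels, n_perm=1000):
    """Null distribution of the maximum MI across AUs under random
    shuffles of the pain/orgasm assignment of each model."""
    null = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(labels)
        null[i] = max(mi_bits(au_matrix[:, j], shuffled)
                      for j in range(au_matrix.shape[1]))
    return null

# Toy example: 80 models (40 pain, 40 orgasm) x 42 AUs.
X = rng.integers(0, 2, size=(80, 42))
y = np.repeat([0, 1], 40)
threshold = np.percentile(max_mi_null(X, y), 95)  # significance cutoff
```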
Cross-Cultural Comparison of the Facial Expression Models of Pain and Orgasm. To examine whether the facial expression models of pain and orgasm are similar or different across cultures, we applied the MI analysis between the AUs and culture for pain and orgasm separately. To establish statistical significance, we used a Monte Carlo approach as above but by randomly shuffling the cultural assignment (i.e., Western or East Asian) of the facial expression models.
ACKNOWLEDGMENTS. We thank Elisabeth Fredstie and Dovile Blindaruk-Vile for their assistance with data collection. This work was supported by the Economic and Social Research Council Grant (ESRC ES/K001973/1), British Academy Grant (SG113332), Wellcome Trust Grant (107802/Z/15/Z), Multidisciplinary University Research Initiative/Engineering and Physical Sciences Research Council Grant (EP/N019261/1), Chinese Scholarship Council Award (201306270029), and a Spanish Government Grant (PSI2017-88776-P).
1. Hughes SM, Nicholson SE (2008) Sex differences in the assessment of pain versus sexual pleasure facial expressions. J Soc Evol Cult Psychol 2:289–298.
2. Masters WH, Johnson VE (1966) Human Sexual Response (Little, Brown, Boston).
3. Fernández-Dols J-M, Carrera P, Crivelli C (2011) Facial behavior while experiencing sexual excitement. J Nonverbal Behav 35:63–71.
4. Barrett LF, Mesquita B, Gendron M (2011) Context in emotion perception. Curr Dir Psychol Sci 20:286–290.
5. Niedenthal PM, Mermillod M, Maringer M, Hess U (2010) The simulation of smiles (SIMS) model: Embodied simulation and the meaning of facial expression. Behav Brain Sci 33:417–433, discussion 433–480.
6. Crivelli C, Fridlund AJ (2018) Facial displays are tools for social influence. Trends Cogn Sci 22:388–399.
7. Parkinson B (2017) Interpersonal effects and functions of facial activity. The Science of Facial Expression, eds Fernández-Dols JM, Russell JA (Oxford Univ Press, New York), pp 435–456.
8. Mehu M, D'Errico F, Heylen D (2012) Conceptual analysis of social signals: The importance of clarifying terminology. J Multimod U Interf 6:179–189.
9. Gratch J (2008) True emotion vs. social intentions in nonverbal communication: Towards a synthesis for embodied conversational agents. Modeling Communication with Robots and Virtual Humans, eds Wachsmuth I, Knoblich G (Springer, Berlin), pp 181–197.
10. Seyfarth RM, Cheney DL (2017) The origin of meaning in animal signals. Anim Behav 124:339–346.
11. Fernández-Dols JM (2017) Natural facial expression: A view from psychological constructionism and pragmatics. The Science of Facial Expression, eds Fernández-Dols JM, Russell JA (Oxford Univ Press, New York), pp 457–478.
12. Patterson ML (2011) More than Words: The Power of Nonverbal Communication (Aresta, Barcelona).
13. Aviezer H, Trope Y, Todorov A (2012) Body cues, not facial expressions, discriminate between intense positive and negative emotions. Science 338:1225–1229.
14. Herschbach M, Bechtel W (2014) Mental mechanisms and psychological construction. The Psychological Construction of Emotion, eds Feldman Barrett L, Russell J (Guilford Press, New York), pp 21–44.
15. Jack RE, Crivelli C, Wheatley T (2018) Data-driven methods to diversify knowledge of human psychology. Trends Cogn Sci 22:1–5.
16. Kanade T, Tian Y, Cohn JF (2000) Comprehensive database for facial expression analysis. Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (IEEE, New York), pp 46–54.
17. Ahumada A, Lovell J (1971) Stimulus features in signal detection. J Acoust Soc Am 49:1751–1756.
18. Yu H, Garrod OGB, Schyns PG (2012) Perception-driven facial expression synthesis. Comput Graph 36:152–162.
19. Ekman P, Friesen WV (1978) Manual for the Facial Action Coding System (Consulting Psychologists, Palo Alto, CA).
20. Patrick CJ, Craig KD, Prkachin KM (1986) Observer judgments of acute pain: Facial action determinants. J Pers Soc Psychol 50:1291–1298.
21. LeResche L, Dworkin SF (1988) Facial expressions of pain and emotions in chronic TMD patients. Pain 35:71–78.
22. Prkachin KM, Mercer SR (1989) Pain expression in patients with shoulder pathology: Validity, properties and relationship to sickness impact. Pain 39:257–265.
23. Prkachin KM, Solomon PE (2008) The structure, reliability and validity of pain expression: Evidence from patients with shoulder pain. Pain 139:267–274.
24. Hadjistavropoulos HD, Craig KD (1994) Acute and chronic low back pain: Cognitive, affective, and behavioral dimensions. J Consult Clin Psychol 62:341–349.
25. Kunz M, Scharmann S, Hemmeter U, Schepelmann K, Lautenbacher S (2007) The facial expression of pain in patients with dementia. Pain 133:221–228.
26. LeResche L (1982) Facial expression in pain: A study of candid photographs. J Nonverbal Behav 7:46–56.
27. Craig KD, Hyde SA, Patrick CJ (1991) Genuine, suppressed and faked facial behavior during exacerbation of chronic low back pain. Pain 46:161–171.
28. Prkachin KM (1992) The consistency of facial expressions of pain: A comparison across modalities. Pain 51:297–306.
29. Galin KE, Thorn BE (1993) Unmasking pain: Detection of deception in facial expressions. J Soc Clin Psychol 12:182–197.
30. Craig KD, Patrick CJ (1985) Facial expression during induced pain. J Pers Soc Psychol 48:1080–1091.
31. Stanislaw H, Todorov N (1999) Calculation of signal detection theory measures. Behav Res Methods Instrum Comput 31:137–149.
32. Green DM, Swets JA (1966) Signal Detection Theory and Psychophysics (Wiley, New York).
33. Lynn SK, Barrett LF (2014) "Utilizing" signal detection theory. Psychol Sci 25:1663–1673.
34. Elfenbein HA, Ambady N (2002) Is there an in-group advantage in emotion recognition? Psychol Bull 128:243–249.
35. Russell JA (1994) Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychol Bull 115:102–141.
36. Scarantino A (2017) How to do things with emotional expressions: The theory of affective pragmatics. Psychol Inq 28:165–185.
37. Fridlund AJ (2017) On scorched earths and bad births: Scarantino's misbegotten "theory of affective pragmatics." Psychol Inq 28:197–205.
38. Wharton T (2009) Pragmatics and Non-Verbal Communication (Cambridge Univ Press, Cambridge, UK).
39. Senft G (2014) Understanding Pragmatics (Routledge, New York).
40. Jack RE, Sun W, Delis I, Garrod OG, Schyns PG (2016) Four not six: Revealing culturally
common facial expressions of emotion. J Exp Psychol Gen 145:708730.
41. Rosmus C, Johnston CC, Chan-Yip A, Yang F (2000) Pain response in Chinese and non-
Chinese Canadian infants: Is there a difference? Soc Sci Med 51:175184.
42. MacKay DJC (2003) Information Theory, Inference, and Learning Algorithms (Cam-
bridge Univ Press, Cambridge, UK).
43. Shannon CE (2001) A mathematical theory of communication. SIGMOBILE Mob
Comput Commun Rev 5:355.
44. Benitez-Quiroz CF, Srinivasan R, Martinez AM (2018) Facial color is an efficient
mechanism to visually transmit emotion. Proc Natl Acad Sci USA 115:35813586.
45. Thorstenson CA, Elliot AJ, Pazda AD, Perrett DI, Xiao D (November 27, 2017) Emotion-
color associations in the context of the face. Emotion, 10.1037/emo0000358.
46. Arak A, Enquist M (1993) Hidden preferences and the evolution of signals. Philos
Trans R Soc Lond B Biol Sci 340:207213.
47. Tinbergen N (1948) Social releasers and the experimental method required for their
study. Wilson Bull 60:651.
48. Tsai JL (2007) Ideal affect: Cultural causes and behavioral consequences. Perspect
Psychol Sci 2:242259.
49. Rychlowska M, et al. (2017) Functional smiles: Tools for love, sympathy, and war.
Psychol Sci 28:12591270.
50. Fernández-Dols J-M, Crivelli C (2013) Emotion and expression: Naturalistic studies.
Emot Rev 5:2429.
51. Laland KN (2017) Darwins Unfinished Symphony: How Culture Made the Human
Mind (Princeton Univ Press, Princeton, NJ).
52. Fernández-Dols J-M (2013) Nonverbal communication: Origins, adaptation, and
functionality. Handbook of Nonverbal Communication, eds Hall JA, Knapp ML
(Mouton de Gruyter, New York), pp 6992.
53. Jack RE, Schyns PG (2017) Toward a social psychophysics of face communication. Annu
Rev Psychol 68:269297.
54. Smith FW, Schyns PG (2009) Smile through your fear and sadness: Transmitting and
identifying facial expression signals over a range of viewing distances. Psychol Sci 20:
12021208.
55. De Leersnyder J, Mesquita B, Kim HS (2011) Where do my emotions belong? A study
of immigrantsemotional acculturation. Pers Soc Psychol Bull 37:451463.
56. Kinsey AC, Pomeroy WB, Martin CE, Gebhard PH (1998) Sexual Behavior in the Human
Female (Indiana Univ Press, Bloomington, IN).
57. Nichols TE, Holmes AP (2002) Nonparametric permutation tests for functional neu-
roimaging: A primer with examples. Hum Brain Mapp 15:125.
... Problematically, the unprecedented ability of both state and non-state actors to engage in NCDC raises a myriad of legal and ethical concerns over privacy, fairness, accuracy, autonomy, and misuse [12,13]. At a more fundamental level is the contentious science behind the emotional AI industry [14][15][16]. Many emotional AI technologies are predicated on Paul Ekman's [17] now widely discredited 'universality of emotion' thesis, which assumes that the display of human emotions stays constant from culture to culture. ...
... Given the current lack of international or national level regulatory agreements on NCDC [19], the inherent risks of algorithmic bias and discrimination [20], surveillance and loss of privacy [21] and, critically, the as-yet unproven science of emotion-tracking technologies [14][15][16], we argue that members of Generation Z or Gen Z (1997-2012) represent the most current at-risk demographic. Interestingly, while studies show this age group to have higher rates of loneliness, depression, and anxiety, they also indicate them to be the most accepting of emerging AI technologies [22,23]. ...
Article
This paper examines technological acceptance of automated emotion-sensing devices and non-conscious data collection (NCDC). We argue that conventional 20th century scholarship of human-machine relations is ill-equipped for the age of intelligent machines that sense, monitor, and track human sentiment, emotion, and feeling. We conduct a regression analysis on a dataset of 1015 Generation Z student respondents (age 18–27) from 48 countries and 8 regions worldwide using the Bayesian Hamiltonian Monte Carlo approach. The empirical results highlight the significance of sociocultural factors that influence technological acceptance by this specific generational demographic. Our findings also demonstrate the advantage but also the inherent limitations of traditional theories such as Davis's "Technological Acceptance Model" in accounting for cross-cultural factors such as religions and regions, given the transfer of new technologies across borders. Moreover, our findings highlight important governance and design implications that need to be addressed to ensure that emotional AI systems and devices serve the best interests of individuals and societies.
... For EB+, only binary data indicate the presence or absence of the coded AU. In the pain facial expressions that the data-driven approach clarified (Fig. 2 in Chen et al., 2018), there may be asymmetric patterns in upper lip raiser movements. Moreover, AU intensity scores would be important, because it is assumed that the strength of the expression covaries with the strength of the experience (Tourangeau & Ellsworth, 1979). ...
Article
Full-text available
Smiles are universal but nuanced facial expressions that are most frequently used in face-to-face communications, typically indicating amusement but sometimes conveying negative emotions such as embarrassment and pain. Although previous studies have suggested that spatial and temporal properties could differ among these various types of smiles, no study has thoroughly analyzed these properties. This study aimed to clarify the spatiotemporal properties of smiles conveying amusement, embarrassment, and pain using a spontaneous facial behavior database. The results regarding spatial patterns revealed that pained smiles showed less eye constriction and more overall facial tension than amused smiles; no spatial differences were identified between embarrassed and amused smiles. Regarding temporal properties, embarrassed and pained smiles remained in a state of higher facial tension than amused smiles. Moreover, embarrassed smiles showed a more gradual change from tension states to the smile state than amused smiles, and pained smiles had lower probabilities of staying in or transitioning to the smile state compared to amused smiles. By comparing the spatiotemporal properties of these three smile types, this study revealed that the probability of transitioning between discrete states could help distinguish amused, embarrassed, and pained smiles.
... Microexpression recognition is a much more difficult task than the thoroughly studied facial expression recognition problem. However, psychologists still argue about whether facial expressions reliably communicate emotions [8]. Compared with facial expressions, facial microexpressions are more reliable and more difficult to hide. ...
Article
Full-text available
With the recent development of microexpression recognition, deep learning (DL) has been widely applied in this field. In this paper, we provide a comprehensive survey of current DL-based microexpression (ME) recognition methods. In addition, we introduce a novel dataset based on fusing all the existing ME datasets. We also evaluate a baseline DL model for the microexpression recognition task. Finally, we make the new dataset and the code publicly available to the community at https://github.com/wenjgong/microExpressionSurvey.
... We included six Action Units (AU4: Brow Lowerer, AU7: Lid Tightener, AU9: Nose Wrinkler, AU10: Upper Lip Raiser, AU26: Jaw Drop, and AU43: Eyes Closed) [31], as these have been shown to be present in models of pain expressions of different intensities and across different cultures [6,27]. Using MakeHuman [20] with the FACSHuman [21] plugin, we generated a natural expression mesh and six maximum AU-activation meshes, one for each AU. ...
Article
Full-text available
Medical training simulators can provide a safe and controlled environment for medical students to practice their physical examination skills. An important source of information for physicians is the visual feedback of involuntary pain facial expressions in response to physical palpation on an affected area of a patient. However, most existing robotic medical training simulators that can capture physical examination behaviours in real-time cannot display facial expressions and comprise a limited range of patient identities in terms of ethnicity and gender. Together, these limitations restrict the utility of medical training simulators because they do not provide medical students with a representative sample of pain facial expressions and face identities, which could result in biased practices. Further, these limitations restrict the utility of such medical simulators for detecting and correcting early signs of bias in medical training. Here, for the first time, we present a robotic system that can simulate facial expressions of pain in response to palpations, displayed on a range of patient face identities. We use the unique approach of modelling dynamic pain facial expressions using a data-driven perception-based psychophysical method combined with the visuo-haptic inputs of users performing palpations on a robot medical simulator. Specifically, participants performed palpation actions on the abdomen phantom of a simulated patient, which triggered the real-time display of six pain-related facial Action Units (AUs) on a robotic face (MorphFace), each controlled by two pseudo-randomly generated transient parameters: rate of change β and activation delay τ. Participants then rated the appropriateness of the facial expression displayed in response to their palpations on a 4-point scale from "strongly disagree" to "strongly agree". Each participant (n=16, 4 Asian females, 4 Asian males, 4 White females and 4 White males) performed 200 palpation trials on 4 patient identities (Black female, Black male, White female and White male) simulated using MorphFace. Results showed that the facial expressions rated most appropriate by all participants comprised a higher rate of change and a shorter delay from upper face AUs (around the eyes) to those in the lower face (around the mouth). In contrast, we found that the transient parameter values of the pain facial expressions rated most appropriate, the palpation forces, and the delays between palpation actions varied across participant-simulated patient pairs according to gender and ethnicity. These findings suggest that gender and ethnicity biases affect palpation strategies and the perception of pain facial expressions displayed on MorphFace. We anticipate that our approach will be used to generate physical examination models with diverse patient demographics to reduce erroneous judgments in medical students, and provide focused training to address these errors.
... Whilst our in-room observations emphasised the sensory impact of the simulated message, such as participants wincing, moving their heads away from handsets or covering ears, data from participant activity sheets contained more cognitive reactions than sensory or emotional ones. Observations of the body language of participants indicated that the distinctive, loud and penetrating tone used for CB messages was jarring and unpleasant: participants physically recoiled from the handsets they were holding, blocked their ears and displayed facial expressions consistent with pain or discomfort (Prkachin 1992; Chen et al. 2018). ...
Article
Full-text available
European Governments must implement a public alerting system to reach mobile phone users affected by major emergencies and disasters by June 2022. Cell Broadcast (CB) is used to issue emergency alerts in several countries, but few studies have considered the impact of messages on recipients. This paper presents the results of a joint research exercise that explored recipients’ responses to CB messages that warned of floods of varying certainty, severity, and urgency. We adopted a mixed methods approach employing semi-structured questions to assess the needs and perceptions of 80 workshop participants who received simulated CB emergency alerts. Our results suggest that although emergency alerting is generally welcomed, it is necessary to provide accurate and verifiable information, address accessibility challenges, and to state location clearly and understandably. Participants raised concerns about individual and community reaction to the unusual emergency warning alert tone and handset behaviour, and potential desensitisation of recipients to emergency alerting if it is over-used. This paper is the first to systematically study recipients’ responses to CB alerts for flooding. Its results will be of benefit to regional and national governments seeking to issue emergency alerts to a range of threats (e.g., other geohazards, pandemics or civil emergencies).
... internal representations) and compared these across emotions (e.g. Jack, Garrod & Schyns, 2014; Chen et al., 2018), cultures (e.g. Jack, Caldara & Schyns, 2012; Jack, Sun, Delis, Garrod & Schyns, 2016), and participant groups (e.g. ...
... Accordingly, some cultures may be less sensitive to signals that are clearly ostensible to other cultures (see Jack & Schyns, 2017). In a study (conducted with digitally manipulated images) of the signals perceived as dissimilar between expressions of pain and of orgasmic pleasure (which acting in theater and film tends to conflate to the point of confusion), Westerners and East Asians were found to differ (Chen et al., 2018). One may recall the inexpressive manner in which European royal families mourn in public. ...
Article
There is a growing consensus that culture influences the perception of facial expressions of emotion. However, relatively few studies have examined whether and how culture shapes the production of emotional facial expressions. Drawing on prior work on cultural differences in communication styles, we tested the prediction that people from the Netherlands (a low-context culture) produce facial expressions that are more distinct across emotions compared to people from China (a high-context culture). Furthermore, we examined whether the degree of distinctiveness varies across posed and spontaneous expressions. Dutch and Chinese participants were instructed to either pose facial expressions of anger and disgust, or to share autobiographical events that elicited spontaneous expressions of anger or disgust. Using a supervised machine learning approach to categorize expressions based on the patterns of activated facial action units, we showed that both posed and spontaneous facial expressions of anger and disgust were more distinct when produced by Dutch compared to Chinese participants. Yet, the distinctiveness of posed and spontaneous expressions differed in their sources. The difference in the distinctiveness of posed expressions appears to be due to a larger array of facial expression prototypes for each emotion in Chinese culture than in Dutch culture. The difference in the distinctiveness of spontaneous expressions, however, appears to reflect the greater similarity of expressions of anger and disgust from the same Chinese individual than from the same Dutch individual. The implications of these findings are discussed in relation to cross-cultural emotion communication, including via cultural products.
Article
Full-text available
Human facial expressions are complex, multi-component signals that can communicate rich information about emotions [1-5], including specific categories, such as "anger," and broader dimensions, such as "negative valence, high arousal" [6-8]. An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information (i.e., specific categories and broader dimensions) via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication [9]. We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver's perceptions to model the specific facial signal components that represent emotion category and dimensional information to them [10-12]. First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions [13] plus 19 complex emotions [3]) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent (i.e., multiplex) categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results, based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms, show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.
Article
Full-text available
Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion.
Article
Full-text available
Based on modern theories of signal evolution and animal communication, the behavioral ecology view of facial displays (BECV) reconceives our 'facial expressions of emotion' as social tools that serve as lead signs to contingent action in social negotiation. BECV offers an externalist, functionalist view of facial displays that is not bound to Western conceptions about either expressions or emotions. It easily accommodates recent findings of diversity in facial displays, their public context-dependency, and the curious but common occurrence of solitary facial behavior. Finally, BECV restores continuity of human facial behavior research with modern functional accounts of non-human communication, and provides a non-mentalistic account of facial displays well-suited to new developments in artificial intelligence and social robotics.
Article
Full-text available
Psychology aims to understand real human behavior. However, cultural biases in the scientific process can constrain knowledge. We describe here how data-driven methods can relax these constraints to reveal new insights that theories can overlook. To advance knowledge we advocate a symbiotic approach that better combines data-driven methods with theory.
Article
Full-text available
A smile is the most frequent facial expression, but not all smiles are equal. A social-functional account holds that smiles of reward, affiliation, and dominance serve basic social functions, including rewarding behavior, bonding socially, and negotiating hierarchy. Here, we characterize the facial-expression patterns associated with these three types of smiles. Specifically, we modeled the facial expressions using a data-driven approach and showed that reward smiles are symmetrical and accompanied by eyebrow raising, affiliative smiles involve lip pressing, and dominance smiles are asymmetrical and contain nose wrinkling and upper-lip raising. A Bayesian-classifier analysis and a detection task revealed that the three smile types are highly distinct. Finally, social judgments made by a separate participant group showed that the different smile types convey different social messages. Our results provide the first detailed description of the physical form and social messages conveyed by these three types of functional smiles and document the versatility of these facial expressions.
Article
Facial expressions of emotion contain important information that is perceived and used by observers to understand others’ emotional state. While there has been considerable research into perceptions of facial musculature and emotion, less work has been conducted to understand perceptions of facial coloration and emotion. The current research examined emotion-color associations in the context of the face. Across 4 experiments, participants were asked to manipulate the color of face, or shape, stimuli along 2 color axes (i.e., red-green, yellow-blue) for 6 target emotions (i.e., anger, disgust, fear, happiness, sadness, surprise). The results yielded a pattern that is consistent with physiological and psychological models of emotion.
Article
It is widely accepted that emotional expressions can be rich communicative devices. We can learn much from the tears of a grieving friend, the smiles of an affable stranger, or the slamming of a door by a disgruntled lover. So far, a systematic analysis of what can be communicated by emotional expressions of different kinds and of exactly how such communication takes place has been missing. The aim of this article is to introduce a new framework for the study of emotional expressions that I call the theory of affective pragmatics (TAP). As linguistic pragmatics focuses on what utterances mean in a context, affective pragmatics focuses on what emotional expressions mean in a context. TAP develops and connects two principal insights. The first is the insight that emotional expressions do much more than simply expressing emotions. As proponents of the Behavioral Ecology View of facial movements have long emphasized, bodily displays are sophisticated social tools that can communicate the signaler's intentions and requests. Proponents of the Basic Emotion View of emotional expressions have acknowledged this fact, but they have failed to emphasize its importance, in part because they have been in the grip of a mistaken theory of emotional expressions as involuntary readouts of emotions. The second insight that TAP aims to articulate and apply to emotional expressions is that it is possible to engage in analogs of speech acts without using language at all. I argue that there are important and so far largely unexplored similarities between what we can “do” with words and what we can “do” with emotional expressions. In particular, the core tenet of TAP is that emotional expressions are a means not only of expressing what's inside but also of directing other people's behavior, of representing what the world is like and of committing to future courses of action. Because these are some of the main things we can do with language, the take home message of my analysis is that, from a communicative point of view, much of what we can do with language we can also do with non-verbal emotional expressions. I conclude by exploring some reasons why, despite the analogies I have highlighted, emotional expressions are much less powerful communicative tools than speech acts.