
Perception of emotional expressions is independent of face selectivity in monkey inferior temporal cortex

Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA.
Proceedings of the National Academy of Sciences, May 2008; 105(14):5591–5596. DOI: 10.1073/pnas.0800489105

ABSTRACT

The ability to perceive and differentiate facial expressions is vital for social communication. Numerous functional MRI (fMRI) studies in humans have shown enhanced responses to faces with different emotional valence, in both the amygdala and the visual cortex. However, relatively few studies have examined how valence influences neural responses in monkeys, thereby limiting the ability to draw comparisons across species and thus understand the underlying neural mechanisms. Here we tested the effects of macaque facial expressions on neural activation within these two regions using fMRI in three awake, behaving monkeys. Monkeys maintained central fixation while blocks of different monkey facial expressions were presented. Four different facial expressions were tested: (i) neutral, (ii) aggressive (open-mouthed threat), (iii) fearful (fear grin), and (iv) submissive (lip smack). Our results confirmed that both the amygdala and the inferior temporal cortex in monkeys are modulated by facial expressions. As in human fMRI, fearful expressions evoked the greatest response in monkeys-even though fearful expressions are physically dissimilar in humans and macaques. Furthermore, we found that valence effects were not uniformly distributed over the inferior temporal cortex. Surprisingly, these valence maps were independent of two related functional maps: (i) the map of "face-selective" regions (faces versus non-face objects) and (ii) the map of "face-responsive" regions (faces versus scrambled images). Thus, the neural mechanisms underlying face perception and valence perception appear to be distinct.

Fadila Hadj-Bouziane*, Andrew H. Bell*, Tamara A. Knusten, Leslie G. Ungerleider*, and Roger B. H. Tootell*
*Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892; and Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129
Contributed by Leslie G. Ungerleider, January 31, 2008 (sent for review December 26, 2007)
amygdala | emotion | valence | fMRI
Faces are complex stimuli that convey information not only about an individual's identity, but also about the individual's emotional state. For instance, in monkeys a fearful expression could indicate a nearby predator, whereas a lip smack expression could reflect social submission. Thus, the interpretation of facial expressions is crucial for both individual and group survival. However, the neural bases underlying the recognition of facial expression remain unclear.
Lesion studies in both humans and monkeys suggest that the amygdala plays a key role in both the recognition of facial expressions (1) and the production of behavioral responses to emotional stimuli (e.g., refs. 2–4). Numerous functional MRI (fMRI) studies in humans have shown enhanced responses to faces with emotional valence (e.g., greater responses to fearful relative to neutral faces) in both the amygdala and visual cortex (e.g., refs. 5–9; for a review see refs. 10 and 11). As in the human fMRI studies, several single-unit studies in monkeys have reported modulation of amygdala responses by viewing facial expressions (e.g., refs. 12–15). A few single-unit studies also examined the effects of facial expressions in the inferior temporal (IT) cortex, including the superior temporal sulcus (STS) (16–18), and found that some neurons were also modulated by facial expression.
Surprisingly, a recent fMRI study in monkeys showed virtually no effect of facial expression within the ventral visual pathway, even though valence effects were found in the amygdala (19). Conceivably, this lack of effect in IT cortex could indicate that few neurons are modulated by facial expressions in monkey IT cortex, relative to its human counterpart. Alternatively, neurons showing valence effects in monkey IT cortex may be widely distributed and therefore not easily detectable using fMRI.

Thus, it was important to reexamine whether it is possible to use fMRI to reveal valence modulation in monkeys. A positive finding would resolve the currently discrepant fMRI results between humans and monkeys, thus allowing subsequent clarification of fMRI results using classical invasive techniques (e.g., ref. 20). A specific goal here was to reexamine valence effects within monkey IT cortex using fMRI, comparing regions selective for faces (relative to non-face objects) with those responsive to faces (relative to scrambled images). In both humans and monkeys, the former contrast is the standard measure for face selectivity (e.g., refs. 21–23), whereas the latter isolates only regions that are activated by objects versus non-objects. Of course, the object category includes faces, but, because most objects are non-faces, this comparison is much less functionally specific as compared with face selectivity (e.g., refs. 24–26). A further goal was to compare valence effects in monkey visual cortex with those observed in the amygdala.
Results
We scanned three awake, fixating monkeys while they viewed blocks of four different monkey facial expressions (neutral, threat, fear grin, and lip smack) and scrambled faces (Fig. 1A). The imaged region included the entire temporal lobe and the amygdala (e.g., Fig. 1C). Typically, posterior regions of visual cortex (including V1–V4) and the frontal lobe were not covered.
Perception of Faces: "Face-Responsive" Versus "Face-Selective" Regions. Relative to scrambled images, the images of neutral faces elicited widespread activation throughout the temporal cortex as well as in the amygdala in all three monkeys. As described above, these regions were defined as face-responsive. As illustrated in Fig. 2, this face-responsive activation extended bilaterally along IT cortex, including STS and the IT gyrus, as well as in the lunate and the inferior occipital sulci. In monkey E (in which the imaged volume included more anterior regions), face-responsive activation was also found in prefrontal cortex, within the left inferior bank of the arcuate sulcus, and in the lateral portion of the orbitofrontal cortex. In all monkeys, the strongest face-responsive activation was consistently elicited anteriorly within area TE, at the level of the anterior middle temporal sulcus, and posteriorly within STS, proximal to the anterior tip of the posterior middle temporal sulcus (red regions in Fig. 2). Additionally, in monkey E, strong face-responsive activation was found in the posterior visual cortex, within both the lunate and the inferior occipital sulci.

Author contributions: R.B.H.T. designed research; T.A.K. performed research; F.H.-B. and A.H.B. analyzed data; and F.H.-B., L.G.U., and R.B.H.T. wrote the paper.
The authors declare no conflict of interest.
Freely available online through the PNAS open access option.
To whom correspondence may be addressed at: National Institute of Mental Health, Laboratory of Brain and Cognition, 49 Convent Drive, Building 49/1B80, Bethesda, MD 20892. E-mail: hadjf@mail.nih.gov or ungerlel@mail.nih.gov.
This article contains supporting information online at www.pnas.org/cgi/content/full/0800489105/DC1.
© 2008 by The National Academy of Sciences of the USA
We also mapped the activation produced in monkeys E and R by the presentation of neutral faces relative to non-face objects [black outlines in Fig. 2; see supporting information (SI) Fig. 6]. These regions were defined as being face-selective. Consistent with prior fMRI studies (20–22), we found two face-selective regions ("patches") within IT cortex: (i) an anterior face patch, located in area TE, and (ii) a posterior face patch, located posteriorly, near/within area TEO (27). As in previous studies, the anterior face patch was smaller and less robust than the posterior face patch; in two of the four hemispheres mapped, the anterior patch was not statistically significant.
Effect of Facial Expression in Face-Responsive and Face-Selective Regions Within IT Cortex. Within IT cortex we examined the effect of facial expression in both face-responsive and face-selective regions, using size-matched regions of interest (ROIs) in each region, independently in monkeys E and R (Fig. 3). Face-selective regions could not be mapped in monkey B for technical reasons. In each hemisphere, two face-selective ROIs were selected, one anterior and one posterior; as a control comparison, two nearby face-responsive ROIs were also selected (see Materials and Methods). An ANOVA with repeated measures tested for the effects of (i) ROI selectivity (two levels corresponding to face-selective and face-responsive); (ii) ROI location (two levels corresponding to anterior and posterior); (iii) expression (four levels corresponding to the different expressions); and (iv) the interactions between the different factors. In both monkeys, a main effect was found for ROI selectivity [F(1,37) = 20.0, P < 0.001, and F(1,31) = 184.3, P < 0.001, for monkeys E and R, respectively]: a stronger response was found to all faces within the face-selective ROIs as compared with the face-responsive ROIs. A main effect of ROI location was found only for monkey E [F(1,37) = 7.7, P = 0.009]: a stronger response was found for all faces in the anterior face-selective ROI compared with that in the posterior face-selective ROI. Additionally, a main effect of expression was found for both animals [F(3,111) = 5.6, P < 0.001, and F(3,93) = 3.0, P = 0.033, for monkeys E and R, respectively]: the fear grin expressions consistently elicited a greater response relative to the neutral expressions. A significant interaction was also found between ROI selectivity and expression for both animals [F(3,111) = 3.6, P = 0.02, and F(3,93) = 5.4, P = 0.003, for monkeys E and R, respectively]: the modulation by fear grin expressions was greater within the face-selective ROIs as compared with the face-responsive ROIs. In addition, some idiosyncratic effects were found across animals; monkey R showed a significantly decreased response to open-mouthed threats (relative to neutral, P < 0.05), whereas monkey E did not. However, all six hemispheres in our sample showed the largest fMRI increase in response to fear grins relative to neutral expressions (see SI Fig. 7 for monkey B).

Fig. 1. Stimulus conditions and fMRI coverage. (A) Examples of the different facial expressions tested: neutral, threat, fear grin, and lip smack. (B) Within each block, each image was presented for 700 ms followed by a mask for 300 ms. Four facial expressions were presented from each of eight different monkeys. (C) Sagittal view of a monkey anatomical scan illustrating the typical location of the slices (in red).

Fig. 2. Face-responsive versus face-selective regions. Lateral and ventral inflated views show face-responsive regions and face-selective regions in both hemispheres from monkeys E and R. Face-responsive regions were defined as those showing significantly greater activation for neutral faces relative to scrambled faces (shown in yellow/red), whereas face-selective regions were defined as those showing greater activation for neutral faces relative to non-face objects (outlined in black). The frontal lobe was imaged only in monkey E. as, arcuate sulcus; ios, inferior occipital sulcus; ls, lateral sulcus; los, lateral orbital sulcus; lus, lunate sulcus; pmts, posterior middle temporal sulcus; sts, superior temporal sulcus.
In addition to examining valence effects within these specific ROIs, we also mapped the distribution of valence effects across the full expanse of IT cortex. Specifically, we measured the difference in the fMRI signal evoked by any facial expression (i.e., threat, fear grin, or lip smack) relative to the neutral facial expression, within all face-responsive and face-selective regions. The resulting maps showed that the valence effect was not distributed uniformly across IT cortex (Fig. 4A). Moreover, although face-selective regions (black outlines in Fig. 4A) sometimes showed strong valence effects (e.g., the anterior face patch in the left hemisphere of monkey E), just as often they did not (e.g., the posterior face patches in the right hemispheres of both monkeys E and R).

To quantify this relationship further, we calculated the correlation between the magnitude of the valence effect and the magnitude of either (i) face selectivity (face > object) or (ii) face responsivity (face > scrambled) in a voxel-by-voxel manner, averaged across both animals (Fig. 4B). These tests revealed that none of the three maps (valence, face selectivity, or face responsivity) was significantly correlated with any of the other maps (Fig. 4B). Thus, the voxels showing the strongest face-selective or face-responsive variation did not correspond to the voxels with the highest valence modulation, except by chance covariation. Even when valence modulation was calculated based solely on fear grins (the expression that produced the strongest response), no correlation emerged. Thus, in IT cortex, the map of valence variations is apparently independent of the maps of variations in face selectivity and face responsivity.
Effect of Facial Expression Within the Amygdala. We also found face-responsive regions in the amygdala, located bilaterally within the dorsal portion of the basal and the lateral nuclei of the amygdala, in all three animals (Fig. 5 A and B). However, no statistically significant amygdala activation was found when using the more stringent standard of face selectivity.

Fig. 5C shows the amygdala responses to the different facial expressions from the right hemisphere of monkey B (see also SI Fig. 8). The fear grin expression evoked significantly greater activation than the neutral expression across all animals (significant effect within each hemisphere for each animal, P < 0.05). This response profile resembled that found in the visual cortex (described above), although the response magnitude was considerably smaller in the amygdala (see SI Fig. 8).
Discussion

Valence Effects in IT Cortex. Using fMRI in monkeys, we demonstrated that the perception of facial expression modulates activity in some subregions of IT cortex, confirming fMRI findings in humans (5–9) and single-unit studies in monkeys (16–18). Our results extend recent findings from Hoffman et al. (19), who also used fMRI in monkeys; that earlier study showed a valence effect in the amygdala but virtually none in the visual cortex. Although we cannot explain this discrepancy, it could be due to technical differences between the two studies (e.g., higher contrast/noise due to MION versus BOLD, different amounts of signal averaging).

Interestingly, we found that the valence modulation was not uniformly distributed; instead it varied across IT cortex. Furthermore, this valence modulation was not simply overlaid on previously described functional maps of either face selectivity (2, 21) or face responsivity (e.g., refs. 24–26). In other words, in IT cortex, the voxels that showed the greatest difference between faces and non-face objects, or between faces and scrambled images, were not necessarily those that showed the greatest valence modulation. Even the voxels that showed the highest selectivity for faces (faces > objects) were not simply the most activated voxels in the map of face responsivity (faces > scrambled images). The mutual independence of these maps strongly suggests that there are correspondingly independent neural mechanisms underlying face perception, valence perception, and object perception.
Does the Amygdala Modulate IT Activity? As reported previously (e.g., refs. 12, 15, and 19), we found face-responsive regions in the amygdala in the dorsal part of the lateral nucleus, extending into the basal nucleus. The fMRI signal in these nuclei was modulated by facial expressions, consistent with prior single-unit studies (e.g., refs. 12 and 15) and one recent fMRI study (19). Neuroanatomical studies in monkeys have revealed that visual information reaches the amygdala through the ventral ("object recognition") pathway, which projects from the primary visual cortex (V1) through multiple extrastriate areas to area TE within the anterior IT cortex. Information from area TE then projects to the amygdala, mainly to the lateral nucleus (28, 29). Feedback projections from the amygdala arise mainly from the basal nucleus and terminate in virtually all areas of the ventral visual pathway, including V1.

It has been proposed that the valence effects evoked in human visual cortex by fearful faces reflect feedback signals generated in the amygdala (7, 30), and our results are consistent with this model. First, the valence effect found in the visual cortex resembled that found in the amygdala, with fear grins producing the strongest response in all six hemispheres tested. Second, activation in the amygdala was localized within the nuclei receiving input from or projecting to the visual cortex (28, 29). Direct support for this view comes from a recent fMRI study showing that valence effects were absent within face-responsive regions of the visual cortex in patients with extended amygdala lesions (31). However, it remains unclear exactly how these valence effects influence visual cortical processing.

Fig. 3. Amplitude of valence effect within face-responsive and face-selective regions of IT cortex. The percent signal change is shown for the different facial expressions within two face-selective and two face-responsive ROIs in the left hemispheres of monkeys E and R. The lateral view illustrates the approximate location of the selected ROIs. For both the face-selective and face-responsive regions, the histograms on the left indicate activations in the anterior ROIs, whereas the histograms on the right indicate activations in the posterior ROIs. All ROIs were equated in size. Asterisks indicate significant differences relative to the neutral condition for each ROI (P < 0.05; error bars indicate the SEM). N, neutral; T, threat; F, fear grin; L, lip smack.

Fig. 4. Map of valence effects throughout IT cortex. (A) Lateral inflated views of the left and right hemispheres of monkeys E and R, showing the magnitude of the valence effect within the face-selective regions (outlined in black) and the more extensive face-responsive regions. The significance of the valence effect reflects the difference in the fMRI signal of any facial expression relative to the neutral expression (i.e., threat versus neutral, fear grin versus neutral, or lip smack versus neutral). (B) Correlation between the valence effect and either face responsivity (Left) or face selectivity (Right) across monkeys E and R; neither correlation approached significance (r = -0.07 and r = -0.05).

Fig. 5. Valence effect within the amygdala. (A) Coronal section illustrating face-responsive regions within the amygdala for all three animals. (B) Magnified view of the amygdala activation and schematic representation of the amygdala nuclei. The face-responsive regions were found bilaterally in the basal and lateral nuclei of the amygdala in all animals. (C) Percent signal change within face-responsive regions in the amygdala in one hemisphere of one monkey, evoked by the perception of facial expressions. Asterisks indicate that fear grin elicited a greater response than the neutral expression. AB, accessory basal nucleus; B, basal nucleus; CE, central nucleus; L, lateral nucleus; M, medial nucleus; N, neutral; T, threat; F, fear grin; L, lip smack.
Social Cognition in Primates. Monkey studies of facial expression have compared the effects of open-mouthed threat versus "appeasing/submissive" expressions (e.g., refs. 16–19). In those earlier studies, fear grin expressions were not systematically tested (17, 18); or, when tested, fear grins were averaged together with lip smack expressions (19). This is indeed appropriate if variations in macaque facial expression are graded along the axis of dominance–submission. However, lip smack could instead be considered an affiliative behavior, whereas fear grin could be an avoidance response, based on a classification along three axes: dominance, avoidance, and affiliation (32–34).

Here, open-mouthed threat produced varied responses across animals. Such individual differences in the pattern of "threat" modulation suggest that different animals may perceive distinct expressions according to their level in the social hierarchy. By contrast, fear grin consistently elicited the greatest response across animals, within both IT cortex and the amygdala, analogous to the consistently enhanced response to fearful faces in human fMRI. Although fear grins in monkeys and fearful human expressions are physically dissimilar, they presumably convey similar emotional states. Such increased activity in response to fearful facial expressions could reflect the ambiguity related to fear-inducing situations (35, 36). Angry human faces (like open-mouthed threat in monkeys) provide information about both the presence and the source of a threat, whereas fearful faces provide information about the presence of a threat but not its source. Thus, a stronger neural response to fearful faces would presumably reflect greater attentional engagement to select the most appropriate behavioral response (35, 36).
Materials and Methods
Subjects and General Procedures. Three male macaque monkeys were used (Macaca mulatta, 3–5 years, 3–5 kg). All procedures were in accordance with Massachusetts General Hospital guidelines and are described in detail elsewhere (37, 38). Briefly, each monkey was surgically implanted with a plastic head post under anesthesia. After recovery, monkeys were trained to sit in a sphinx position in a plastic restraint barrel (Applied Prototype) with their heads fixed, facing a screen on which visual stimuli were presented. During MR scanning, gaze location was monitored by using an infrared pupil tracking system (ISCAN).
Stimuli and Task. Stimuli were presented by using a custom Matlab (Mathworks) program, including PsychToolbox, and displayed via an LCD projector (Sharp model XG-NV6XU) onto a back-projection screen positioned within the magnet bore. All stimuli used in this experiment were colored images of macaque faces, 25° wide. Images were acquired from eight unfamiliar monkeys (i.e., eight identities), each with four different expressions in frontal view (Fig. 1A): (i) neutral, (ii) aggressive (open-mouthed threat), (iii) fearful (fear grin), and (iv) submissive (lip smack) (39).
Stimuli from each condition were presented in blocks of 40 s each. These block conditions included the four different emotional expression conditions listed above (equated for the eight identities), a fixation condition (gray background), and scrambled faces (mosaic scrambled and Fourier phase scrambled). Each stimulus was presented for 700 ms and was followed by a 300-ms mask period (Fig. 1B). Each image was presented five times per block, for a total of 40 images per block. The order of blocks was randomized across runs. Each stimulus was overlaid with a small (0.2°) central fixation point, which the monkeys were required to fixate to receive a liquid reward. In the reward schedule, the frequency of reward increased as the duration of fixation increased (e.g., refs. 21, 37, and 38).
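As a concrete illustration, the block structure above can be sketched in a few lines of Python. This is a hypothetical reconstruction for clarity only; the actual experiment used a custom Matlab/PsychToolbox program, and the condition names and `make_run` helper below are ours (the scrambled condition is treated like an image condition for simplicity):

```python
import random

# Timing and counts taken from the text: 700-ms stimulus + 300-ms mask,
# 8 identities x 5 repetitions = 40 images per 40-s block.
STIM_MS, MASK_MS = 700, 300
IMAGES_PER_BLOCK = 40
CONDITIONS = ["neutral", "threat", "fear_grin", "lip_smack",
              "fixation", "scrambled"]

def make_run(seed=0):
    """Return a randomized list of (condition, image_sequence) blocks."""
    rng = random.Random(seed)
    order = CONDITIONS[:]
    rng.shuffle(order)                 # block order randomized across runs
    blocks = []
    for cond in order:
        if cond == "fixation":
            blocks.append((cond, []))  # gray background, no images
            continue
        seq = [f"{cond}_id{i}" for i in range(8)] * 5  # 5 repeats per identity
        rng.shuffle(seq)
        blocks.append((cond, seq))
    return blocks

run = make_run()
block_dur_s = IMAGES_PER_BLOCK * (STIM_MS + MASK_MS) / 1000
assert block_dur_s == 40.0             # matches the 40-s blocks in the text
```

Note how the 40-s block duration falls directly out of the per-image timing: 40 images x (700 + 300) ms.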
We also mapped the location of face-selective regions in two animals (monkeys E and R) in a separate study. In those experiments, each block was devoted to one of four visual stimulus categories: neutral faces, body parts, objects, and places. Each block lasted 32 s, during which each of 16 images (22° × 22°) was presented for 2 s.
Scanning. Before each experiment, an exogenous contrast agent [monocrystalline iron oxide nanocolloid (MION)] was injected into the femoral vein (7–11 mg/kg) to increase the contrast/noise ratio and to optimize the localization of fMRI signals (37, 38). Imaging data were collected by using a 3-T Allegra scanner (Siemens) and a single-loop, send/receive surface coil. Functional data were obtained by using a multiecho gradient echo sequence, i.e., using two echoes with alternating phase-encoding direction: TR, 4 s; TE, 30/71 ms; flip angle, 90°; field of view (FOV), 96 mm; matrix, 64 × 64; voxel size, 1.5 mm isotropic; 28 coronal slices (no gap). For monkey E, the FOV was reduced to 80 mm to increase the spatial resolution (1.25 mm isotropic), but all other scanning parameters remained constant.
In separate scan sessions, high-resolution anatomical scans were obtained from each monkey under anesthesia (3D MPRAGE; TR, 2.5 s; TE, 4.35 ms; flip angle, 8°; matrix, 384 × 384; voxel size, 0.35 mm isotropic). These anatomical scans were used as an underlay for the functional data and to create anatomical ROIs. Inflated cortical surfaces were also created from these scans by using FreeSurfer (40, 41).
Data Analysis. Functional data were analyzed by using AFNI (42). Images were realigned to the first volume of the first session, for each subject, and spatially smoothed by using a 2-mm full-width half-maximum Gaussian kernel. Signal intensity was normalized to the mean signal value within each run. A total of 3,720, 3,360, and 3,720 volumes were collected and analyzed for monkeys E, R, and B, respectively, across two scan sessions per monkey. Data were analyzed by using a general linear model and a MION kernel to model the hemodynamic response function (35). The different facial expressions and the scrambled conditions were used as regressors of interest. Regressors of no interest included baseline, movement parameters from realignment corrections, and signal drifts (linear as well as quadratic). Note that all fMRI signals throughout the article have been inverted so that an increase in cerebral blood volume is represented by an increase in signal intensity. We identified brain regions that responded more strongly to neutral faces compared with scrambled images (face-responsive regions) or compared with objects (face-selective regions). Statistical maps were thresholded to at least P < 10^-9 (uncorrected) and overlaid onto anatomical scans and/or inflated cortical surfaces.
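The run-mean normalization and MION sign convention described above can be illustrated with a minimal Python sketch. This is our own toy example, not the AFNI pipeline, and the function name is hypothetical:

```python
import numpy as np

def normalize_and_invert(ts):
    """Normalize a voxel time series to its run mean (as percent signal)
    and invert it, so that a MION-induced signal decrease (more cerebral
    blood volume) is plotted as a positive response."""
    ts = np.asarray(ts, dtype=float)
    pct = 100.0 * (ts - ts.mean()) / ts.mean()  # % change about run mean
    return -pct                                 # invert MION polarity

raw = np.array([100, 100, 98, 98, 100, 100])    # toy series: signal dips
resp = normalize_and_invert(raw)
assert resp[2] > 0    # the dip in raw signal appears as a positive response
```

The inversion is purely a display convention; it leaves the statistics of the GLM fit unchanged.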
Analysis of Activity in Face-Responsive and Face-Selective Regions of Interest. One main goal was to compare the activity produced by different expressions within face-responsive and face-selective ROIs. We defined ROIs of 30 (±5) voxels each within both the face-responsive and face-selective regions. Within the face-selective regions, the ROIs corresponded to a spherical mask surrounding the peak activation. For the face-responsive regions, we chose regions medial to both face-selective regions (one anterior and one posterior) within the STS and generated an ROI of the same size. The percent signal change was extracted from these ROIs. Specifically, we calculated a response for each block and averaged these values over the 35 presentation blocks for each condition. We performed an ANOVA with three factors, testing for the effects of ROI selectivity (responsive/selective), ROI location (anterior/posterior), and expression (neutral, threat, fear grin, and lip smack), followed by multiple paired t tests and tests for interactions. Analogous tests were performed in the amygdala by using only a single ROI (the face-responsive region in the anatomically defined amygdala).
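In outline, the per-condition averaging step can be expressed as follows (a schematic Python sketch with made-up numbers; `condition_means` is our hypothetical helper, not part of the published analysis):

```python
def condition_means(block_values, block_labels):
    """Average the per-block percent signal change over all blocks of each
    condition, as done before the ROI selectivity x location x expression
    ANOVA described above."""
    out = {}
    for lab in set(block_labels):
        vals = [v for v, l in zip(block_values, block_labels) if l == lab]
        out[lab] = sum(vals) / len(vals)
    return out

# Toy data: one mean-ROI % signal change per block, labelled by condition.
vals = [1.2, 0.9, 2.0, 1.1, 1.8]
labs = ["neutral", "neutral", "fear_grin", "neutral", "fear_grin"]
means = condition_means(vals, labs)
assert round(means["fear_grin"], 2) == 1.9
```

Each ROI then contributes one value per condition to the repeated-measures ANOVA.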
Valence Effects Within Face-Responsive and Face-Selective Regions. To further examine the effect of valence across the face-responsive and face-selective regions, statistical maps were created for monkeys E and R, reflecting any difference in the fMRI signal between neutral versus threat, neutral versus fear grin, or neutral versus lip smack expressions. We then tested the relationship between valence effects and either face responsivity or face selectivity on a voxel-by-voxel basis using Pearson correlations. Valence effects corresponded to the sum of the absolute differences of each expression (threat, fear grin, or lip smack) relative to the neutral expression. Data from both animals were equated for the number of voxels sampled and grouped together for this analysis.
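The voxel-by-voxel comparison described above amounts to correlating two per-voxel maps. A minimal Python sketch follows, using randomly generated stand-in maps; all array names here are our own illustration, not the study's data:

```python
import numpy as np

def valence_map(expr_maps, neutral_map):
    """Sum of absolute differences of each expression map vs. neutral."""
    return sum(np.abs(m - neutral_map) for m in expr_maps)

def map_correlation(map_a, map_b):
    """Pearson r between two voxel-wise maps."""
    return np.corrcoef(map_a, map_b)[0, 1]

# Stand-in maps: one value per voxel (500 hypothetical voxels).
rng = np.random.default_rng(0)
neutral = rng.normal(size=500)
threat, grin, smack = (rng.normal(size=500) for _ in range(3))
selectivity = rng.normal(size=500)       # e.g. a faces > objects contrast

valence = valence_map([threat, grin, smack], neutral)
r = map_correlation(valence, selectivity)
assert -1.0 <= r <= 1.0
```

An r near zero over the pooled voxels, as reported in Fig. 4B, indicates that the two maps vary independently.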
ACKNOWLEDGMENTS. We thank Shruti Japee, Ziad Saad, Gang Chang, and Jennifer Becker for help with the analysis; Katalin Gothard for providing the original monkey facial expression images; and Helen Deng for her assistance with animal training. We also thank Byoung Wu Kim for normalizing and preparing the stimuli, Hans Breiter for discussions and support, and Wim Vanduffel for his contribution to imaging at Massachusetts General Hospital. This study was supported by National Institutes of Health Grants R01 MH67529 and R01 EY017081 (to R.B.H.T.), the Athinoula A. Martinos Center for Biomedical Imaging, the National Center for Research Resources, and the National Institute of Mental Health Intramural Research Program (F.H.-B., A.H.B., and L.G.U.).
1. Adolphs R, Tranel D, Damasio H, Damasio A (1994) Impaired recognition of emotion in
facial expressions following bilateral damage to the human amygdala. Nature
372:669 672.
2. Rosvold HE, Mirsky AF, Pribram KH (1954) Influence of amygdalectomy on social
behavior in monkeys. J Comp Physiol Psychol 47:173–178.
3. Aggleton JP, Passingham RE (1981) Syndrome produced by lesions of the amygdala in
monkeys (Macaca mulatta). J Comp Physiol Psychol 95:961–977.
4. Meunier M, Bachevalier J, Murray EA, Malkova L, Mishkin M (1999) Effects of aspiration
versus neurotoxic lesions of the amygdala on emotional responses in monkeys. Eur
J Neurosci 11:4403–4418.
5. Breiter HC, et al. (1996) Response and habituation of the human amygdala during
visual processing of facial expression. Neuron 17:875–887.
6. Dolan RJ, et al. (1996) Neural activation during covert processing of positive emotional
facial expressions. NeuroImage 4:194–200.
7. Pessoa L, McKenna M, Gutierrez E, Ungerleider LG (2002) Neural processing of emotional faces requires attention. Proc Natl Acad Sci USA 99:11458–11463.
8. Surguladze SA, et al. (2003) A preferential increase in the extrastriate response to
signals of danger. NeuroImage 19:1317–1328.
9. Ishai A, Schmidt CF, Boesiger P (2005) Face perception is mediated by a distributed
cortical network. Brain Res Bull 67:87–93.
10. Blair RJ (2003) Facial expressions, their communicatory functions and neuro-cognitive
substrates. Philos Trans R Soc London B 358:561–572.
11. Vuilleumier P, Pourtois G (2007) Distributed and interactive brain mechanisms during
emotion face perception: Evidence from functional neuroimaging. Neuropsychologia
45:174–194.
12. Rolls ET (1984) Neurons in the cortex of the temporal lobe and in the amygdala of the
monkey with responses selective for faces. Hum Neurobiol 3:209–222.
13. Brothers L, Ring B, Kling A (1990) Response of neurons in the macaque amygdala to
complex social stimuli. Behav Brain Res 41:199–213.
14. Kuraoka K, Nakamura K (2006) Impacts of facial identity and type of emotion on
responses of amygdala neurons. NeuroReport 17:9–12.
15. Gothard KM, Battaglia FP, Erickson CA, Spitler KM, Amaral DG (2007) Neural responses to
facial expression and face identity in the monkey amygdala. J Neurophysiol 97:1671–1683.
16. Perrett DI, et al. (1984) Neurones responsive to faces in the temporal cortex: Studies of
functional organization, sensitivity to identity and relation to perception. Hum Neurobiol 3:197–208.
17. Hasselmo ME, Rolls ET, Baylis GC (1989) The role of expression and identity in the
face-selective responses of neurons in the temporal visual cortex of the monkey. Behav
Brain Res 32:203–218.
18. Sugase Y, Yamane S, Ueno S, Kawano K (1999) Global and fine information coded by
single neurons in the temporal visual cortex. Nature 400:869–873.
19. Hoffman KL, Gothard KM, Schmid MC, Logothetis NK (2007) Facial-expression and
gaze-selective responses in the monkey amygdala. Curr Biol 17:766–772.
20. Tsao DY, Freiwald WA, Tootell RB, Livingstone MS (2006) A cortical region consisting
entirely of face-selective cells. Science 311:670–674.
21. Tsao DY, Freiwald WA, Knutsen TA, Mandeville JB, Tootell RB (2003) Faces and objects
in macaque cerebral cortex. Nat Neurosci 6:989–995.
22. Pinsk MA, DeSimone K, Moore T, Gross CG, Kastner S (2005) Representations of faces
and body parts in macaque temporal cortex: A functional MRI study. Proc Natl Acad Sci
USA 102:6996–7001.
23. Kanwisher N, McDermott J, Chun MM (1997) The fusiform face area: A module in
human extrastriate cortex specialized for face perception. J Neurosci 17:4302–4311.
24. Malach R, et al. (1995) Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proc Natl Acad Sci USA 92:8135–8139.
25. Tootell RB, Tsao D, Vanduffel W (2003) Neuroimaging weighs in: Humans meet
macaques in ‘‘primate’’ visual cortex. J Neurosci 23:3981–3989.
26. Orban GA, Van Essen D, Vanduffel W (2004) Comparative mapping of higher visual
areas in monkeys and humans. Trends Cognit Sci 8:315–324.
27. Boussaoud D, Desimone R, Ungerleider LG (1991) Visual topography of area TEO in the
macaque. J Comp Neurol 306:554–575.
28. Webster MJ, Ungerleider LG, Bachevalier J (1991) Connections of inferior temporal
areas TE and TEO with medial temporal-lobe structures in infant and adult monkeys.
J Neurosci 11:1095–1116.
29. Amaral DG, Price JL, Pitkänen A, Carmichael ST (1992) Anatomical organization of
the primate amygdaloid complex. The Amygdala: Neurobiological Aspects of
Emotion, Memory, and Mental Dysfunction, ed Aggleton JP (Wiley-Liss, New York),
pp 1–66.
30. Pessoa L, Ungerleider LG (2004) Neuroimaging studies of attention and the processing
of emotion-laden stimuli. Prog Brain Res 144:171–182.
31. Vuilleumier P, Richardson MP, Armony JL, Driver J, Dolan RJ (2004) Distant influences
of amygdala lesion on visual cortical activation during emotional face processing. Nat
Neurosci 7:1271–1278.
32. Chevalier-Skolnikoff S (1973) Facial expression of emotion in nonhuman primates.
Darwin and Facial Expression, ed Ekman P (Academic, New York), pp 11–90.
33. Maxim PE (1982) Contexts and messages in macaque social communication. Am J
Primatol 2:63–85.
34. Maestripieri D, Wallen K (1997) Affiliative and submissive communications in Rhesus
macaques. Primates 38:127–138.
35. Davis M, Whalen PJ (2001) The amygdala: Vigilance and emotion. Mol Psychiatry
6:13–34.
36. Whalen PJ (2007) The uncertainty of it all. Trends Cognit Sci 11:499–500.
37. Vanduffel W, et al. (2001) Visual motion processing investigated using contrast agent-
enhanced fMRI in awake behaving monkeys. Neuron 32:565–577.
38. Leite FP, et al. (2002) Repeated fMRI using iron oxide contrast agent in awake,
behaving macaques at 3 Tesla. NeuroImage 16:283–294.
39. Gothard KM, Erickson CA, Amaral DG (2004) How do rhesus monkeys (Macaca mulatta)
scan faces in a visual paired comparison task? Anim Cognit 7:25–36.
40. Dale AM, Fischl B, Sereno MI (1999) Cortical surface-based analysis. I. Segmentation and
surface reconstruction. NeuroImage 9:179–194.
41. Fischl B, Sereno MI, Dale AM (1999) Cortical surface-based analysis. II. Inflation,
flattening, and a surface-based coordinate system. NeuroImage 9:195–207.
42. Cox RW (1996) AFNI: Software for analysis and visualization of functional magnetic
resonance neuroimages. Comput Biomed Res 29:162–173.