© 2014 Nature America, Inc. All rights reserved.
ARTICLES
Even when we observe exactly the same object, subjective experience
of that object often varies considerably among individuals, allow-
ing us to form unique impressions of the sensory world around us.
Wilhelm Wundt appropriately referred to these aspects of perception
that are inherently the most subjective as ‘affect’1: the way sensory
events affect us. Beyond basic sensory processing and object recogni-
tion, Wundt argued, the most pervasive aspect of human experience
is this internal affective coloring of external sensory events. Despite
its prominence in human experience, little is known about how the
brain represents the affective coloring of perceptual experience com-
pared with the rich neural characterizations of our other perceptual
representations, such as the somatosensory system2, semantics3, and
visual features and categories4–6.
Much of what we glean from external objects is not directly available
on the sensory surface. The brain may transform sensory representa-
tions into higher order object representations (for example, animate,
edible, dangerous, etc.)7. The traditional approach to understanding
how the brain represents these abstractions has been to investigate
the magnitude of activity of specialized neurons or brain regions8,9.
An alternative approach treats neural populations in a region of
cortex as supporting dimensions of higher order object representa-
tions according to their similarity in an abstract feature space6,10.
Measuring these patterns of neuronal activity has been made possible
by advances in multivoxel pattern analysis (MVPA)11,12. For example,
MVPA of human blood oxygen level–dependent (BOLD) response to
visual objects has revealed that low-level feature dimensions can be
decoded on the basis of topographic structure in the early visual cor-
tices12–14 (but see ref. 15). In addition, high-level object dimensions,
such as object categories4 or animacy16, have been revealed in the
distributed population codes of the ventral temporal cortex (VTC).
Although pattern classifier decoding4,17 is sensitive to informa-
tion encoded combinatorially in fine-grained patterns of activity, it
typically focuses on binary distinctions to indicate whether a region
contains information about stimulus type (for example, face ver-
sus chair). By contrast, representational mapping further affords an
examination of the space in which information is represented in a
region (for example, how specific faces are related to each other)10.
By characterizing the representational geometry of regional activity
patterns, representational mapping reveals not only where and what,
but also how information is represented. Representational mapping
emphasizes the relationships between stimulus or experiential prop-
erties and their distances in high-dimensional space defined by the
collective patterns of voxel activity6,10. For example, although population
activity in the primary visual cortex can discriminate distinct colors, the
representational geometry in extrastriate region V4 captures the distances
between colors as they relate to perceptual experience18.
We asked how external events come to be represented as internal
subjective affect compared with other lower level physical and higher
level categorical properties. Supporting Wundt’s assertion of affect as
central to perceptual experience, surveys across dozens of cultures19
have shown the primary dimension capturing the characterization of
the world’s varied contents is the evaluation of their goodness-badness,
which is often referred to as valence20. We examined whether
collective patterns of activity in the human brain support a continuous
dimension of positive-to-negative valence, and where in the neural
hierarchy this dimension is represented. Similarity-dissimilarity in
subjective valence experience would then correspond to population
level activity across stimuli, with representational geometry of activ-
ity patterns indicating that extreme positive and negative valence are
furthest apart.
It has been traditionally thought that affect is not only represented
separately from the perceptual cortices, which represent the sensory
and perceptual properties of objects21, but also in distinct affective
zones for positive and negative valence22–24. Lesion and neuroimaging
1Human Neuroscience Institute, Department of Human Development, College of Human Ecology, Cornell University, Ithaca, New York, USA. 2Department of
Psychology, University of Toronto, Toronto, Ontario, Canada. 3Medical Research Council, Cognition and Brain Sciences Unit, Cambridge, UK. Correspondence should
be addressed to J.C. or A.K.A.
Received 19 January; accepted 23 May; published online 22 June 2014; doi:10.1038/nn.3749
Population coding of affect across stimuli, modalities
and individuals
Junichi Chikazoe1, Daniel H Lee2, Nikolaus Kriegeskorte3 & Adam K Anderson1,2
It remains unclear how the brain represents external objective sensory events alongside our internal subjective impressions of
them—affect. Representational mapping of population activity evoked by complex scenes and basic tastes in humans
revealed a neural code supporting a continuous axis of pleasant-to-unpleasant valence. This valence code was distinct from
low-level physical and high-level object properties. Although ventral temporal and anterior insular cortices supported valence
codes specific to vision and taste, both the medial and lateral orbitofrontal cortices (OFC) maintained a valence code independent
of sensory origin. Furthermore, only the OFC code could classify experienced affect across participants. The entire valence
spectrum was represented as a collective pattern in regional neural activity as sensory-specific and abstract codes, whereby the
subjective quality of affect can be objectively quantified across stimuli, modalities and people.
studies of affective processes suggest a central role for the OFC25, with
evidence pointing to the lateral and medial OFC regions for affective
representations of visual26, gustatory24 and olfactory stimuli22,27,28.
Increasing activity in the medial OFC and adjacent ventromedial pre-
frontal cortices have been associated with goal value, reward expect-
ancy, outcome values and experienced positive valence23,29, and may
support ‘core affect’30. However, meta-analyses of neuroimaging stud-
ies have uncovered substantial regional overlap between positive and
negative valence31. In conjunction with evidence against distinct basic
emotions30, localizing distinct varieties of emotional experience has
been a great challenge31. Potentially undermining simple regional
distinctions in valence, recent monkey electrophysiological studies32
have reported that valence-coding neurons with different proper-
ties (that is, neurons coding positivity, negativity, and both positivity
and negativity) are anatomically interspersed in Walker’s area 13, the
homolog of human OFC. Thus, exploring distinct regions that code
for positive and negative valence may not be fruitful, as the inter-
spersed structure of the aversive and appetitive neurons, at the scale
of cortical regions, respond equivocally and confound traditional
univariate functional magnetic resonance imaging (fMRI) analysis
methods. With a voxel-level neuronal bias, multivoxel patterns can
reveal whether the representational geometry of valence is captured
by distance in high-dimensional neural space.
The notion that affect is largely represented outside the perceptual
cortices21–27 has also not been tested with rigor. Average regional
neural activity may miss the information contained in population
level response in the perceptual cortices themselves. Rather than
depending on distinct representations, affect may be manifest in the
same regions that support sensory and object processing. Although
posterior cortical regions are often modulated by affect, it remains
unclear whether valence is coded in the perceptual cortices or whether
perceptual representations are merely amplified by it33. Examining the
representational geometry of population codes can address whether
affect overlaps with other modality-specific stimulus representations
that support basic visual features or object category membership.
If population codes reveal that valence is embodied in modality-specific
neuronal activity33, then this would provide direct support for
Wundt’s observations that affect is a dimension central to perceptual
experience. A modality-specific conception of affective experience may suggest
that affect is not commonly coded across events originating from
distinct modalities. This would allow valence from distinct stimuli
and modalities to be objectively quantified and then compared. It
is presently unknown whether the displeasure evoked by the sight
of a rotting carcass and the taste of spoiled wine are at some level
supported by a common neural code. Although fMRI studies have
shown overlapping neural responses in the OFC related to distinct
modalities34, overlapping average activity is not necessarily diagnostic
of engagement of the same representations. Fine-grained patterns
of activity within these regions may be distinct, indicating that the
underlying representations are modality specific, although appearing
colocalized given the spatial limits of fMRI. This leaves unanswered
whether there is a common neural affect code across stimuli origi-
nating from distinct modalities, whether evoked by distal photons
or proximal molecules. If valence is represented supramodally, then,
at an even more abstract level, we may ask whether the representa-
tion of affect demonstrates correspondence across people, affording a
common reference frame across brains. This would provide evidence
that even the most subjective aspect of an individual’s experience, its
internal affective coloring, can be predicted on the basis of the pat-
terns observed in other brains, similar to what has been found for
external object properties35,36.
To answer these questions of how affect is represented in the human
brain, we first examined whether population vectors in response to
complex visual scenes support a continuous representation of experi-
enced valence and their relation to objective low-level visual proper-
ties (for example, luminance, visual salience) and higher order object
properties (animacy) in the early visual cortex (EVC), VTC and OFC.
We then examined whether the representation of affect to complex
visual scenes was shared with basic gustatory stimuli, supporting a
valence coordinate space common to objects stimulating the eye or
the tongue. Lastly, we examined whether an individual’s subjective
affect code corresponds to that observed in the brains of others.
Visual feature, object and affect representations of complex
visual scenes
To investigate how object information and affective experience of
complex visual stimuli are represented in the human brain, we pre-
sented 128 unique visual scenes to 16 participants during fMRI. After
each picture presentation (3 s), participants rated their subjective
affect on separate positivity and negativity scales (1–7, least to most).
We examined the similarity of activation patterns as related to three
distinct properties, each increasing in degree of abstraction: low-
level visual features, object animacy and subjective affect (Online
Methods and Supplementary Fig. 1). Consistent with their substantial
independence, visual feature, animacy and valence scores were
largely uncorrelated, sharing 0.2% (visual features and animacy, all
R2 ≤ 0.002), 1.0% (visual features and valence, all R2 ≤ 0.022) and 2.3%
(animacy and valence, all R2 ≤ 0.057) of variance (n = 128 trials). This
orthogonality allowed us to examine whether distinct or similar codes
support visual feature, object and affect representations.
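The shared-variance figures above are simply squared Pearson correlations between per-trial property scores. A minimal sketch of the check, using random stand-in scores rather than the study's data:

```python
# Sketch: verifying that per-trial property scores are largely uncorrelated.
# The scores below are random stand-ins, not the study's measurements.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 128

visual = rng.normal(size=n_trials)    # hypothetical low-level visual-feature score
animacy = rng.normal(size=n_trials)   # hypothetical animacy score
valence = rng.normal(size=n_trials)   # hypothetical valence rating

def shared_variance(x, y):
    """Squared Pearson correlation: fraction of variance shared by two score vectors."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

for a, b, name in [(visual, animacy, "visual-animacy"),
                   (visual, valence, "visual-valence"),
                   (animacy, valence, "animacy-valence")]:
    print(f"{name}: R^2 = {shared_variance(a, b):.3f}")
```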
Prior to multivariate analyses, we conducted a univariate parametric
modulation analysis to test the monkey electrophysiological findings
Figure 1 Parametric modulation analysis
(univariate) for independent ratings of
positive and negative valence. (a) Activation
map of sensitivity to positive valence, negative
valence and both. Yellow indicates voxels
sensitive to positive valence (P < 0.001 for
positive, P > 0.05 for negative), blue indicates
voxels sensitive to negative valence (P < 0.001
for negative, P > 0.05 for positive) and green
indicates the conjunction of positive and negative
valence (P < 0.031 for positive, P < 0.031 for negative). (b) Mean activity in vmPFC and medial OFC increased along with increases in both positive
and negative valence scores. Yellow lines indicate signals of the peak voxel (x = −8, y = 42, z = −12, t15 = 8.7, P = 0.0000003, FDR 0.05) maximally
sensitive to positive valence. Blue lines indicate signals of the peak voxel (x = −8, y = 52, z = −8, t15 = 6.8, P = 0.000006, FDR 0.05) maximally
sensitive to negative valence. Dotted lines indicate signal for opposite valence (that is, negative valence in the peak positive voxel, and positive valence
in the peak negative voxel). n = 16 participants. Error bars represent s.e.m.
of bivalent neuronal coding32. When using positivity and negativity
ratings as independent parameters, a large majority of the valence-
sensitive regions in the medial and lateral OFC were responsive to
both positivity and negativity (75.7%; positivity only, 17.1%; negativity
only, 7.2%; Fig. 1a). Specifically, we found that the medial OFC and
more dorsal regions in the ventromedial prefrontal cortex (vmPFC),
areas typically associated with value coding and positive valence23,29,
exhibited parallel linear increases in activation with increasing rat-
ings of both negative and positive valence (Fig. 1b). The peak voxel
(x = −8, y = 42, z = −12, t15 = 8.7, false discovery rate (FDR) 0.05)
activity, which was maximally sensitive to positive valence, also lin-
early increased with negative valence, whereas the peak voxel (x = −8,
y = 52, z = −8, t15 = 6.8, FDR 0.05) activity, which was maximally
sensitive to negative valence, also linearly increased with positive
valence. These responses may reflect a common underlying arousal
coding and therefore contain little diagnostic information about
experienced valence. Alternatively, this univariate activity may reflect
coding of both positive and negative valence37, which is equivocal
at voxel signal resolution, consistent with the interspersed structure
of the aversive and appetitive neurons, as obser ved in the monkey
OFC32. Representational mapping of population level responses10 may
address this ambiguity. Given that positive and negative valence were
experientially distant (average r = −0.53), if the brain
supports a valence code, then increasing dissimilarity in valence expe-
rience would be supported by increasing dissimilarity in population
activity patterns, despite their similarity in univariate magnitude.
We used representational similarity analysis6,10, a method for
uncovering representational properties underlying multivariate data.
We first modeled each trial as a separate event, and then examined
multivoxel brain activation patterns in broad, anatomically defined
regions of interest (ROIs) across different levels of the neural hierarchy,
including EVC, VTC and OFC (Fig. 2a). To assess how informa-
tion was represented in each region, we constructed representational
similarity matrices6 from the correlation coefficients of the activation
patterns between trials for all picture combinations (128 × 127 / 2)
separately in the EVC, VTC and OFC. To the degree that activity pat-
terns corresponded to a specific property, relations among pictures pro-
vide a mapping of the representational contents of each region. These
similarity matrices were submitted to multidimensional scaling (MDS)
for visualization. This assessment revealed response patterns organized
maximally by gradations of low-level visual features in the EVC (r = 0.38,
P = 0.00001), object animacy in the VTC (r = 0.73, P = 7.7 × 10−23) and
valence in the OFC (r = 0.46, P = 0.00000005) (n = 128 trials; Fig. 2b and
Supplementary Fig. 2), with all MDS analyses reaching fair levels of fit
(Stress-I; EVC, 0.2043; VTC, 0.2230; OFC, 0.2912; stress values denote
how well the MDS fits the measured distances).
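The first step of this analysis, a trial-by-trial representational similarity matrix built from single-trial activation patterns, can be sketched as follows (simulated patterns; the sizes and variable names are illustrative assumptions, not the study's code):

```python
# Sketch: a trial-by-trial representational similarity matrix (RSM)
# from simulated single-trial multivoxel patterns.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_voxels = 128, 500          # illustrative sizes

# Simulated single-trial activation patterns (trials x voxels).
patterns = rng.normal(size=(n_trials, n_voxels))

# Normalize each voxel by subtracting its mean across trials,
# as described for the pattern analyses in the text.
patterns -= patterns.mean(axis=0, keepdims=True)

# Trial-by-trial RSM: Pearson correlation of each trial pair across voxels.
rsm = np.corrcoef(patterns)            # shape (128, 128)

# Unique trial pairs: 128 x 127 / 2 off-diagonal correlations.
iu = np.triu_indices(n_trials, k=1)
pairwise_r = rsm[iu]
print(pairwise_r.shape)                # (8128,)
```

The resulting distance matrix (1 − r per pair) is what a multidimensional scaling routine would take as input for visualization.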
Property-region associations were further examined by converting
each region’s representational similarity matrix, which relates trial rep-
resentations, into property representational similarity matrices, which
relate property representations of visual feature, animacy or valence.
For example, a valence representational similarity matrix was created by
sorting the trial-based representational similarity matrix (128 × 127 /2)
into 13 × 13 valence bins (Fig. 3 and Supplementary Figs. 3 and 4).
As such, representational similarity matrices were sorted according to
distinct stimulus properties, allowing us to visualize a representation
map of each region according to each property. As presented in an ideal
representational similarity matrix (Fig. 3a), if activity patterns across
pictures correspond to a property, then we would expect higher cor-
relations along the main diagonal. Higher correlations were observed
along the main diagonal for visual features in the EVC, animacy in the
VTC and valence in the OFC (Fig. 3b and Supplementary Fig. 5).
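Sorting the pairwise correlations into a 13 × 13 property RSM can be sketched like this (rank-transformed correlations averaged per valence-bin cell; all inputs are simulated and the binning details are assumptions, not the study's exact procedure):

```python
# Sketch: collapsing a 128 x 128 trial RSM into a 13 x 13 valence RSM
# by averaging rank-transformed correlations within valence-bin cells.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_bins = 128, 13

# Hypothetical per-trial valence bin (1 = most negative ... 13 = most positive).
valence_bin = rng.integers(1, n_bins + 1, size=n_trials)

# Hypothetical pairwise pattern correlations (128 x 127 / 2 trial pairs).
rsm = np.corrcoef(rng.normal(size=(n_trials, 200)))
iu = np.triu_indices(n_trials, k=1)
r = rsm[iu]

# Rank-transform correlations within the participant (percentile of r).
ranks = np.argsort(np.argsort(r))
percentile = 100.0 * ranks / (len(r) - 1)

# Average the rank percentiles within each (valence_i, valence_j) cell.
sums = np.zeros((n_bins, n_bins))
counts = np.zeros((n_bins, n_bins))
for (i, j), p in zip(zip(*iu), percentile):
    a, b = valence_bin[i] - 1, valence_bin[j] - 1
    # Fill symmetrically so the matrix reads like the 13 x 13 RSM in Fig. 3.
    for x, y in {(a, b), (b, a)}:
        sums[x, y] += p
        counts[x, y] += 1
valence_rsm = np.divide(sums, counts,
                        out=np.full((n_bins, n_bins), np.nan),
                        where=counts > 0)
print(valence_rsm.shape)   # (13, 13)
```

A valence code would then appear as higher averaged percentiles near the main diagonal of `valence_rsm`.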
To statistically test the validity of these representational maps, we used
a general linear model (GLM) decomposition of the representational
similarity matrices (Online Methods and Supplementary Fig. 3) to
derive a distance correspondence index (DCI): a measure of how well
distance (dissimilarity) in neural activation pattern space corresponds
to distance in the distinct property spaces. The DCI for visual feature,
animacy and valence from each region were computed for each partici-
pant and submitted to one-sample t tests. This revealed representation
maps of distinct kinds of property distance, increasing in abstrac-
tion from physical features to object categories to subjective affect
along a posterior-to-anterior neural axis (Fig. 3b,c, Supplementary
Fig. 5 and Supplementary Table 1). Valence distance was maximally
coded in the OFC (t15 = 7.6, P = 0.000008), to a lesser degree in the
VTC (t15 = 5.0, P = 0.0008) and not reliably in the EVC (t15 = 2.5,
P = 0.11) (all Bonferroni corrected). A two-way repeated-measures
ANOVA revealed a highly significant interaction (F(1.9, 28.8) = 69.5,
P = 4.5 × 10−9, Greenhouse-Geisser correction applied) between
regions (EVC, VTC, OFC) and property type (visual feature, animacy,
valence). These results indicate that visual scenes differing in objec-
tive visual features and object animacy, but evoking similar subjective
affect, resulted in similar representation in the OFC. In these activ-
ity pattern analyses, mean activity in a region was removed and the
activity magnitude of each voxel was normalized by subtracting mean
values across trials; thus, mean activity could not account for the
observed representation mapping. However, to further test whether
valence representations were driven by regions that only differed in
activity magnitude, we reran these analyses in each ROI after removing
voxels showing significant main effects of valence in activity under a
liberal threshold (P < 0.05, uncorrected). Even after this removal, very
similar results were obtained (Supplementary Table 1).
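The DCI computation described above amounts to a linear regression of rank-transformed pairwise correlations on pairwise property distances. A sketch with simulated data follows; the sign convention (a positive DCI meaning that neural distance tracks property distance) is an assumption for illustration:

```python
# Sketch: GLM decomposition of correlation ranks into distance
# correspondence indices (DCIs), one per property.
import numpy as np

rng = np.random.default_rng(3)
n_trials = 128

# Hypothetical per-trial property scores.
props = {"visual": rng.normal(size=n_trials),
         "animacy": rng.normal(size=n_trials),
         "valence": rng.normal(size=n_trials)}

# Simulated pairwise pattern correlations that track valence similarity,
# so the valence predictor should dominate the fit.
iu = np.triu_indices(n_trials, k=1)
val_dist = np.abs(props["valence"][iu[0]] - props["valence"][iu[1]])
r = -0.5 * val_dist + rng.normal(scale=0.1, size=val_dist.size)

# Rank-transform the correlations (percentile of r).
percentile = 100.0 * np.argsort(np.argsort(r)) / (r.size - 1)

# GLM: correlation ranks predicted by property distances; a more negative
# slope means greater pattern dissimilarity with greater property distance.
X = np.column_stack([np.abs(p[iu[0]] - p[iu[1]]) for p in props.values()])
X = np.column_stack([np.ones(r.size), X])        # add intercept
beta, *_ = np.linalg.lstsq(X, percentile, rcond=None)

# Sign-flip the slopes so a positive DCI indicates correspondence
# between neural distance and property distance (assumed convention).
dci = -beta[1:]
print({k: round(float(v), 2) for k, v in zip(props, dci)})
```

In this simulation the valence DCI comes out large and positive while the visual and animacy DCIs hover near zero, mirroring the OFC result reported in the text.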
These results suggest that distributed activation patterns across
broad swaths of cortex can support affect coding distinct from
other object properties. To investigate whether distinct or over-
lapping subregions in the EVC, VTC and OFC support visual
Figure 2 Representational geometry of multi-voxel activity patterns in
EVC, VTC and OFC. (a) ROIs were determined on the basis of anatomical
gray matter masks. (b) The 128 visual scene stimuli were arranged using
MDS such that pairwise distances reflected neural response-pattern
similarity. Color code indicates feature magnitude scores for low-level
visual features in EVC (top), animacy in VTC (middle) and subjective
valence in OFC (bottom) for the same stimuli. Examples i, ii, iii, iv and
v traverse the primary dimension in each feature space, with pictures
illustrating visual features (for example, luminance) (top), animacy
(middle) and valence (bottom).
feature, object and affect codes, as well as the contribution of
other brain regions, we conducted a cubic searchlight analysis38.
In a given cube, correlations across trials were calculated and were
subjected to the same GLM decomposition used above to compute
DCIs (Supplementary Fig. 3). This revealed a similar posterior-to-
anterior pattern of increasing affective subjectivity and abstraction
from physical features. Visual features were maximally represented
primarily in the EVC and moderately in the VTC, object animacy
was maximally represented in the VTC, and affect was maximally
represented in the vmPFC, including the medial OFC, and the lateral
OFC, and moderately represented in ventral and anterior temporal
regions including the temporal pole (Fig. 4a,b, Supplementary Fig. 6
and Supplementary Tables 2 and 3). These results indicate that object
and affect representations are not only represented as distributed acti-
vation patterns across large areas of cortex, but are also represented as
distinct region-specific population codes (that is, in a 1-cm³ cube).
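A cubic searchlight of this kind can be sketched as follows (toy data grid; the cube size and the per-cube statistic are illustrative assumptions, not the study's exact procedure):

```python
# Sketch: a cubic searchlight sweeping a voxel grid, computing a
# pattern statistic within each cube.
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 4D dataset: trials x (x, y, z) voxel grid.
n_trials, nx, ny, nz = 20, 8, 8, 8
data = rng.normal(size=(n_trials, nx, ny, nz))

def searchlight_cubes(data, radius=2):
    """Yield (center, trials x voxels matrix) for each cubic searchlight.

    A radius of 2 voxels gives a 5 x 5 x 5 cube; with ~2-mm voxels this is
    on the order of the 1-cm^3 cubes described in the text (sizes assumed).
    """
    n, nx, ny, nz = data.shape
    for cx in range(radius, nx - radius):
        for cy in range(radius, ny - radius):
            for cz in range(radius, nz - radius):
                cube = data[:,
                            cx - radius:cx + radius + 1,
                            cy - radius:cy + radius + 1,
                            cz - radius:cz + radius + 1]
                yield (cx, cy, cz), cube.reshape(n, -1)

# Example statistic per cube: mean pairwise pattern correlation.
# (The study instead applied the GLM decomposition to each cube.)
results = {}
for center, patterns in searchlight_cubes(data):
    rsm = np.corrcoef(patterns)
    iu = np.triu_indices(len(patterns), k=1)
    results[center] = rsm[iu].mean()
print(len(results))   # 64 searchlight centers in this 8 x 8 x 8 toy grid
```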
To further examine what pattern-based affect coding uniquely
codes, we tested whether differences in mean activity magnitude
across trials could code valence information in the regions defined
by the above searchlight (that is, the medial OFC and vmPFC and
lateral OFC). To test whether mean activity magnitude is capable of
discrimination of valence representations, we applied the same GLM
decomposing procedure to mean activity magnitude instead of activa-
tion patterns. Here, similarity-dissimilarity of neural activation was
defined by difference in mean activity magnitude in the region. The
medial OFC and vmPFC showed a linear increase in activation, with
increases in both positive and negative valence from neutral (Fig. 4c).
The mean-based GLM decomposition analysis revealed a lack of
valence specificity in mean magnitude (t15 = 1.1, P = 0.13; Fig. 4d),
whereas a pattern-based approach showed a clear separation of valence,
with positive and negative valence lying on opposite ends of a con-
tinuum (t15 = 4.2, P = 0.0004; Fig. 4e). By contrast, the lateral OFC did
not demonstrate a relationship between mean activation and positive
or negative valence (Fig. 4f), confirmed by a mean activity–based
GLM decomposition analysis (t15 = 0.5, P = 0.30; Fig. 4g), whereas
a pattern-based approach still yielded a clear separation of valence
(t15 = 3.9, P = 0.0007; Fig. 4h). These results not only explain why
pattern analysis is required for representational mapping of affect,
but also further indicate importance of discriminating arousal and
valence coding39. That is, even when regional univariate activity
showed similar responses to both positive and negative valence, it
may not be diagnostic of arousal coding, but rather may reveal coding
of both positive and negative valence (Fig. 4d,e,g,h).
Common and distinct representations of affect in vision and taste
To test whether valence codes are modality specific or also support
an abstract, nonvisual representation of affect, we conducted a gusta-
tory experiment on the same participants. Affective appraisals40 of
complex scenes may also require deeper and more extensive process-
ing that is not required by simpler, chemical sensory stimuli such as
taste. As such, it is important to establish the generality of the affect
coding in response to visual scenes. In this experiment, four differ-
ent taste solutions matched for each participant in terms of inten-
sity (sour, sweet, bitter, salty) and a tasteless solution were delivered
20 times each across 100 trials during fMRI. Paralleling our analy-
sis of responses to scenes, representational similarity matrices were
constructed from correlations of activation patterns across trials
(100 × 99/2) for each region. To visualize the representation maps from
the gustatory experiment, we created a valence representational simi-
larity matrix of the OFC, which revealed higher correlations across taste
experiences of similar valence—along the main diagonal (Fig. 5a and
Supplementary Table 1). Valence DCIs in the OFC were computed for
each participant and submitted to one-sample t tests, which revealed a
significant relation between activation pattern similarity and valence
distance (t15 = 2.9, P = 0.018). Valence DCIs in the VTC also achieved
significance (t15 = 3.3, P = 0.007), but not the EVC (t15 = 1.5, P = 0.23).
Figure 3 Population coding of visual, object
and affect properties of visual scenes.
(a) Correlations of activation patterns across
trials were rank-ordered within a participant. In
the ideal representation similarity matrix (RSM),
trials with similar features (for example, matching
valence) demonstrate higher correlations along
the diagonal than those with dissimilar features
on the off-diagonal. (b) After regressing out
other properties and effects of no interest,
residual correlations were sorted on the basis of
visual features (13 × 13), animacy (13 × 13) or
valence (13 × 13) properties, and then separately
examined in the EVC, VTC and OFC. Correlation
ranks were averaged for each cell, providing visual
(13 × 13), animacy (13 × 13) and valence RSMs
(13 × 13). Higher correlations were observed
along the main diagonal in the visual RSM in the
EVC, animacy RSM in the VTC and valence RSM
in the OFC. (c) Correlation ranks in the EVC, VTC
and OFC were subject to GLM with differences in
visual (left), animacy (middle) and valence (right)
features as linear predictors. GLM coefficients
(DCI) represent to what extent correlations were
predicted by the property types. For visual-
features DCI, we used t test (EVC: t15 = 6.7, P = 0.00003; VTC: t15 = 8.5, P = 0.000002; OFC: t15 = 0.8, P = 1) and paired t test (EVC versus VTC: t15 = 0.8,
P = 1; EVC versus OFC: t15 = 4.2, P = 0.008; VTC versus OFC: t15 = 4.4, P = 0.005). For animacy DCI, we used t test (EVC: t15 = 3.6, P = 0.01; VTC: t15 = 10.3,
P = 1.5 × 10−7; OFC: t15 = 3.9, P = 0.006) and paired t test (EVC versus VTC: t15 = −9.0, P = 1.7 × 10−6; EVC versus OFC: t15 = −1.0, P = 1; VTC versus OFC:
t15 = 11.3, P = 9.2 × 10−8). For valence DCI, we used t test (EVC: t15 = 2.5, P = 0.11; VTC: t15 = 5.0, P = 0.0008; OFC: t15 = 7.6, P = 7.7 × 10−6) and
paired t test (EVC versus VTC: t15 = 1.8, P = 0.81; EVC versus OFC: t15 = −4.2, P = 0.007; VTC versus OFC: t15 = −4.8, P = 0.002). t tests in a region were
one-tailed and paired t tests were two-tailed. n = 16 participants. Error bars represent s.e.m. ***P < 0.001, **P < 0.01, *P < 0.05, Bonferroni corrected.
Thus, in addition to the OFC, the VTC represents affect information
even when evoked by taste. Similar results were found when excluding
regions that demonstrated a significant change (P < 0.05, uncorrected)
in mean activity to taste valence (Supplementary Table 1).
Evidence of affective coding for pictures and tastes is not singularly
diagnostic of an underlying common valence population code, as each
sensory modality coding may be independently represented in the
same regions. To directly examine a cross-modal commonality, we
examined the representations of valence in the OFC on the basis of
trials across visual and gustatory experiments. We first computed new
cross-modal representational similarity matrices correlating activa-
tion patterns across the 128 visual × 100 taste trials. Then, to visualize
the cross-modal representation map, we created a valence represen-
tational similarity matrix of the OFC, which revealed higher correla-
tions across visual and taste experiences of similar valence, along the
main diagonal (Fig. 5b and Supplementary Table 1). DCI revealed
increasing similarity of OFC activation patterns between visual and
gustatory trials, as affect was more similar (t15 = 3.0, P = 0.013; Fig. 5b
and Supplementary Table 1). Notably, the same analysis revealed
no such relation in the VTC (t15 = 0.5, P = 0.99). That is, although
we found modality-specific valence coding in the VTC (Figs. 3b,c
and 4a,b and Supplementary Table 1), we found modality-independ-
ent valence coding only in the OFC. Similar results were obtained
when excluding regions that demonstrated a significant change
(P < 0.05, uncorrected) in mean activity to taste or visual valence
(Supplementary Table 1).
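As a concrete illustration, the cross-modal similarity and DCI computation described above can be sketched in a few lines. The random data, assumed voxel count, and sign convention for the DCI are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for one participant's OFC voxel patterns:
# 128 visual trials and 100 taste trials over an assumed 500 voxels,
# each trial labeled with a valence bin from 1 (negative) to 13 (positive).
visual_pat = rng.standard_normal((128, 500))
taste_pat = rng.standard_normal((100, 500))
visual_val = rng.integers(1, 14, size=128)
taste_val = rng.integers(1, 14, size=100)

# Cross-modal similarity: Pearson correlation of every visual-trial pattern
# with every taste-trial pattern (128 x 100 matrix).
vz = (visual_pat - visual_pat.mean(1, keepdims=True)) / visual_pat.std(1, keepdims=True)
tz = (taste_pat - taste_pat.mean(1, keepdims=True)) / taste_pat.std(1, keepdims=True)
r = vz @ tz.T / visual_pat.shape[1]

# Rank-order the correlations within the participant (percentile of r, 0-100).
flat = r.ravel()
order = flat.argsort()
ranks = np.empty(flat.size)
ranks[order] = np.arange(flat.size)
pct = 100 * ranks / (flat.size - 1)

# GLM with cross-modal valence distance as a linear predictor of similarity;
# flipping the sign of the slope yields a DCI-like index that is positive when
# trials of closer valence show more similar activation patterns.
dist = np.abs(visual_val[:, None] - taste_val[None, :]).ravel().astype(float)
X = np.column_stack([np.ones_like(dist), dist])
beta, *_ = np.linalg.lstsq(X, pct, rcond=None)
dci = -beta[1]
```

With the unstructured random data above the DCI hovers near zero; a reliably positive DCI is the signature of shared valence coding across modalities.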
To investigate whether specific subregions support modality-
specific versus supramodal affect, we performed three independent
cubic searchlight analyses on the basis of trials within visual,
within gustatory, and across visual and gustatory experiments.
In a given cube, the correlations of activation patterns for each cross-trial
combination (128 × 127/2 for visual, 100 × 99/2 for gustatory, and
128 × 100 across visual and gustatory trials) were subjected to the same
GLM decomposition procedure (Supplementary Fig. 3). We defined
a region as representing supramodal affect if it was discovered in all
three independent searchlights, whereas a region was defined as rep-
resenting visual-specific affect if it was discovered only in the visual
searchlight, but not the other two (analogously for gustatory-specific
affect). This revealed that the anteroventral insula and posterior
OFC (putative primary and secondary gustatory cortex41) repre-
sented gustatory valence, and adjacent, but distinct, regions in the
VTC represented gustatory and visual valence separately (Fig. 5c,d
and Supplementary Table 4). In contrast with this sensory-specific
affect coding, the medial OFC and vmPFC, as well as the lateral OFC
and midcingulate cortex, contained supramodal representations of
valence (Fig. 5c,d). These searchlight results were not only explora-
tory, but also confirmatory, as they survived multiple comparison
correction (Online Methods).
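The three-searchlight labeling rule above reduces to boolean conjunctions over the corrected significance maps. The maps below are random stand-ins for the real searchlight outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-searchlight-center significance maps (after correction)
# from the three independent analyses: within-visual, within-gustatory,
# and across visual and gustatory trials.
sig_v = rng.random(1000) < 0.05
sig_g = rng.random(1000) < 0.05
sig_x = rng.random(1000) < 0.05

supramodal = sig_v & sig_g & sig_x        # significant in all three searchlights
visual_only = sig_v & ~sig_g & ~sig_x     # visual-specific affect
gustatory_only = sig_g & ~sig_v & ~sig_x  # gustatory-specific affect
```

By construction the three labels are mutually exclusive, which is what lets the maps in Fig. 5c,d be colored without overlap.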
Classification of affect brain states across participants
Lastly, we assessed whether valence in a specific individual corres-
ponded to affect representations in others’ brains. As previous work has
Figure 4 Region-specific population coding of
visual features, object animacy and valence in
visual scenes. (a) Multivariate searchlight
analysis revealed that distinct areas represent
coding of visual features (green), animacy
(yellow) and valence (red) properties.
Activations were thresholded at P < 0.001,
uncorrected. (b) GLM coefficients
(DCI) represent the extent to which correlations
were predicted by the property types (visual
features, animacy and valence). For visual-
features DCI, we used t test (EVC: t15 = 8.4,
P = 0.000003; VTC: t15 = 4.3, P = 0.004;
temporal pole (TP): t15 = −0.1, P = 1; OFC:
t15 = 1.4, P = 1) and paired t test (EVC versus
VTC: t15 = 6.4, P = 0.0002; EVC versus
TP: t15 = 5.8, P = 0.0006; EVC versus OFC:
t15 = 4.5, P = 0.008; VTC versus TP: t15 = 2.6,
P = 0.36; VTC versus OFC: t15 = 1.2, P = 1;
TP versus OFC: t15 = −1.4, P = 1). For
animacy DCI, we used t test (EVC: t15 = 3.5,
P = 0.017; VTC: t15 = 7.8, P = 0.000007;
TP: t15 = 0.9, P = 1; OFC: t15 = 3.6,
P = 0.015) and paired t test (EVC versus
VTC: t15 = −6.4, P = 0.0002; EVC versus
TP: t15 = 2.4, P = 0.54; EVC versus OFC:
t15 = 1.1, P = 1; VTC versus TP: t15 = 6.8,
P = 0.0001; VTC versus OFC: t15 = 7.8,
P = 0.00002; TP versus OFC: t15 = −2.9,
P = 0.19). For valence DCI, we used t test
(EVC: t15 = 1.0, P = 1; VTC: t15 = 2.6,
P = 0.12; TP: t15 = 3.5, P = 0.019; OFC:
t15 = 6.0, P = 0.0001) and paired t test
(EVC versus VTC: t15 = −0.7, P = 1; EVC versus TP: t15 = −1.5, P = 1; EVC versus OFC: t15 = −3.4, P = 0.071; VTC versus TP: t15 = −1.6, P = 1;
VTC versus OFC: t15 = −5.0, P = 0.003; TP versus OFC: t15 = −5.2, P = 0.002). t tests in a region were one-tailed and paired t tests were two-tailed.
n = 16 participants. (c–h) Difference in mean activity magnitude and pattern in the searchlight-defined regions (c–e: the medial OFC (mOFC) and
vmPFC; f–h: the lateral OFC (lOFC)). (c,f) Relationship of activity magnitude and ratings for positivity and negativity (n = 16 participants). (d,g) Valence
representation similarity matrices based on the mean activity magnitude. (e,h) Valence representation similarity matrices based on pattern activation
(correlation). n = 16 participants. Error bars represent s.e.m. ***P < 0.001, **P < 0.01, *P < 0.05. Bonferroni corrected.
[Figure 4 panels: searchlight maps for visual features, animacy and valence (axial slices z = −2 and z = −10); for the mOFC/vmPFC peak (2, 54, −4) and lOFC peak (30, 52, −2), mean activation and activation-pattern similarity (averaged percentile of r).]
demonstrated that representational geometry of object categories in the
VTC can be shared across participants35,36, we first examined whether
item-level (that is, by picture) classification is possible by comparing
each participant’s item-based representational similarity matrices to
that estimated from all other participants in a leave-one-out procedure.
We calculated the classification performance for each target picture as
the percentage that its representation was more similar to its estimate,
compared pairwise to all other picture representations (50% chance;
Online Methods and Supplementary Fig. 7). We found that item-
specific representations in the VTC were predicted very highly by
the other participants’ representational map (80.1 ± 1.4% accuracy,
t15 = 21.4, P = 2.4 × 10−12; Fig. 6a). Cross-participant classification accu-
racy was also statistically significant in the OFC (54.7 ± 0.8% accuracy,
t15 = 5.7, P = 0.00008); however, it was substantially reduced compared
with the VTC (t15 = 15.9, P = 8.4 × 10−11), suggesting that item-specific
information is more robustly represented and translatable across
participants in the VTC compared with the OFC.
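A minimal sketch of this pairwise leave-one-out classification, assuming the comparison is between correlation profiles (rows of a participant's representational similarity matrix) and the corresponding rows of a group-average estimate; the exact feature space is described in the Online Methods.

```python
import numpy as np

def pairwise_accuracy(subj_rsm, group_rsm):
    """Pairwise leave-one-out classification (chance = 50%).

    For each target item i, count how often the left-out participant's
    similarity profile (row i of subj_rsm) correlates more strongly with the
    group estimate of item i than with the group estimate of another item j.
    """
    n = subj_rsm.shape[0]
    a = subj_rsm - subj_rsm.mean(axis=1, keepdims=True)
    b = group_rsm - group_rsm.mean(axis=1, keepdims=True)
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    c = a @ b.T                       # c[i, j]: subject profile i vs group profile j
    wins = c.diagonal()[:, None] > c  # did item i beat alternative j?
    off_diag = ~np.eye(n, dtype=bool)
    return wins[off_diag].mean()      # fraction of pairwise wins
```

Identical subject and group matrices yield perfect accuracy, while unrelated matrices hover around chance (0.5), mirroring the 50%-chance construction of Fig. 6a.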
We next examined whether a person's affect representations toward
these visual items could be predicted by others’ affect representations.
After transforming representations of items into subjective affect
and conducting a similar leave-one-out procedure (Online Methods
and Supplementary Fig. 8), although overall much lower than item
[Figure 5 panels: similarity (percentile of r) plotted against distance in valence; visual-specific regions VTC1 (54, −62, −2), TP (60, 12, 2), STR (24, 14, −4); gustatory-specific regions aINS (−26, 20, −8), VTC2 (54, −48, −2), pOFC (−38, 20, −14); modality-independent regions mOFC (2, 60, −4), lOFC (40, 50, 0), MCC (6, −18, 42).]
Figure 5 Visual, gustatory and cross-modal affect codes. (a,b) OFC voxel activity pattern correlations across trials in the gustatory experiment (a) and
across visual and gustatory experiments (b) were rank-ordered in each participant and then averaged on the basis of valence combinations (13 × 13).
Correlations across trials were sorted into five bins of increasing distance in valence. OFC correlations corresponded to valence distance, both within
tastes and across tastes and visual scenes (n = 15 participants). (c) Multivariate searchlight results revealed subregions coding modality-specific
(visual = red, taste = yellow) and modality-independent (green) valence. (d) GLM coefficients (DCI) represent the extent to which correlations were
predicted by valence. Averaged DCI in the visual (top), taste (middle) and visual × gustatory (bottom) valence subregions. In TP, we used t test
(visual valence (V): t15 = 4.3, P = 0.0003; gustatory valence (G): t15 = 0.23, P = 0.41; visual × gustatory valence (V × G): t15 = 0.71, P = 0.24).
In VTC1, we used t test (V: t15 = 4.9, P = 0.00009; G: t15 = −0.43, P = 1; V × G: t15 = 0.10, P = 0.46). In striatum (STR), we used t test (V: t15 = 3.9,
P = 0.0007; G: t15 = 0.23, P = 0.41; V × G: t15 = 1.2, P = 0.12). In anterior insula (aINS), we used t test (V: t15 = 1.2, P = 0.12; G: t15 = 4.0,
P = 0.0006; V × G: t15 = −1.2, P = 1). In VTC2, we used t test (V: t15 = 0.62, P = 0.27; G: t15 = 4.8, P = 0.0001; V × G: t15 = −1.2, P = 1).
In posterior OFC (pOFC), we used t test (V: t15 = 0.40, P = 0.34; G: t15 = 3.7, P = 0.0010; V × G: t15 = 0.78, P = 0.22). In medial OFC, we used t test
(V: t15 = 6.3, P = 0.000007; G: t15 = 2.6, P = 0.010; V × G: t15 = 3.9, P = 0.0008). In lateral OFC, we used t test (V: t15 = 5.2, P = 0.00005; G: t15 = 2.8,
P = 0.007; V × G: t15 = 4.1, P = 0.0005). In midcingulate cortex (MCC), we used t test (V: t15 = 3.8, P = 0.0008; G: t15 = 3.8, P = 0.0009;
V × G: t15 = 4.0, P = 0.0005). P values were uncorrected. n = 16 participants. Error bars represent s.e.m. ***P < 0.001, **P < 0.01.
representations in the VTC, we found cross-participant classification
of valence in the OFC (55.6 ± 0.9% accuracy, t15 = 6.4, P = 0.00002)
(Fig. 6a and Supplementary Table 5). Valence classification did not
achieve significance in the VTC (51.7 ± 0.8% accuracy, t15 = 2.0,
P = 0.13), with a paired t test between the OFC and VTC reveal-
ing greater classification accuracy in the OFC than in the VTC
(t15 = 4.2, P = 0.0007). A two-way repeated-measures ANOVA revealed
a highly significant interaction (F1,15 = 278.1, P = 4.3 × 10−11; Fig. 6a)
between region and representation type (item versus affect). This
interaction revealed that, although stimulus-specific representations
were shared in the VTC, subjective affect representations are similarly
structured across people in the OFC, even when the specific stimuli
evoking affect may vary. Furthermore, the continuous representa-
tion of valence information was also revealed here, as increases in
valence distance decreased the confusability between two affect rep-
resentations, thereby increasing classification rates (F1.4, 20.3 = 37.4,
P = 5.6 × 10−6; Fig. 6b and Supplementary Table 5).
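The Fig. 6b-style accuracy-by-distance curve amounts to binning pairwise classification outcomes by valence distance. The outcome matrix below is simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
valence = rng.integers(1, 14, size=n)          # valence bins 1 (neg) .. 13 (pos)
dist = np.abs(valence[:, None] - valence[None, :])

# Simulated pairwise outcomes: wins[i, j] = True when the target's own group
# estimate beat alternative j; here made more likely as valence distance grows,
# mimicking decreasing confusability between distant affect representations.
wins = rng.random((n, n)) < 0.5 + 0.03 * dist

off_diag = ~np.eye(n, dtype=bool)
# Mean classification accuracy at each valence distance.
acc_by_dist = {int(d): wins[(dist == d) & off_diag].mean()
               for d in np.unique(dist[off_diag])}
```

A monotonic rise of accuracy with distance in such a summary is the signature of a continuous valence code rather than discrete positive/negative categories.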
As an even stronger test of cross-participant translation of affect,
we asked whether OFC affect representations toward pictures could
predict other participants’ affect representations in response to tastes
(visual × gustatory) and vice versa (gustatory × visual). These analyses
revealed classification accuracies that were significantly higher than
chance level (visual × gustatory, 54.2 ± 0.7%, t15 = 5.8, P < 0.001;
gustatory × visual, 54.6 ± 1.2%, t14 = 3.8, P < 0.001), with increases in
valence distance between stimuli again increasing classification accu-
racy (Fig. 6b). Even without an overlap in objective stimulus features
(that is, vision versus taste), the OFC supported classification of the
affective contents of subjective experience across individuals.
DISCUSSION
Representational mapping revealed that a complex scene is transformed
from basic perceptual features and higher level object categories into
affective population representations. Furthermore, rather than special-
ized regions designated to represent what is good or bad, population
activity in a region supported a continuous dimension of positive-to-
negative valence. Population codes also revealed that there are multiple
representations of valence to the same event, both sensory specific and
sensory independent. Posterior cortical representations in the tempo-
ral lobe and insular cortices were unique to the sensory modality of
origin, whereas more anterior cortical representations in the medial and
lateral OFC afforded a translation across distinct stimuli and modalities.
This shared affect population code demonstrated correspondence
across participants. Taken together, these data indicate that the neural
population vector in a region may represent the affective coloring of
experience, whether between objects, modalities or people.
Population coding of affect
As suggested by monkey electrophysiological studies32, positivity- and
negativity-sensitive neurons are likely interspersed in various sec-
tors of the human OFC and vmPFC. Consistent with these single-cell
recording data, our present univariate parametric modulation analysis
did not show clear separation of positivity- and negativity-sensitive
voxels in the OFC, with much greater overlap than separation. Prior
studies typically assume a mathematical inversion of affective coding
in the brain (for example, positivity is the inverse of negativity)30,42.
We were able to test this assumption directly as participants indicated
their experience of positive and negative valence independently on
each trial. Using these independent parameters, we found that regions
such as the medial OFC and vmPFC, which have been associated with
increasing positive value23,29, responded equally to negative valence
(Fig. 1). This bivalent association is often taken to indicate a coding
of arousal—the activating aspect of emotional experience20, rather
than separate coding of the opposing ends of valence22. However, our
results indicate that regional activity magnitude in the medial OFC
and vmPFC could not differentiate between opposing experiences
of positive and negative valence, whereas population coding of the
same voxels distinguished them as maximally distant (Fig. 4d,e,g,h),
suggesting the distinct coding of unique valence experiences.
Although pattern analysis may be able to capture the difference in
distribution of positivity- and negativity-sensitive neurons in the local
structure, what the patterns exactly reflect is still a matter of debate11.
With regard to its underlying physiological bases, the interdigitation of
single cell specialization for either positive or negative valence32 need
not suggest the utilization of a pattern code. Given the averaging of
many hundreds of thousands of neurons in a voxel in fMRI, it may be
that pattern analysis sensitive to voxel-level biases in valence-tuned
neurons32 is required to reveal valence coding in BOLD imaging.
It remains to be determined whether the coding of valence is best captured
by a distributed population level code across cells with distinct valence
tuning properties. Evidence of colocalization of distinct valence-tuned
neurons32 may suggest the importance of rapid within-region computa-
tion of mixed valence responses, whereby the overall affective response
is derived from a population level code across individual neurons.
Sensory-specific affect codes in the perceptual cortices
Wundt’s proposal of affect as an additional dimension of perceptual
experience1 may suggest that these subjective qualia are represented
in posterior sensory cortices, binding affect to specific sensory
events. Although altered mean activity in perceptual cortices associ-
ated with valence has been found, including reward-related activity
in the VTC in monkeys43 and humans42, it is unclear whether these
Figure 6 Cross-participant classification of items and affect.
(a) Classification accuracies of cross-participant multivoxel patterns for
specific items and subjective valence in the VTC (gray) and OFC (white).
Each target item or valence was estimated from all other participants'
representations in a leave-one-out procedure. Performance was calculated by
the target’s similarity to its estimate compared with all other trials in pairwise
comparison (50% chance). For item classification, t test (OFC: t15 = 5.7,
P = 0.00008, VTC: t15 = 21.4, P = 2.4 × 10−12), paired t test (OFC versus
VTC: t15 = −15.9, P = 8.4 × 10−11). For valence classification, t test
(OFC: t15 = 6.4, P = 0.00002, VTC: t15 = 2.0, P = 0.13), paired t test
(OFC versus VTC: t15 = 4.2, P = 0.0007). Bonferroni correction was applied,
based on number of comparisons for each ROI (2 (ROI: OFC and VTC)). t tests in a region were one-tailed and paired t tests were two-tailed. n = 16
participants. (b) Relationship between classification accuracies and valence distance in the OFC. Accuracies increased monotonically as experienced
valence across trials became more clearly differentiated for all conditions. ANOVA (visual: F1.4, 20.3 = 37.4, P = 5.6 × 10−6; gustatory: F1.3, 18.9 = 4.7,
P = 0.033; visual × gustatory: F1.2, 18.6 = 9.7, P = 0.004; gustatory × visual: F1.4, 19.6 = 4.3, P = 0.04). Greenhouse-Geisser correction was applied, as
Mauchly’s test revealed violation of assumption of sphericity. For visual and visual by gustatory, n = 16 participants. For gustatory and gustatory × visual,
n = 15 participants. Error bars represent s.e.m. ***P < 0.001, **P < 0.01.
regions contain valence information. Population-level activity
revealed modally bound affect codes, consistent with Wundt’s thesis,
and evidence of sensory-specific hedonic habituation44. Activity pat-
terns in the VTC not only represented visual features and object infor-
mation4,16,45, but also corresponded to the representational geometry
of an individual person’s subjective affect. Low-level visual, object
and affect properties, however, did not arise from the same activity
patterns, but were largely anatomically and functionally dissociated
in the VTC. In vision, posterior regions supported representations
that were descriptions of the external visual stimulus, whereas more
anterior association cortices, including the anterior ventral and tem-
poral polar cortices, the latter of which was densely interconnected
with the OFC46, supported the internal affective coloring of visual
scene perception. Consistent with the hedonic primacy of chemical
sensing47, taste-evoked affect codes were found in the anteroventral
insula and posterior OFC (putative primary and secondary gustatory
cortex41), suggesting that higher level appraisal processes may not be
as necessary for the extraction of their valence properties.
The lack of correspondence in activity patterns across modalities,
despite both coding valence, suggests that modality-specific processes
are involved in extracting valence information. The role of these
modality-specific valence representations may be to allow differential
weighting of distinct features in determining one’s overall judgment
of value42 or subjective valence experience. Fear conditioning renders
once indiscriminable odors perceptually discriminable, supported by
divergence of ensemble activity patterns in primary olfactory (piri-
form) cortex48. Rather than only the domain of specialized circuits
outside of perceptual systems, valence coding may also be central
to perceptual encoding, affording sensory-specific hedonic weight-
ings. It remains to be determined whether valance codes embodied
in a sensory system support distinct subjective qualia, as well as their
relation to sensory-independent affect representations.
Supramodal affect codes in the OFC
It has been proposed that a common scale is required for organisms to
assess the relative value of computationally and qualitatively different
events, such as drinking water, smelling food, scanning predators and
so forth29. To decide on an appropriate behavior, the nervous system
must convert the value of events into a common scale. By using food
or monetary reward, previous studies of monkey electrophysiology9,
neuroimaging23 and neuropsychology25 have found that the OFC is
critical for generating this kind of currency-like common scale. However,
without investigating its microstructure, overlap in mean activity in a
region is insufficient to reveal an underlying commonality in represen-
tation space. By examining multi-voxel patterns, a recent fMRI study
demonstrated that the vmPFC commonly represents the monetary
value (that is, how much one was willing to pay) of different visual goal
objects (pictures of food, money and trinkets)17. However, such studies
of ‘common currency’17,42 employ only visual cues denoting associ-
ated value of different types, rather than physical stimulus modalities.
By delivering pleasant and unpleasant gustatory stimuli (for example,
sweet and bitter liquids) instead of presenting visual stimuli that denote
gustatory reward in the future (goal value), as well as complex scenes
that varied across the entire valence spectrum, including highly nega-
tive valence, we found that, even when stimuli were delivered via vision
or taste, modality-independent codes were projected into the same
representation space whose coordinates were defined as subjective
positive-to-negative affective experience in the OFC. This provides
strong evidence that some part of affect representation space in the
OFC is not only stimulus, but also modality, independent.
The exploratory searchlight analysis revealed that affect representa-
tions across modality were found in the lateral OFC, the medial OFC
and the vmPFC. This finding is important, as most previous studies
of value representations17,29 have mainly focused on the medial OFC
and vmPFC, and not on the lateral OFC. Although both may support
supramodal valence codes, the processes that work on these represen-
tations are likely different49. The medial OFC may represent approach
tendencies, whereas the more inhibitory functions associated with the
lateral OFC sectors may use the same valence information to suppress
desire to approach a stimulus, such as consume an appetizing, yet
unhealthy, food.
Beyond examining valence representations across complex visual
scenes and their correspondence from pictures to tastes, we also exam-
ined commonality of representations across the brains of individuals.
To do so, we extended previous application of cross-participant MVPA
in representing object types35,36 to the domain of subjective affect.
Although item-specific population responses were highly similar in
the VTC, affording classification of what particular scene was being
viewed, these patterns captured experienced affect across people to
a much lesser degree. In contrast, population codes in the OFC were
less able to code the specific item being viewed, but demonstrated
similarity among people even if affective responses to individual
items varied. Cross-participant classification of affect across items was
lower in the OFC compared with item-specific coding in the VTC.
This cross-region difference may be a characteristic of the neural
representations of external items versus internal affective responses:
the need to abstract from physical appearance to invisible affect.
Notwithstanding, such cross-participant commonality may allow a
common scaling of value and valence experience across individuals.
In sum, these findings suggest that there exists a common affect code
across people, underlying a wide range of stimuli and object catego-
ries, and even when originating from the eye or tongue.
Methods and any associated references are available in the online
version of the paper.
Note: Any Supplementary Information and Source Data files are available in the
online version of the paper.
We thank T. Schmitz, M. Taylor, D. Hamilton and K. Gardhouse for technical
collaboration and discussion. This work was funded by Canadian Institutes of
Health Research grants to A.K.A. J.C. was supported by the Japan Society for the
Promotion of Science Postdoctoral Fellowships for Research Abroad (H23).
J.C. and A.K.A. designed the experiments. J.C. and D.H.L. built the experimental
apparatus and performed the experiments. J.C. analyzed the data. J.C., D.H.L., N.K.
and A.K.A. wrote the paper. N.K. and A.K.A. supervised the study.
The authors declare no competing financial interests.
Reprints and permissions information is available online at
1. Wundt, W. Grundriss der Psychologie (W. Engelmann, Leipzig, 1896).
2. Penfield, W. & Boldrey, E. Somatic motor and sensory representation in the
cerebral cortex of man as studied by electrical stimulation. Brain 60, 389–443 (1937).
3. Huth, A.G., Nishimoto, S., Vu, A.T. & Gallant, J.L. A continuous semantic space
describes the representation of thousands of object and action categories across
the human brain. Neuron 76, 1210–1224 (2012).
4. Haxby, J.V. et al. Distributed and overlapping representations of faces and objects
in ventral temporal cortex. Science 293, 2425–2430 (2001).
5. Kanwisher, N., McDermott, J. & Chun, M.M. The fusiform face area: a module
in human extrastriate cortex specialized for face perception. J. Neurosci. 17,
4302–4311 (1997).
6. Kriegeskorte, N., Mur, M. & Bandettini, P. Representational similarity analysis -
connecting the branches of systems neuroscience. Front. Syst. Neurosci. 2, 4 (2008).
7. Hinton, G.E., McClelland, J.L. & Rumelhart, D.E. Distributed representations. in
Parallel Distributed Processing: Explorations in the Microstructure of Cognition
(eds. Rumelhart, D.E. & McClelland, J.L.) 77–109 (The MIT Press, Cambridge,
Massachusetts, 1986).
8. Lewis, P.A., Critchley, H.D., Rotshtein, P. & Dolan, R.J. Neural correlates of processing
valence and arousal in affective words. Cereb. Cortex 17, 742–748 (2007).
9. Padoa-Schioppa, C. & Assad, J.A. Neurons in the orbitofrontal cortex encode
economic value. Nature 441, 223–226 (2006).
10. Kriegeskorte, N. & Kievit, R.A. Representational geometry: integrating cognition,
computation and the brain. Trends Cogn. Sci. 17, 401–412 (2013).
11. Haynes, J.D. Decoding and predicting intentions. Ann. NY Acad. Sci. 1224, 9–21 (2011).
12. Kamitani, Y. & Tong, F. Decoding the visual and subjective contents of the human
brain. Nat. Neurosci. 8, 679–685 (2005).
13. Freeman, J., Brouwer, G.J., Heeger, D.J. & Merriam, E.P. Orientation decoding
depends on maps, not columns. J. Neurosci. 31, 4792–4804 (2011).
14. Sasaki, Y. et al. The radial bias: a different slant on visual orientation sensitivity
in human and nonhuman primates. Neuron 51, 661–670 (2006).
15. Alink, A., Krugliak, A., Walther, A. & Kriegeskorte, N. fMRI orientation decoding
in V1 does not require global maps or globally coherent orientation stimuli.
Front. Psychol. 4, 493 (2013).
16. Kriegeskorte, N. et al. Matching categorical object representations in inferior
temporal cortex of man and monkey. Neuron 60, 1126–1141 (2008).
17. McNamee, D., Rangel, A. & O’Doherty, J.P. Category-dependent and category-
independent goal-value codes in human ventromedial prefrontal cortex.
Nat. Neurosci. 16, 479–485 (2013).
18. Brouwer, G.J. & Heeger, D.J. Decoding and reconstructing color from responses in
human visual cortex. J. Neurosci. 29, 13992–14003 (2009).
19. Osgood, C.E., May, W.H. & Miron, M.S. Cross-Cultural Universals of Affective
Meaning (University of Illinois Press, Urbana, Illinois, 1975).
20. Russell, J.A. A circumplex model of affect. J. Pers. Soc. Psychol. 39, 1161–1178 (1980).
21. Grill-Spector, K. & Malach, R. The human visual cortex. Annu. Rev. Neurosci. 27,
649–677 (2004).
22. Anderson, A.K. et al. Dissociated neural representations of intensity and valence
in human olfaction. Nat. Neurosci. 6, 196–202 (2003).
23. O’Doherty, J., Kringelbach, M.L., Rolls, E.T., Hornak, J. & Andrews, C. Abstract
reward and punishment representations in the human orbitofrontal cortex.
Nat. Neurosci. 4, 95–102 (2001).
24. Small, D.M. et al. Dissociation of neural representation of intensity and affective
valuation in human gustation. Neuron 39, 701–711 (2003).
25. Ongür, D. & Price, J.L. The organization of networks within the orbital and medial
prefrontal cortex of rats, monkeys and humans. Cereb. Cortex 10, 206–219 (2000).
26. Shenhav, A., Barrett, L.F. & Bar, M. Affective value and associative processing share
a cortical substrate. Cogn. Affect. Behav. Neurosci. 13, 46–59 (2013).
27. Gottfried, J.A., O’Doherty, J. & Dolan, R.J. Appetitive and aversive olfactory learning
in humans studied using event-related functional magnetic resonance imaging.
J. Neurosci. 22, 10829–10837 (2002).
28. Rolls, E.T., Kringelbach, M.L. & de Araujo, I.E. Different representations of pleasant
and unpleasant odours in the human brain. Eur. J. Neurosci. 18, 695–703 (2003).
29. Montague, P.R. & Berns, G.S. Neural economics and the biological substrates of
valuation. Neuron 36, 265–284 (2002).
30. Wilson-Mendenhall, C.D., Barrett, L.F. & Barsalou, L.W. Neural evidence that human
emotions share core affective properties. Psychol. Sci. 24, 947–956 (2013).
31. Lindquist, K.A., Wager, T.D., Kober, H., Bliss-Moreau, E. & Barrett, L.F. The brain
basis of emotion: a meta-analytic review. Behav. Brain Sci. 35, 121–143 (2012).
32. Morrison, S.E. & Salzman, C.D. The convergence of information about rewarding
and aversive stimuli in single neurons. J. Neurosci. 29, 11471–11483 (2009).
33. Todd, R.M., Talmi, D., Schmitz, T.W., Susskind, J. & Anderson, A.K. Psychophysical
and neural evidence for emotion-enhanced perceptual vividness. J. Neurosci. 32,
11201–11212 (2012).
34. Grabenhorst, F., D’Souza, A.A., Parris, B.A., Rolls, E.T. & Passingham, R.E.
A common neural scale for the subjective pleasantness of different primary rewards.
Neuroimage 51, 1265–1274 (2010).
35. Raizada, R.D. & Connolly, A.C. What makes different people’s representations alike:
neural similarity space solves the problem of across-subject fMRI decoding. J. Cogn.
Neurosci. 24, 868–877 (2012).
36. Haxby, J.V. et al. A common, high-dimensional model of the representational space
in human ventral temporal cortex. Neuron 72, 404–416 (2011).
37. Kron, A., Goldstein, A., Lee, D.H., Gardhouse, K. & Anderson, A.K. How are you
feeling? Revisiting the quantification of emotional qualia. Psychol. Sci. 24,
1503–1511 (2013).
38. Kriegeskorte, N., Goebel, R. & Bandettini, P. Information-based functional brain
mapping. Proc. Natl. Acad. Sci. USA 103, 3863–3868 (2006).
39. Dolcos, F., LaBar, K.S. & Cabeza, R. Dissociable effects of arousal and valence on
prefrontal activity indexing emotional evaluation and subsequent memory: an event-
related fMRI study. Neuroimage 23, 64–74 (2004).
40. Lazarus, R.S. & Folkman, S. Stress, Appraisal and Coping (Springer Publishing
Company, 1984).
41. Small, D.M. et al. Human cortical gustatory areas: a review of functional
neuroimaging data. Neuroreport 10, 7–14 (1999).
42. Lim, S.L., O’Doherty, J.P. & Rangel, A. Stimulus value signals in ventromedial PFC
reflect the integration of attribute value signals computed in fusiform gyrus and
posterior superior temporal gyrus. J. Neurosci. 33, 8729–8741 (2013).
43. Mogami, T. & Tanaka, K. Reward association affects neuronal responses to visual
stimuli in macaque TE and perirhinal cortices. J. Neurosci. 26, 6761–6770 (2006).
44. Poellinger, A. et al. Activation and habituation in olfaction: an fMRI study.
Neuroimage 13, 547–560 (2001).
45. Misaki, M., Kim, Y., Bandettini, P.A. & Kriegeskorte, N. Comparison of multivariate
classifiers and response normalizations for pattern-information fMRI. Neuroimage
53, 103–118 (2010).
46. Olson, I.R., Plotzker, A. & Ezzyat, Y. The enigmatic temporal pole: a review of
findings on social and emotional processing. Brain 130, 1718–1731 (2007).
47. Lapid, H. et al. Neural activity at the human olfactory epithelium reflects olfactory
perception. Nat. Neurosci. 14, 1455–1461 (2011).
48. Li, W., Howard, J.D., Parrish, T.B. & Gottfried, J.A. Aversive learning enhances
perceptual and cortical discrimination of indiscriminable odor cues. Science 319,
1842–1845 (2008).
49. Noonan, M.P. et al. Separate value comparison and learning mechanisms in
macaque medial and lateral orbitofrontal cortex. Proc. Natl. Acad. Sci. USA 107,
20547–20552 (2010).
Nature Neuroscience doi:10.1038/nn.3749
ONLINE METHODS
Subjects and imaging procedures. Sixteen healthy adults (10 male; age 26.1 ± 2.1 years) pro-
vided informed consent to participate in the visual (experiment 1) and gustatory
(experiment 2) experiments in the same session (that is, without leaving the MRI
scanner). Exclusion criteria included significant psychiatric or neurological his-
tory. This study was approved by University of Toronto Research Ethics Board and
Sick Kids hospital Research Ethics Board. No statistical test was run to determine
sample size a priori. The sample sizes that we chose are similar to those used in
previous publications17,35,36. The experiments were conducted using a 3.0-T fMRI
system (Siemens Trio) during daytime. Localizer images were first collected to
align the field of view centered on each participant’s brain. T1-weighted anatomical
images were obtained (1 mm3, 256 × 256 FOV; MPRAGE sequence) before the
experimental echo planar imaging (EPI) runs. For functional imaging, a gradient
echo-planar sequence was used (repetition time (TR) = 2,000 ms, echo time
(TE) = 27 ms, flip angle = 70 degrees). Each functional run consisted of 292 (experi-
ment 1) or 263 (experiment 2) whole brain acquisitions (40- × 3.5-mm slices,
interleaved acquisition, field of view = 192 mm, matrix size = 64 × 64, in-plane
resolution of 3 mm). The first four functional images in each run were excluded
from analysis to allow for the equilibration of longitudinal magnetization.
Experiment 1 (visual). Visual stimuli were delivered via goggles using the CinemaVision AV system (Resonance Technology), displayed at a resolution of 800 × 600 at 60 Hz. Affect ratings were collected with magnet-compatible buttons
during scanning. All 128 pictures were selected from the International Affective
Picture System49. In each trial, a picture was presented for 3 s, then a blank screen
for 5 s, then separate scaling bars to rate positivity (3 s) and negativity (3 s) of the
picture. After a 4-s inter-trial interval, the next picture was presented. Trial order
was pseudorandomized within emotion category, balanced across four runs of
32 trials each. Four runs were administered to each subject.
Experiment 2 (gustatory). Gustatory stimuli were delivered by plastic tubes
converging at a plastic manifold, whose nozzle dripped the taste solutions into
the mouth. One hundred taste-solution trials were randomized and balanced across five
runs. In each trial, 0.5 ml of taste solution was delivered over 1,244 ms. When
liquid delivery ended, a screen instructed participants to swallow the liquid
(1 s). After 7,756 ms, the same scaling bars from experiment 1 appeared to rate
positivity (3 s) and then negativity (3 s) of the liquid. This was followed by 0.5 ml
of the tasteless liquid delivery during 1,244 ms for rinsing, followed by the 1-s
swallow instruction. After a 7,756 ms inter-trial-interval, the next trial began.
Five runs were administered to each subject. Both experiments 1 and 2 were
conducted in the same session, which took approximately 2 h. To decrease the
need for a bathroom break during scanning, participants were instructed not to
drink liquids before the experiment.
Pre-experimental session. To account for individual differences in subjective experiences of different tastes, participants were asked to taste a wide range of intensities (as measured by molar concentration) of the different taste solutions
(sour, salty, bitter, sweet). In this pre-experimental session, participants were tested
for one trial (2 ml) of each of the 16 taste solutions: sour/citric acid (1 × 10−1 M,
3.2 × 10−2 M, 1.8 × 10−2 M and 1.0 × 10−2 M), salty/table salt (5.6 × 10−1 M,
2.5 × 10−1 M, 1.8 × 10−1 M and 1.0 × 10−1 M), bitter/quinine sulfate (1.0 × 10−3 M,
1.8 × 10−4 M, 3.2 × 10−5 M and 7.8 × 10−5 M) and sweet/sucrose (1.0 M, 0.56 M,
0.32 M and 0.18 M). Presentation order was randomized across tastes and then across concentrations within each taste. After drinking each solution, participants rinsed and swallowed 5 ml of water, then rated the intensity and pleasantness (valence) of each solution on separate 1–9 scales. The concentrations for each taste
that matched in intensity were selected. Previous work50 showed that participants have different rating baselines and that concentrations above medium self-reported intensity are selected most reliably. All solutions were mixed using pharmaceutical-grade chemical compounds from Sigma-Aldrich, safe for consumption.
ROI definition. ROIs were determined on the basis of AAL template51 and
anatomy toolbox52. The EVC ROI was defined by bilateral BA 17 in the anatomy
toolbox. The VTC ROI consisted of lingual gyrus, parahippocampal gyrus, fusi-
form gyrus and inferior temporal cortices in the bilateral hemispheres. The OFC
ROI consisted of the superior, middle, inferior and medial OFC in the bilateral
hemispheres. White-matter voxels were excluded on the basis of the result of segmentation implemented in SPM8, performed on each participant's imaging data.
Data analysis. Data were analyzed using SPM8 software. Functional images were
realigned, slice timing corrected, and normalized to the MNI template (ICBM
152) with interpolation to a 2- × 2- × 2-mm space. The registration was per-
formed by matching the whole of the individual’s T1 image to the template T1
image (ICBM152), using 12-parameter affine transformation. This was followed
by estimating nonlinear deformations, whereby, the deformations are defined
by a linear combination of three-dimensional discrete cosine transform basis
functions. The same transformation matrix was applied to the EPI images. Data were spatially smoothed (full width at half maximum = 6 mm) for the univariate parametric modulation analysis, but not for MVPA, as smoothing may impair MVPA performance12.
Each stimulus presentation was modeled as a separate event, using the canoni-
cal function in SPM8. For the first level GLM analyses, motion regressors were
included to regress out motion-related effects. For each voxel, t values of individual trials were demeaned by subtracting the mean value across trials. To visualize the results, the xjview software was used.
Representational similarity analysis. For each participant, a vector was cre-
ated containing the spatial pattern of BOLD-MRI signal related to a particular
event (normalized t values per voxel) in each ROI. These t values were further
normalized by subtracting mean values across trials. Pairwise Pearson correla-
tions were calculated between all vectors of all single trials, resulting in a RSM
containing correlations among all trials for each participant in each ROI for the
visual experiment.
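The RSM construction described above can be sketched in a few lines of numpy; this is an illustrative reimplementation (the function name and the exact rank-ordering details are our assumptions, not the authors' code):

```python
import numpy as np

def build_rsm(trial_patterns):
    """Build a trial-by-trial representational similarity matrix (RSM).

    trial_patterns: (n_trials, n_voxels) array of per-trial t values
    within one ROI. Following the text, each voxel's t values are
    demeaned across trials before correlating.
    """
    x = trial_patterns - trial_patterns.mean(axis=0, keepdims=True)
    rsm = np.corrcoef(x)              # pairwise Pearson correlations
    # The analyses use rank-ordered correlations rather than raw
    # coefficients; rank the off-diagonal entries from 1 upward.
    iu = np.triu_indices_from(rsm, k=1)
    ranks = np.empty_like(rsm[iu])
    order = np.argsort(rsm[iu])
    ranks[order] = np.arange(1, len(order) + 1)
    rank_rsm = np.zeros_like(rsm)
    rank_rsm[iu] = ranks
    rank_rsm += rank_rsm.T            # symmetric; diagonal left at zero
    return rsm, rank_rsm
```

The rank transform matches the text's use of rank-ordered correlations, which minimizes assumptions about the distribution of the correlation coefficients.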
Low-level visual features (local contrast, luminance, hue, number of edges and
visual salience) were computed using the Image Processing Toolbox packaged
with Matlab 7.0. Local contrast was defined as the s.d. of the pixel intensities.
Luminance was calculated as the average log luminance53. Hue was calculated
using Matlab's rgb2hsv function. Edges were detected using a Canny edge detector
with a threshold of 0.5. Lines were detected by using a Hough transform and the
number of detected lines was calculated for each image. Visual salience has been
defined as those basic visual properties, such as color, intensity and orientation,
that preferentially bias competition for rapid, bottom-up attentional selection54.
Visual saliency map for each image was computed using the Saliency Toolbox55.
Saliency maps were transformed into vectors and correlations of these vectors
across images were calculated. These correlations represent similarity of saliency
maps. Then, all the visual feature values were standardized and compressed into
a single representative score for each visual stimulus using principal component
analysis. Visual feature scores were sorted into 13 bins for symmetric comparison
to valence distance. Distance in visual feature space was estimated by Euclidean distance in the five-dimensional visual feature space (local contrast, hue, number of edges,
luminance and saliency). Animacy scores were determined by a separate group of
participants (n = 16) who judged the stimuli as animate (0/16 to 16/16), which were
also sorted into 13 bins. We chose object animacy as a higher order object property
because the animate-inanimate dimension has been shown to be one of the most
discriminative features for object representation in the VTC16,45,56.
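As a rough illustration of the feature pipeline, the following numpy-only sketch computes three of the five features (local contrast as the s.d. of pixel intensities, average log luminance, and mean hue) and compresses the standardized features into a single principal-component score. The Canny edge count and Saliency Toolbox map used in the actual analysis are omitted, and the hue conversion is a hand-rolled stand-in for Matlab's rgb2hsv:

```python
import numpy as np

def hue_of(rgb):
    """Mean hue in [0, 1), computed as in standard RGB -> HSV conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(-1), rgb.min(-1)
    d = np.where(mx - mn == 0, 1.0, mx - mn)   # avoid divide-by-zero
    h = np.where(mx == r, ((g - b) / d) % 6,
        np.where(mx == g, (b - r) / d + 2, (r - g) / d + 4)) / 6.0
    h = np.where(mx == mn, 0.0, h)             # undefined hue -> 0
    return h.mean()

def image_features(img):
    """img: (H, W, 3) array of floats in [0, 1]."""
    gray = img.mean(-1)                        # crude intensity proxy
    contrast = gray.std()                      # s.d. of pixel intensities
    luminance = np.log(gray + 1e-6).mean()     # average log luminance
    hue = hue_of(img)
    # Edge count and saliency came from a Canny detector and the
    # Saliency Toolbox in the original analysis; omitted here.
    return np.array([contrast, luminance, hue])

def feature_scores(imgs):
    """Standardize features across images; compress with PCA (first PC)."""
    f = np.array([image_features(i) for i in imgs])
    z = (f - f.mean(0)) / f.std(0)
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    return z @ vt[0]   # projection onto the first principal component
```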
These RSMs were submitted to MDS for visualization. Stimulus arrangements
computed by MDS are data driven and serve an important exploratory function:
they can reveal the properties that dominate the representation of our stimuli in
the population code without any prior hypotheses. Correlations were computed between projections on the best-fitting axis (line in each MDS plot) and the property values (visual features, animacy and valence).
To compute the valence representational maps, we took the trial-based RSM
(of correlation ranks) and regressed out the other properties, distance in low-level
visual features and animacy, as well as regressors of no interest (differences in
basic emotions, auto-correlation and sessions). Note that we employed rank-ordered correlations, rather than raw correlation coefficients, for all analyses, which minimizes assumptions about the distribution of the correlation coefficients. This left
an RSM of residual correlation ranks predicted by valence distance, which was
then sorted according to 13 × 13 bins (Supplementary Fig. 3b). To calculate the value of each (m, n) cell of the valence representational map, we computed the average of the cells [(m − 1, n), (m + 1, n), (m, n − 1), (m, n + 1) and (m, n)]. Visual feature and animacy representational
maps were computed in an analogous manner. This decomposition approach
treats the RSM in each ROI as explained by the linear summation of multiple
contributing properties. We took this approach to take into consideration the
possibility that the same region may simultaneously represent qualitatively dif-
ferent features (for example, the VTC, which represents not only highly abstract
features such as animacy16,45,56, but also low-level visual features57).
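The neighbor-averaging of map cells can be sketched as follows; how the 13 × 13 map's edge cells are treated is not specified in the text, so this sketch simply averages whichever neighbors exist:

```python
import numpy as np

def smooth_map(binned):
    """Average each (m, n) cell of a binned representational map with its
    four neighbors (m-1, n), (m+1, n), (m, n-1), (m, n+1).
    Edge cells average only the cells that exist (our assumption)."""
    k = binned.shape[0]
    out = np.empty_like(binned, dtype=float)
    for m in range(k):
        for n in range(k):
            cells = [binned[m, n]]
            if m > 0:
                cells.append(binned[m - 1, n])
            if m < k - 1:
                cells.append(binned[m + 1, n])
            if n > 0:
                cells.append(binned[m, n - 1])
            if n < k - 1:
                cells.append(binned[m, n + 1])
            out[m, n] = np.mean(cells)
    return out
```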
For statistical analysis of the representational maps, we computed a DCI as a
measure of the relationship between activation similarity and distance in each
representational property. DCI was computed using a similar GLM regression of
the trial-based RSM that included all three predictors, corresponding to distance
in visual features, animacy and valence, and including regressors of no interest
(Supplementary Fig. 3a,b). These regression coefficients represent the extent to which RSMs were predicted by the distance in each of the three properties, and were thus termed the distance-similarity index (final DCIs were calculated by multiplying the GLM regression coefficients by −1). DCIs for each property in each ROI were computed for each participant and then submitted to statistical analysis. All DCI analyses used one-sided tests, since negative DCIs are not meaningful, whereas all other analyses used two-sided tests. This choice was validated retrospectively, as we did not observe any voxels with significantly negative effects in the searchlight analysis.
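A minimal sketch of the DCI computation, assuming the trial-pair correlation ranks and pairwise property distances have already been assembled as vectors (regressors of no interest are omitted for brevity):

```python
import numpy as np

def dci(rank_corrs, distances):
    """Distance-similarity index: regress rank-ordered trial-pair
    correlations on distance predictors (e.g., visual features, animacy,
    valence), sign-flipping the coefficients so that a larger DCI means
    similarity falls off more steeply with distance.

    rank_corrs: (n_pairs,) vector of correlation ranks
    distances:  (n_pairs, n_predictors) pairwise distance matrix
    """
    X = np.column_stack([np.ones(len(rank_corrs)), distances])
    beta, *_ = np.linalg.lstsq(X, rank_corrs, rcond=None)
    return -beta[1:]               # drop intercept, multiply by -1

def one_sided_t(dcis):
    """One-sample t statistic across participants (H1: DCI > 0)."""
    d = np.asarray(dcis, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```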
To examine cross-modal commonality of OFC affect representation, similar
GLM regressions were performed using either trial correlations in the gustatory experiment or trial correlations across the visual and gustatory experiments, with the correlations as responses and valence distance as the predictor. Regressors of no interest
coded differences in basic emotions, tastes, auto-correlation and sessions. To
directly illustrate the decrease in similarity with valence distance, we sorted the
rank-ordered correlations into five bins of valence distance for each participant
(Fig. 6a,b). One participant was excluded from these gustatory and gustatory × visual analyses owing to a lack of data for the fifth bin of valence distance in the gustatory experiment. However, this participant was included in all other analyses, including the DCI analyses.
Searchlight analysis. For information-based searchlight analyses, we used a
5 × 5 × 5 voxel searchlight. In a given cube, correlation coefficients of the activation patterns of each trial combination (128 × 127/2 pairs) were calculated and subjected to GLM analysis with correlations as the responses and differences in visual
features, animacy and valence scores as predictors (Fig. 5a,b). Searchlight analysis
of individual visual features (for example, local contrast, hue, number of
edges) used distances of a single feature as predictors (Supplementary Table 3
and Supplementary Fig. 5). Individual participants’ data were spatially
smoothed (8-mm full width at half maximum) and then subjected to a random-effects group analysis.
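The cube extraction at the heart of the searchlight can be sketched as below; for brevity this toy version stores the mean pairwise trial correlation per cube rather than fitting the full distance GLM of the preceding section:

```python
import numpy as np

def searchlight_map(vol, mask, radius=2):
    """Information-based searchlight sketch: for each in-mask voxel, take
    the (2*radius + 1)^3 cube of surrounding voxels (5 x 5 x 5 for
    radius=2), correlate the trial patterns within it, and store a
    summary statistic for that voxel.

    vol:  (n_trials, X, Y, Z) per-trial t values
    mask: (X, Y, Z) boolean brain mask
    """
    n, X, Y, Z = vol.shape
    out = np.full((X, Y, Z), np.nan)
    for x in range(radius, X - radius):
        for y in range(radius, Y - radius):
            for z in range(radius, Z - radius):
                if not mask[x, y, z]:
                    continue
                cube = vol[:, x - radius:x + radius + 1,
                              y - radius:y + radius + 1,
                              z - radius:z + radius + 1].reshape(n, -1)
                r = np.corrcoef(cube)          # trial-by-trial RSM in cube
                iu = np.triu_indices(n, k=1)
                out[x, y, z] = r[iu].mean()    # stand-in for the GLM step
    return out
```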
Searchlight analyses examining modality-specific and supramodal valence information were conducted on within-gustatory and across-visual-and-gustatory data. In a given cube, correlation coefficients of the activation patterns of each trial combination (128 × 127/2 for visual, 100 × 99/2 for gustatory, and 128 × 100 across visual and gustatory) were calculated and subjected to GLM analysis with correlations as the
responses and differences in valence as predictors. Based on these three searchlight
results (within visual, within gustatory, and across visual and gustatory), we explored
brain regions representing visual-specific (P < 0.001 uncorrected and FDR 0.05
for visual, P > 0.05 for gustatory and across visual and gustatory), gustatory-specific
(P < 0.001 uncorrected and FDR 0.05 for gustatory, P > 0.05 for visual and across
visual and gustatory) and modality-independent valence (P < 0.01 uncorrected for all three conditions, which clears a threshold of P < 0.05 familywise error (FWE) when assuming independence of the three conditions).
Cross-participant classification. We examined cross-participant commonality of
visual items by comparing each participant’s trial-based RSM to a trial-based RSM
estimated by averaging across all other participants’ RSM. Thus, each target picture
was represented by 127 values that related it to all other picture trials. We then tested whether the target picture's representation was more similar to its own estimate than to all other pictures' representations (similarity was computed as the correlation of the
r values; Supplementary Fig. 6). Classification performance was calculated as the
percentage success of all pairwise comparisons (50% chance).
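A sketch of the pairwise leave-one-out item classification; excluding pictures j and k themselves from the compared score rows is our simplification to avoid trivial self-correlation entries:

```python
import numpy as np

def item_classification(target_rsm, group_rsm):
    """Leave-one-out cross-participant item classification sketch.

    Each picture j is represented by its row of correlations to all other
    pictures. A (j, k) comparison is successful when picture j's row in
    the held-out participant correlates more strongly with j's
    group-estimated row than with k's. Returns the percentage of
    successful pairwise comparisons (chance = 50%).
    """
    n = target_rsm.shape[0]
    wins, total = 0, 0
    for j in range(n):
        for k in range(n):
            if j == k:
                continue
            idx = [i for i in range(n) if i != j and i != k]
            tj = target_rsm[j, idx]
            r_jj = np.corrcoef(tj, group_rsm[j, idx])[0, 1]
            r_jk = np.corrcoef(tj, group_rsm[k, idx])[0, 1]
            wins += r_jj > r_jk
            total += 1
    return 100.0 * wins / total
```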
For cross-participant commonality of affect representations of visual
items, we used a similar leave-one-out procedure, except that a target picture’s
127-score relationship to other pictures was now treated as 127 scores related in
valence space. As an example, consider target picture j, rated as positive = 5, negative = 1 by one participant. The first of picture j's 127 scores,
r(j,1), relates it to picture 1, but because we are interested in valence, this score
cannot be directly compared to the same r(j,1) score in another participant, as
that participant’s valence ratings to the same two pictures are different. Thus,
to estimate the valence representation of picture j using other participants’ data
directly, we computed valence-based RSMs for both positivity and negativity, in
which effects of no interest were regressed out. That is, the remaining participants' trial-based RSMs were first submitted to GLM decomposition to regress out effects of no interest, then organized by their positive and negative valence scores, and separately combined into 7 × 7 positive and 7 × 7 negative valence RSMs, where each (m, n) cell was computed as the average of the cells [(m − 1, n), (m + 1, n), (m, n − 1), (m, n + 1) and (m, n)]. The classification of picture j's
valence was then tested by looking up the 127 scores in the valence RSMs cor-
responding to the valence mapping. If the correlation of these scores was higher for picture j's valence than for another picture k's valence, the classification was successful (Supplementary Fig. 7). Classification performance was calculated as
the percentage success of all pairwise comparisons (50% chance). Given that the
across-participant MVPA that we employed cannot discriminate trials with the
same valence, classification accuracies for the closest distance were always 50%.
For commonality of valence representations for the gustatory experiment (Fig. 6b),
we applied the same procedure as above on the gustatory × gustatory similar-
ity scores and their valence ratings. We further investigated the cross-modality
commonality of the OFC affective representations by testing whether affect repre-
sentations in the visual experiment can be predicted by other participants’ affect
representations in the gustatory experiment (visual × gustatory) or vice versa
(gustatory × visual) (Fig. 6b).
Calculation of arousal. Following previously described methods37, self-reported
and autonomic indices of arousal can be estimated through the addition of inde-
pendent unipolar positive and negative valence responses. Valence categories were
defined from the distribution in Supplementary Figure 1 (negative, −6 to −2; neutral, −1 to 1; positive, 2 to 6). According to these definitions, positive (mean = 5.1, s.d. = 1.4) and negative (mean = 4.8, s.d. = 1.4) stimuli were similarly arousing compared to neutral stimuli (mean = 2.5, s.d. = 1.7) in experiment 1. Similar arousal values were obtained in experiment 2: positive (mean = 5.4, s.d. = 1.5), negative (mean = 5.3, s.d. = 1.6) and neutral (mean = 2.5, s.d. = 1.5).
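The arousal estimate and the valence categorization reduce to simple arithmetic on the two unipolar ratings, sketched here (the cutoffs follow the definitions above):

```python
def arousal(pos, neg):
    """Arousal estimated as the sum of the independent unipolar positive
    and negative valence ratings (ref. 37 in the text)."""
    return pos + neg

def valence_category(pos, neg):
    """Bipolar valence = positivity minus negativity, categorized with
    the cutoffs defined in the text."""
    v = pos - neg
    if v <= -2:
        return "negative"
    if v >= 2:
        return "positive"
    return "neutral"
```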
Statistics. We analyzed the data assuming normal distributions. To examine whether DCIs were significantly above zero, we used one-sample t tests. To examine differences in DCIs, we used paired t tests. A Shapiro-Wilk test was applied to
examine whether samples had a normal distribution. In case of a non-normal
distribution, a nonparametric test (Wilcoxon signed-rank test) was applied to
confirm whether similar results were obtained. For ANOVA, we also examined
sphericity by Mauchly’s test. Where the assumption of sphericity was violated, we
applied Greenhouse-Geisser correction. Multiple comparison corrections were
applied to within-ROI and between-ROIs analyses, using Bonferroni correc-
tion. For Figure 3c, multiple comparison correction was applied to within-ROI
(3 (feature) × 3 (ROI) = 9) and between-ROI (3 (feature) × 3 (ROI-pair) = 9)
comparisons. For Figure 4b, multiple comparison correction was applied based
on within-ROI (3 (feature) × 4 (ROI) = 12) and between-ROIs (3 (feature) × 6
(ROI-pair) = 18) comparisons. For Figure 5d, further multiple comparison cor-
rection was not applied, as the data survived whole brain multiple comparison.
A Supplementary Methods Checklist is available.
50. Chapman, H.A., Kim, D.A., Susskind, J.M. & Anderson, A.K. In bad taste: evidence
for the oral origins of moral disgust. Science 323, 1222–1226 (2009).
51. Tzourio-Mazoyer, N. et al. Automated anatomical labeling of activations in SPM
using a macroscopic anatomical parcellation of the MNI MRI single-subject brain.
Neuroimage 15, 273–289 (2002).
52. Eickhoff, S.B. et al. A new SPM toolbox for combining probabilistic cytoarchitectonic
maps and functional imaging data. Neuroimage 25, 1325–1335 (2005).
53. Reinhard, E.S.M., Shirley, P. & Ferwerda, J. Photographic tone reproduction for
digital images. ACM Trans. Graph. 21, 267–276 (2002).
54. Itti, L. & Koch, C. Computational modelling of visual attention. Nat. Rev. Neurosci.
2, 194–203 (2001).
55. Walther, D. & Koch, C. Modeling attention to salient proto-objects. Neural Netw.
19, 1395–1407 (2006).
56. Naselaris, T., Stansbury, D.E. & Gallant, J.L. Cortical representation of animate and
inanimate objects in complex natural scenes. J. Physiol. Paris 106, 239–249 (2012).
57. Op de Beeck, H., Wagemans, J. & Vogels, R. Inferotemporal neurons represent
low-dimensional configurations of parameterized shapes. Nat. Neurosci. 4,
1244–1252 (2001).
... Fine-grained topography was also present within the midcingulate cortex. Coefficients exhibited a clear peak near the bank of the callosal sulcus (z = 4.06, Montreal Neurological Institute coordinate (MNI x,y,z ) = [6,8,26], P < 0.0001, q FDR < 0.05), whereas humans 21 . Finally, the nature of pleasurable stimuli accessible for study in animal models does not extend to many important modalities of human pleasure, such as music and humour. ...
... Due to these limitations, a growing number of researchers have turned to multivariate approaches to evaluate how affective variables are represented in human brain activity 25,26 . Unlike standard univariate analysis, multivariate methods are capable of estimating a spatial profile of activity within and across regions that characterizes a variable of interest 27,28 , even in cases where multiple neural populations overlap in a single region. ...
... Unlike standard univariate analysis, multivariate methods are capable of estimating a spatial profile of activity within and across regions that characterizes a variable of interest 27,28 , even in cases where multiple neural populations overlap in a single region. Indeed, pattern-based methods have revealed responses in orbitofrontal cortex and adjacent ventromedial prefrontal cortex that discriminate states of pleasure from displeasure 26 . Thus far, however, there is surprisingly limited evidence that neural populations in subcortical structures represent diverse pleasures using a common code, or that they form part of a distributed network mediated by opioidergic mechanisms positioned to influence learning and decision making as predicted by contemporary accounts of reward learning 4 . ...
Full-text available
Pleasure is a fundamental driver of human behaviour, yet its neural basis remains largely unknown. Rodent studies highlight opioidergic neural circuits connecting the nucleus accumbens, ventral pallidum, insula and orbitofrontal cortex as critical for the initiation and regulation of pleasure, and human neuroimaging studies exhibit some translational parity. However, whether activation in these regions conveys a generalizable representation of pleasure regulated by opioidergic mechanisms remains unclear. Here we use pattern recognition techniques to develop a human functional magnetic resonance imaging signature of mesocorticolimbic activity unique to states of pleasure. In independent validation tests, this signature is sensitive to pleasant tastes and affect evoked by humour. The signature is spatially co-extensive with mu-opioid receptor gene expression, and its response is attenuated by the opioid antagonist naloxone. These findings provide evidence for a basis of pleasure in humans that is distributed across brain systems.
... At the core of this cognitive process lies the human emotion system. [1][2][3] There are currently two prominent theories regarding the neural representation of emotional experiences: the ''locationism'' and ''constructionism'' models. The former proposes that emotional experiences are composed of a finite set of discrete basic emotions (e.g., happiness, anger, fear, and surprise), each of which is encoded independently by highly specialized brain regions. ...
... Furthermore, we found that semantic features appeared to have stronger predictive power for neural activity in the lateral occipital cortex (LOC) than emotion features, but this does not mean that LO regions are unrelated to emotion processing. Notably, LOC has also been consistently identified in neuroimaging studies using affective visual stimuli 45,46 (but not in other modalities 2,47,48 ). ...
Full-text available
Affective neuroscience seeks to uncover the neural underpinnings of emotions that humans experience. However, it remains unclear whether an affective space underlies the discrete emotion categories in the human brain, and how it relates to the hypothesized affective dimensions. To address this question, we developed a voxel-wise encoding model to investigate the cortical organization of human emotions. Results revealed that the distributed emotion representations are constructed through a fundamental affective space. We further compared each dimension of this space to 14 hypothesized affective dimensions, and found that many affective dimensions are captured by the fundamental affective space. Our results suggest that emotional experiences are represented by broadly spatial overlapping cortical patterns and form smooth gradients across large areas of the cortex. This finding reveals the specific structure of the affective space and its relationship to hypothesized affective dimensions, while highlighting the distributed nature of emotional representations in the cortex.
... Although some human functional Magnetic Resonance Imaging (fMRI) studies have identified a set of brain regions that respond to both positive and negative emotions (12)(13)(14), these studies have used non-painful aversive stimuli rather than painful stimuli. A few animal studies have reported that the amygdala (15) and the anterior cingulate cortex (16) contain neurons that respond to both pain and pleasure, but these findings have been limited to specific local regions, and their generalizability to humans has not been well established. ...
... ; doi: bioRxiv preprint the affective intensity model, whereas the limbic and default mode networks were correlated with the valence model. These findings are in line with previous studies suggesting that the ventral attention network is important for detecting and identifying important and relevant stimuli given one's current contexts (31,34,48), while the limbic and default mode networks are important for modality-general value information (12,49) and subjective affective values (50)(51)(52). Thus, our results suggest that the affective intensity and valence are processed with distinct brain circuits that are co-localized but connected to distinct large-scale brain systems. ...
Full-text available
Pleasure and pain are two opposites that compete and influence each other, implying the existence of brain systems that integrate them to generate modality-general affective experiences. Here, we examined the brain's general affective codes (i.e., affective valence and intensity) across sustained pleasure and pain through an fMRI experiment (n = 58). We found that the distinct sub-populations of voxels within the ventromedial and lateral prefrontal cortices, the orbitofrontal cortex, the anterior insula, and the amygdala were involved in decoding affective valence versus intensity, which was replicated in an independent test dataset (n = 62). The affective valence and intensity models were connected to distinct large-scale brain networks; the intensity model to the ventral attention network and the valence model to the limbic and default mode networks. Overall, this study identified the brain representations of affective valence and intensity across pleasure and pain, promoting the systems-level understanding of human affective experiences.
... Pandas package in Python was used to construct an individual-level array of voxel-wise estimates for each condition. With this array, we calculated individual voxel-wise representational similarity coefficients using Pearson correlation while six motions were regressed out as confounds (e.g., Chikazoe et al., 2014; see Figure 3a). This resulted in an average voxel-wise pattern F I G U R E 3 Schematic of representational similarity analysis procedure. ...
... To do so, we used one-sample t-tests to examine whether the neural representational similarity for each condition at the voxel level is significantly above zero. A similar analysis strategy can be found in previous studies (e.g., Chikazoe et al., 2014;Kriegeskorte et al., 2008; The neural similarity at the voxel level between decision-making for the self and best friend (i.e., self-peer overlap) was not significantly different from 0 in any ROI (left NACC: t = .12, ...
Full-text available
Adolescence is marked by increased peer influence on risk taking; however, recent literature suggests enormous individual variation in peer influence susceptibility to risk-taking behaviors. The current study uses representation similarity analysis to test whether neural similarity between decision-making for self and peers (i.e., best friends) in a risky context is associated with individual differences in self-reported peer influence susceptibility and risky behaviors in adolescents. Adolescent participants (N = 166, Mage = 12.89) completed a neuroimaging task in which they made risky decisions to receive rewards for themselves, their best friend, and their parents. Adolescent participants self-reported peer influence susceptibility and engagement in risk-taking behaviors. We found that adolescents with greater similarity in nucleus accumbens (NACC) response patterns between the self and their best friend reported greater susceptibility to peer influence and increased risk-taking behaviors. However, neural similarity in ventromedial prefrontal cortex (vmPFC) was not significantly associated with adolescents' peer influence susceptibility and risk-taking behaviors. Further, when examining neural similarity between adolescents' self and their parent in the NACC and vmPFC, we did not find links to peer influence susceptibility and risk-taking behaviors. Together, our results suggest that greater similarity for self and friend in the NACC is associated with individual differences in adolescents' peer influence susceptibility and risk-taking behaviors.
... As shown in Figs. 2a and 2b, the two models showed a weak correlation between the unthresholded whole-brain patterns of predictive weights, r = 0.120, while the medial prefrontal cortex (mPFC), which is known to be important both for self-referential and valence information processing [40][41][42][43] , showed a weak, but larger correlation (r = 0.269) than the whole brain, suggesting the existence of overlapping representations between self-relevance and valence within the mPFC. For example, both self-relevance and valence models thresholded at p < 0.001 (two-tailed, bootstrap test) showed negative weights within the dorsomedial prefrontal cortex (dmPFC, Brodmann area [BA] 9) and the subgenual anterior cingulate cortex (sgACC, BA25) and positive weights in the ventromedial prefrontal cortex (vmPFC, BA11). ...
... This could be the case for many other naturalistic situations where we do not have external tasks. In addition, the importance of the limbic network in emotional valence is also consistent with previous studies reporting that the orbitofrontal cortex (OFC) and temporal pole (TP), which are the main components of the limbic network, were important for valence processing 43,54 . We also found that the left TPJ, along with the dmPFC (see Fig. 3b), were important for valence. ...
Full-text available
The contents of spontaneous thought and their dynamics are important factors for one's personality traits and mental health. However, they are difficult to assess because spontaneous thought occurs voluntarily without conscious constraints. Here, we aimed to decode two important content dimensions of spontaneous thought—self-relevance and valence—directly from functional Magnetic Resonance Imaging (fMRI) signals. To train brain decoders, we induced a wide range of levels of self-relevance and emotional valence using individually generated personal stories as well as stories written by others to mimic narrative-like spontaneous thoughts ( n = 49). We then tested the brain decoders on two resting-state fMRI datasets ( n = 49 and 90) with and without intermittent thought sampling, achieving significant predictions. The default mode and ventral attention networks were important contributors to the predictions. Overall, this study paves the way for the brain decoding of spontaneous thought and its use for clinical applications.
... Among these, the orbitofrontal cortex (OFC) is particularly important, as damage or disruption consistently alters value-based choice behavior, suggesting that OFC neurons perform choice-relevant computations (47,48). Integrated value signals are commonly found within OFC, including in single unit firing rates (7-9), population codes (49,50), field potentials (50-52), and fMRI BOLD signals (53,54), and this has been taken as evidence that integrated value is the key decision variable in OFC. However, multiple labs consistently report neurons in monkey OFC (primarily area 13) that encode the value of unique attributes (7,(55)(56)(57)(58)(59), and similar signals can be found in human fMRI BOLD (60). ...
In value-based decisions, there are frequently multiple attributes, such as cost, quality, or quantity, that contribute to the overall goodness of an option. Since one option may not be better in all attributes at once, the decision process should include a means of weighing relevant attributes. Most decision-making models solve this problem by computing an integrated value, or utility, for each option from a weighted combination of attributes. However, behavioral anomalies in decision-making, such as context effects, indicate that other attribute-specific computations might be taking place. Here, we tested whether rhesus macaques show evidence of attribute-specific processing in a value-based decision-making task. Monkeys made a series of decisions involving choice options comprising a sweetness and probability attribute. Each attribute was represented by a separate bar with one of two mappings between bar size and the magnitude of the attribute (i.e., bigger = better or bigger = worse). We found that translating across different mappings produced selective impairments in decision-making. When like attributes differed, monkeys were prevented from easily making direct attribute comparisons, and choices were less accurate and preferences were more variable. This was not the case when mappings of unalike attributes within the same option were different. Likewise, gaze patterns favored transitions between like attributes over transitions between unalike attributes of the same option, so that like attributes were sampled sequentially to support within-attribute comparisons. Together, these data demonstrate that value-based decisions rely, at least in part, on directly comparing like attributes of multi-attribute options.
Significance Statement: Value-based decision-making is a cognitive function impacted by a number of clinical conditions, including substance use disorder and mood disorders.
Understanding the neural mechanisms, including online processing steps involved in decision formation, will provide critical insights into decision-making deficits characteristic of human psychiatric disorders. Using rhesus monkeys as a model species capable of complex decision-making, this study shows that decisions involve a process of comparing like features, or attributes, of multi-attribute options. This is contrary to popular models of decision-making in which attributes are first combined into an overall value, or utility, to make a choice. Therefore, these results serve as an important foundation for establishing a more complete understanding of the neural mechanisms involved in forming complex decisions.
... For example, Rolls et al. reported that, besides activation of primary olfactory regions by both pleasant and unpleasant odors, the medio-rostral OFC was mostly activated by pleasant odors, whereas BOLD signal appeared in the left lateral OFC during unpleasant odor stimulation [13]. In addition, other studies indicated that the OFC and amygdala were involved in odor valence [14,15]. In contrast to pleasant odors, studies have emphasized activation of the anterior cingulate (ACC) and paracingulate cortex in response to unpleasant stimulation [11,16]. ...
Purpose: The olfactory system is a vital sensory system in mammals, enabling them to interact with their environment. Anosmia, the complete loss of the ability to smell, which can be caused by injuries, is of interest to examiners aiming to diagnose patients. The sniffing test is currently used to examine whether an individual suffers from anosmia; however, functional Magnetic Resonance Imaging (fMRI) provides unique information about the structure and function of different areas of the human brain, and this noninvasive method could therefore be used to locate the olfactory-related regions of the brain. Materials and Methods: In this study, we recruited 31 healthy and anosmic individuals and investigated the neural Blood Oxygenation Level Dependent (BOLD) responses in the olfactory cortices to two odor stimuli, rose and eucalyptus, using a 3T MR scanner. Results: Comparing the two groups, we observed a network of brain areas that was more active in normal individuals when smelling the odors. In addition, a number of brain areas showed a decline in activation during the odor stimuli, which we hypothesize reflects deactivation due to resource allocation. Conclusion: This study demonstrated alterations in brain activity between normal individuals and anosmic patients when smelling odors, and could potentially support better anosmia diagnosis in the future.
... Its arrangement begins from multiple satellites of unimodal sensory information and then converges transmodally toward its terminus, the default mode network (DMN). The DMN is an integrative network of regions heavily implicated in self-referential processing (Raichle, 2015) that incorporates abstract, domain-general value processing centers (Chikazoe et al., 2014). In short, the PG appears to encode an axis of information hierarchy from unimodal sensory information to amodal conceptual information that culminates at the seat of subjective information processing. ...
As subjective experiences go, beauty matters. Although aesthetics has long been a topic of study, research in this area has not resulted in a level of interest and progress commensurate with its import. Here, we briefly discuss two recent advances, one computational and one neuroscientific, and their pertinence to aesthetic processing. First, we hypothesize that deep neural networks provide the capacity to model representations essential to aesthetic experiences. Second, we highlight the principal gradient as an axis of information processing that is potentially key to examining where and how aesthetic processing takes place in the brain. In concert with established neuroimaging tools, we suggest that these advances may cultivate a new frontier in the understanding of our aesthetic experiences.
... This demonstrates that V5/MT+ is unique in that it represents both real visual motion and the valence of the images (dissociating positive from negative valenced images). Additionally, the identification of valence-based processing by multivariate but not univariate approaches suggests that these representations are coded by activity-pattern changes across multiple voxels (Kriegeskorte et al., 2008), likely indicating population coding of this information (Chikazoe et al., 2014). ...
While a delicious dessert being presented to us may elicit strong feelings of happiness and excitement, the same treat falling slowly away can lead to sadness and disappointment. Our emotional response to the item depends on its visual motion direction. Despite this importance, it remains unclear whether (and how) cortical areas devoted to decoding motion direction represent or integrate emotion with perceived motion direction. Motion-selective visual area V5/MT+ sits, both functionally and anatomically, at the nexus of the dorsal and ventral visual streams. These pathways, however, differ in how they are modulated by emotional cues. The current study was designed to disentangle how emotion and motion perception interact, as well as to use emotion-dependent modulation of visual cortices to understand the relation of V5/MT+ to canonical processing streams. During functional magnetic resonance imaging (fMRI), approaching, receding, or static motion after-effects (MAEs) were induced on stationary positive, negative, and neutral stimuli. An independent localizer scan was conducted to identify the visual-motion area V5/MT+. Through univariate and multivariate analyses, we demonstrated that emotion representations in V5/MT+ share a response profile more similar to that observed in ventral than in dorsal visual structures. Specifically, V5/MT+ and ventral structures were sensitive to the emotional content of visual stimuli, whereas dorsal visual structures were not. Overall, this work highlights the critical role of V5/MT+ in the representation and processing of visually acquired emotional content. It further suggests a role for this region in utilizing affectively salient visual information to augment motion perception of biologically relevant stimuli.
... According to this definition, an emotion requires awareness of one's self, and the term emotion would thus no longer be applicable to many other species. Other authors argue that consciousness is merely one property of emotions and their processing, added on top of the basic properties of emotion [9][10][11]. ...
Abstract: Human facial expression is unique in its ability to express our emotions and convey them to other people. The facial expression of basic emotions is very similar across cultures and also shows commonalities with other mammals, pointing to a common genetic origin of the link between facial expression and emotion. More recent studies, however, also reveal cultural influences and differences. The recognition of emotions from facial expressions, as well as the process of expressing one's own emotions facially, takes place in a highly complex cerebral network. Owing to the complexity of this cerebral processing system, numerous neurological and psychiatric disorders can substantially disrupt the coupling of facial expression and emotion. Wearing masks also limits our ability to convey and recognize emotions through facial expressions. Facial expressions, however, can convey not only "genuine" emotions but also acted ones. They thus open up the possibility of displaying socially desired expressions and of deliberately feigning emotions. Such deceptions are usually imperfect, however, and can be accompanied by brief facial movements that reveal the emotions actually present (microexpressions). These microexpressions are of very short duration and often barely perceptible to humans, but they are the ideal field of application for computer-assisted analysis. In recent years, this automatic identification of microexpressions has not only attracted scientific attention; its use is also being tested in security-relevant settings. This article summarizes the current state of knowledge on facial expressions and emotions.
The orientation of a large grating can be decoded from V1 functional magnetic resonance imaging (fMRI) data, even at low resolution (3-mm isotropic voxels). This finding has suggested that columnar-level neuronal information might be accessible to fMRI at 3T. However, orientation decodability might alternatively arise from global orientation-bias maps. Such global maps across V1 could result from bottom-up processing, if the preferences of V1 neurons were biased toward particular orientations (e.g. radial from fixation, or cardinal, i.e. vertical or horizontal). Global maps could also arise from local recurrent or top-down processing, reflecting pre-attentive perceptual grouping, attention spreading, or predictive coding of global form. Here we investigate whether fMRI orientation decoding with 2-mm voxels requires (a) globally coherent orientation stimuli and/or (b) global-scale patterns of V1 activity. We used opposite-orientation gratings (balanced about the cardinal orientations) and spirals (balanced about the radial orientation), along with novel patch-swapped variants of these stimuli. The two stimuli of a patch-swapped pair have opposite orientations everywhere (like their globally coherent parent stimuli). However, the two stimuli appear globally similar, a patchwork of opposite orientations. We find that all stimulus pairs are robustly decodable, demonstrating that fMRI orientation decoding does not require globally coherent orientation stimuli. Furthermore, decoding remained robust after spatial high-pass filtering for all stimuli, showing that fine-grained components of the fMRI patterns reflect visual orientations. Consistent with previous studies, we found evidence for global radial and vertical bias maps in V1. However, these were weak or absent for patch-swapped stimuli, suggesting that global bias maps depend on globally coherent orientations and might arise through recurrent or top-down processes related to the perception of global form.
The cognitive concept of representation plays a key role in theories of brain information processing. However, linking neuronal activity to representational content and cognitive theory remains challenging. Recent studies have characterized the representational geometry of neural population codes by means of representational distance matrices, enabling researchers to compare representations across stages of processing and to test cognitive and computational theories. Representational geometry provides a useful intermediate level of description, capturing both the information represented in a neuronal population code and the format in which it is represented. We review recent insights gained with this approach in perception, memory, cognition, and action. Analyses of representational geometry can compare representations between models and the brain, and promise to explain brain computation as transformation of representational similarity structure.
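The representational-geometry workflow described here (computing representational dissimilarity matrices from condition-wise population responses and comparing them across regions or models by rank correlation) can be sketched in a few lines. The data and "regions" below are synthetic placeholders, and the plain-numpy Spearman correlation assumes no tied values:

```python
# Sketch of representational similarity analysis: build an RDM per region
# (1 - correlation between condition patterns), then compare the two RDMs'
# off-diagonal entries with a rank (Spearman) correlation.
import numpy as np

def rdm(patterns):
    """1 - Pearson correlation between all pairs of condition patterns."""
    return 1.0 - np.corrcoef(patterns)

def upper_tri(m):
    """Off-diagonal upper-triangle entries (the informative part of an RDM)."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

def spearman(x, y):
    """Spearman correlation via double argsort ranking (assumes no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
base = rng.normal(size=(13, 100))   # 13 conditions x 100 units, shared geometry
region1 = base + rng.normal(scale=0.5, size=base.shape)
region2 = base + rng.normal(scale=0.5, size=base.shape)

sim = spearman(upper_tri(rdm(region1)), upper_tri(rdm(region2)))
print(round(sim, 2))
```

Because RDMs abstract away from which specific units carry the code, this comparison works between any two systems with matched conditions, e.g., a brain region and a model layer.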
Numerous emotion researchers have asked their study participants to attend to the distinct feelings of arousal and valence, and self-report and physiological data have supported the independence of the two. We examined whether this dissociation reflects introspection about distinct emotional qualia or the way in which valence is measured. With either valence (Experiment 1) or arousal (Experiment 2) as the primary focus, when valence was measured using a bipolar scale (ranging from negative to positive), it was largely dissociable from arousal. By contrast, when two separate unipolar scales of pleasant and unpleasant valence were used, their sum was equivalent to feelings of arousal and its autonomic correlates. The association (or dissociation) of valence and arousal was related to the estimation (or nonestimation) of mixed-valence experiences, which suggests that the distinction between valence and arousal may reflect less the nature of emotional experience and more how it is measured. These findings further encourage use of unipolar valence scales in psychological measurement.
We often have to make choices among multiattribute stimuli (e.g., a food that differs on its taste and health). Behavioral data suggest that choices are made by computing the value of the different attributes and then integrating them into an overall stimulus value signal. However, it is not known whether this theory describes the way the brain computes the stimulus value signals, or how the underlying computations might be implemented. We investigated these questions using a human fMRI task in which individuals had to evaluate T-shirts that varied in their visual esthetic (e.g., color) and semantic (e.g., meaning of logo printed in T-shirt) components. We found that activity in the fusiform gyrus, an area associated with the processing of visual features, correlated with the value of the visual esthetic attributes, but not with the value of the semantic attributes. In contrast, activity in posterior superior temporal gyrus, an area associated with the processing of semantic meaning, exhibited the opposite pattern. Furthermore, both areas exhibited functional connectivity with an area of ventromedial prefrontal cortex that reflects the computation of overall stimulus values at the time of decision. The results provide supporting evidence for the hypothesis that some attribute values are computed in cortical areas specialized in the processing of such features, and that those attribute-specific values are then passed to the vmPFC to be integrated into an overall stimulus value signal to guide the decision.
In an effort to define human cortical gustatory areas, we reviewed functional neuroimaging data for which coordinates standardized in Talairach proportional space were available. We observed a wide distribution of peaks within the insula and the parietal and frontal opercula, suggesting multiple gustatory regions within this cortical area. Multiple peaks also emerged in the orbitofrontal cortex. However, only two peaks, both in the right hemisphere, were observed in the caudolateral orbitofrontal cortex, the region likely homologous to the secondary taste area described in monkeys. Overall, significantly more peaks originated from the right hemisphere, suggesting an asymmetrical cortical representation of taste favoring the right hemisphere.